The Bloomsbury Companion to Language Industry Studies 9781350024939, 9781350024960, 9781350024953

This volume provides a comprehensive overview of the key issues shaping the language industry, including translation, in


English Pages [419] Year 2020



Table of contents :
Title Page
Copyright Page
Contents
Contributors
Foreword
Language industry studies?
Joining the dots
Visibility and viability
The T&I ecosystem
Notes
Reference
Chapter 1: Introduction
1. Defining and describing the language industry
2. Proliferation of the language industry
3. The ripple effect of proliferation
4. Researching the language industry
Notes
References
Chapter 2: Core research questions and methods
1. Planning research and generating research questions
2. Research methods in translation and interpreting
3. Data types, analysis and interpretation
4. Ethical considerations
5. Research challenges
6. Concluding remarks
Note
References
Chapter 3: Researching workplaces
1. Introduction
2. Focal points in research
3. Informing research through industry
4. Informing industry through research
5. Concluding remarks
References
Chapter 4: Translators’ roles and responsibilities
1. Introduction
2. Research focal points
3. Informing research through the industry
4. Informing the industry through research
5. Concluding remarks
Notes
References
Chapter 5: Interpreters’ roles and responsibilities
1. Introduction
2. Situated cognition as a joint paradigm for both conference and community interpreting
3. Research focal points
4. Informing research through the industry
5. Informing the industry through research
6. Concluding remarks
Note
References
Chapter 6: Non-professional interpreting and translation (NPIT)
1. Introduction
2. Research focal points
3. Informing research through the industry
4. Informing the industry through research
5. Concluding remarks
Notes
References
Chapter 7: Tailoring translation services for clients and users
1. Introduction
2. Research focal points
3. Informing research through the industry
4. Informing the industry through research
5. Concluding remarks
References
Chapter 8: Professional translator development from an expertise perspective
1. Introduction
2. Research focal points
3. Informing research through the language industry
4. Informing the industry through research
5. Concluding remarks
References
Chapter 9: Training and pedagogical implications
1. Introduction
2. Research focal points
3. Informing research through the industry
4. Informing the industry through research
5. Concluding remarks
Notes
References
Chapter 10: Audiovisual translation
1. Introduction
2. Research focal points
3. Informing research through the industry
4. Informing the industry through research
5. Concluding remarks
Notes
References
Chapter 11: Audiovisual media accessibility
1. Introduction
2. Research focal points
3. Informing research through the industry
4. Informing the industry through research
5. Concluding remarks
Acknowledgements
Notes
References
Chapter 12: Terminology management
1. Introduction
2. Research focal points
3. Informing research through the industry
4. Informing the industry through research
5. Concluding remarks
Notes
References
Chapter 13: Translation technology – past, present and future
1. Introduction
2. Focal points in research and development
3. Informing research through industry
4. Informing industry through research
5. Concluding remarks
Notes
References
Chapter 14: Machine translation: Where are we at today?
1. Introduction
2. The rise and fall of different MT paradigms
3. Is NMT the new state-of-the-art?
4. Informing research through the industry
5. Informing the industry through research
6. Concluding remarks
Acknowledgements
Notes
References
Chapter 15: Pre-editing and post-editing
1. Introduction
2. Research focal points
3. Informing research through the industry
4. Informing the industry through research
5. Concluding remarks
Funding
Notes
References
Chapter 16: Advances in interactive translation technology
1. Introduction
2. Research focal points
3. Informing research through the industry
4. Informing the industry through research
5. Concluding remarks
Notes
References
A–Z key terms and concepts
1. Adaptive expertise
2. Automatic evaluation metrics
3. Barrier-free
4. CAT tools
5. Competence
6. Controlled language
7. Deliberate practice
8. Ergonomics (cognitive, physical, organizational)
9. English as a lingua franca
10. Fansubbing and fandubbing
11. Human–computer interaction
12. Intercultural mediation
13. Journalation
14. Language brokering
15. Language service providers (LSP)
16. Localization
17. Machine learning
18. Multimodal translation
19. Neural MT
20. Observational methods
21. Product-oriented and process-oriented research
22. Professional ethics
23. Project managers
24. Respeaking
25. Routinized expertise
26. Rule-based MT
27. Self-concept
28. Statistical MT
29. Transcreation
30. Translation cycle
31. Usability
Notes
References
Index


The Bloomsbury Companion to Language Industry Studies

Bloomsbury Companions

The Bloomsbury Companion to Cognitive Linguistics, edited by Jeannette Littlemore and John R. Taylor
The Bloomsbury Companion to Contemporary Peircean Semiotics, edited by Tony Jappy
The Bloomsbury Companion to Lexicography, edited by Howard Jackson
The Bloomsbury Companion to M.A.K. Halliday, edited by Jonathan J. Webster
The Bloomsbury Companion to Phonetics, edited by Mark J. Jones and Rachael-Anne Knight
The Bloomsbury Companion to Stylistics, edited by Violeta Sotirova
The Bloomsbury Companion to Syntax, edited by Silvia Luraghi and Claudia Parodi
The Continuum Companion to Discourse Analysis, edited by Ken Hyland and Brian Paltridge (available in paperback as The Bloomsbury Companion to Discourse Studies)
The Continuum Companion to Historical Linguistics, edited by Silvia Luraghi and Vit Bubenik (available in paperback as The Bloomsbury Companion to Historical Linguistics)
The Continuum Companion to Phonology, edited by Nancy C. Kula, Bert Botma and Kuniya Nasukawa (available in paperback as The Bloomsbury Companion to Phonology)
The Continuum Companion to the Philosophy of Language, edited by Manuel García-Carpintero and Max Kölbel (available in paperback as The Bloomsbury Companion to the Philosophy of Language)
Continuum Companion to Second Language Acquisition, edited by Ernesto Macaro (available in paperback as The Bloomsbury Companion to Second Language Acquisition)

The Bloomsbury Companion to Language Industry Studies Edited by Erik Angelone, Maureen Ehrensberger-Dow and Gary Massey

BLOOMSBURY ACADEMIC Bloomsbury Publishing Plc 50 Bedford Square, London, WC1B 3DP, UK 1385 Broadway, New York, NY 10018, USA BLOOMSBURY, BLOOMSBURY ACADEMIC and the Diana logo are trademarks of Bloomsbury Publishing Plc First published in Great Britain 2020 Copyright © Erik Angelone, Maureen Ehrensberger-Dow, Gary Massey and Contributors, 2020 Erik Angelone, Maureen Ehrensberger-Dow and Gary Massey have asserted their right under the Copyright, Designs and Patents Act, 1988, to be identified as Editors of this work. Cover image: © Shutterstock All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage or retrieval system, without prior permission in writing from the publishers. Bloomsbury Publishing Plc does not have any control over, or responsibility for, any third-party websites referred to or in this book. All internet addresses given in this book were correct at the time of going to press. The author and publisher regret any inconvenience caused if addresses have changed or sites have ceased to exist, but can accept no responsibility for any such changes. A catalogue record for this book is available from the British Library. A catalog record for this book is available from the Library of Congress. ISBN: HB: 978-1-3500-2493-9 ePDF: 978-1-3500-2495-3 eBook: 978-1-3500-2494-6 Series: Bloomsbury Companions Typeset by Deanta Global Publishing Services, Chennai, India To find out more about our authors and books visit www.bloomsbury.com and sign up for our newsletters.

Contents

List of contributors  vi
Foreword  Henry Liu  viii

1 Introduction  Erik Angelone, Maureen Ehrensberger-Dow and Gary Massey  1
2 Core research questions and methods  Christopher D. Mellinger  15
3 Researching workplaces  Hanna Risku, Regina Rogl and Jelena Milošević  37
4 Translators’ roles and responsibilities  Christina Schäffner  63
5 Interpreters’ roles and responsibilities  Michaela Albl-Mikasa  91
6 Non-professional interpreting and translation (NPIT)  Claudia Angelelli  115
7 Tailoring translation services for clients and users  Kaisa Koskinen  139
8 Professional translator development from an expertise perspective  Gregory M. Shreve  153
9 Training and pedagogical implications  Catherine Way  179
10 Audiovisual translation  Jorge Díaz-Cintas  209
11 Audiovisual media accessibility  Anna Jankowska  231
12 Terminology management  Lynne Bowker  261
13 Translation technology – past, present and future  Jaap van der Meer  285
14 Machine translation: Where are we at today?  Andy Way  311
15 Pre-editing and post-editing  Ana Guerberof Arenas  333
16 Advances in interactive translation technology  Michael Carl and Emmanuel Planas  361
A–Z key terms and concepts  Erik Angelone, Maureen Ehrensberger-Dow and Gary Massey  387
Index  403

Contributors

Michaela Albl-Mikasa, Professor of Interpreting Studies, ZHAW Zurich University of Applied Sciences, Switzerland
Claudia Angelelli, Chair of Multilingualism and Communication, Heriot-Watt University, UK
Erik Angelone, Associate Professor of Translation Studies, Kent State University, USA
Lynne Bowker, Professor of Translation and Information Studies, University of Ottawa, Canada
Michael Carl, Professor and Director of the Center for Research and Innovation in Translation and Translation Technology (CRITT), Kent State University, USA
Jorge Díaz-Cintas, Professor of Translation Studies, University College London, UK
Maureen Ehrensberger-Dow, Professor of Translation Studies, ZHAW Zurich University of Applied Sciences, Switzerland
Ana Guerberof Arenas, Postdoctoral Researcher in Machine Translation, Dublin City University, Ireland
Anna Jankowska, Postdoctoral Researcher at the Chair for Translation Studies and Intercultural Communication, Jagiellonian University, Poland
Kaisa Koskinen, Professor of Translation Studies, Tampere University, Finland
Henry Liu, Lifetime Honorary Advisor to FIT and 13th FIT President (2014-2017), FIT (International Federation of Translators), France
Gary Massey, Professor of Translation Studies and Director of the Institute of Translation and Interpreting, ZHAW Zurich University of Applied Sciences, Switzerland
Christopher D. Mellinger, Assistant Professor of Spanish Interpreting and Translation Studies, University of North Carolina at Charlotte, USA
Jelena Milošević, Graduate Research and Teaching Assistant, University of Vienna, Austria
Emmanuel Planas, Acting-Director of the Global Liaison Office, National Institute of Informatics, Japan
Hanna Risku, Professor of Translation Studies, University of Vienna, Austria
Regina Rogl, Graduate Research and Teaching Assistant, University of Vienna, Austria
Christina Schäffner, Emeritus Professor of Translation Studies, Aston University, UK
Gregory M. Shreve, Emeritus Professor of Translation Studies, Kent State University, USA
Jaap Van der Meer, Director, TAUS (Translation Automation User Society), The Netherlands
Andy Way, Professor in Computing, Dublin City University, Ireland
Catherine Way, Associate Professor of Translation, University of Granada, Spain

Foreword

Language industry studies?

Why study the language industry? Beyond all the survey results and estimates, there is something more important and motivating than the effect of headline-grabbing facts and figures. In the age of globalization, it is important to have one’s message heard, widely and convincingly. Among other modes and vehicles, messages are conveyed by words, written and spoken, mostly in digital form. This requires governments, corporations and institutions to invest resources in multilingualism. It also means that the need for language services is suddenly beyond what individual local translators or interpreters can meet. History seems to be repeating itself as the language service sector has developed by traversing the well-trodden path of ‘industrialization’, from local artisans to mass production and outsourcing. Yet, the professional associations and research and teaching institutions appear to have continued to exist in almost parallel universes. Ninety-nine per cent of translation is estimated today to be mediated by machines, with Google alone translating 143 billion words a day. In their respective chapters, Andy Way and Jaap van der Meer highlight how the need for language services has outstripped the capacities of the profession and the industry. The same holds true for interpreting, audiovisual translation and the wider accessibility sector explored in the chapters penned by Claudia Angelelli, Jorge Díaz-Cintas and Anna Jankowska. And this is just for spoken languages. Sign languages are undergoing similar transitions, as witnessed by Jemina Napier and Lorraine Leeson’s (2016) excellent tome, Sign Language in Action. This translates into a simple but cruel reality – even with 100,000 members around the world in over 60 countries and the largest grouping of professional associations in the world, FIT remains a niche in the language service landscape.
The same holds true for the vast majority of research findings generated by translation, interpreting and terminology studies. Wider society does not wait for rigorous research or relevant data to become available. The industry’s disconnect with, and alienation of, the profession and the research community has at times translated into an almost protectionist attitude towards both it and the many non-professional translation and interpreting (T&I) practitioners, not dissimilar to the reactions of the disenfranchised populace in the wider Western world (see, for instance, the recent issue on multilingualism in the Cultus Journal).1

Joining the dots

This has culminated in professional translators and, to a lesser extent, interpreters and terminologists exhibiting a combination of fear and antagonism towards language service providers (LSPs), albeit after deriving some of the initial advantages of consolidation. Whereas some benefitted from terminology management in CAT tools, there was arguably an immediate antipathy to machine translation. But with the wider context firmly in mind, I, like the editors of this companion, ventured into terra nova by building links between the industry, the profession and academia. I was, for instance, the first FIT President to be invited to and to speak at a Globalization and Localization Association (GALA) conference, at the International Annual Meeting on Computer-assisted Translation and Terminology, formerly known as the Joint Inter-agency Meeting on Computer-Assisted Translation and Terminology (JIAMCATT), and at the Association internationale pour la promotion des technologies linguistiques (AsLing). In a similar vein, two of the editors of this companion hosted the 3rd International Conference on Non-professional Interpreting and Translation (NPIT3) in Winterthur in 2015 to reach out to the vast domain beyond the profession, strengthening the dialogue between non-professionals and academics. At the same time, Anthony Pym and I were motivated to reduce the distance between practitioners and academics across a number of fora as (Antipodean) presidents of the European Society for Translation Studies (EST) and FIT, respectively. By attempting to bring the industry and academics closer together, this volume sets out to join the dots.

Visibility and viability

Will industry stakeholders and the wider community of practice read this volume? Will research inform the industry? As boldly opined at the 8th EST Congress in Aarhus (2016), it is critical that research findings are relevant, visible and have an impact upon the bottom line of industrial players, which, I must emphasize, now face the same downward pressure on rates that individual artisanal practitioners have been confronted with. Much more important is the fact that the viability of translation studies and of translator, interpreter and terminologist training is intricately related to the viability of the wider profession and the language service sector. For a number of years now, there has been a steady stream of eager students enrolling in T&I programmes around the world, despite some actual and potential threats to income, employment opportunities and job security. The pressure is now squarely on training institutions in the age of neural machine translation (NMT). While it is relatively realistic to expect an experienced translator to become efficient in post-editing machine translation output, it is a totally different paradigm, touched on in the chapters by Gregory Shreve and Catherine Way, to train someone without experience to traverse skill sets and fulfil new roles when inbuilt efficiency discounts mean they are increasingly poorly remunerated per word or line. Such a downward spiral could also lead to the degradation of corpora and the reliability of terminology databases (see Lynne Bowker’s chapter), which are the pillars of automation. To paraphrase the Second Law of Thermodynamics, it is always much harder and more expensive to clean up than to stop the degradation at the source.

The T&I ecosystem

Like any ecosystem, our profession and industry are subject to various, seemingly independent and often competing forces in a highly complex system. However, what is clear is that a sustainable and even prosperous language industry translates into vibrant academic and research environments. So, how can we achieve a sustainable language industry? We need three ingredients: (1) an engaged professional workforce, (2) a market that demands and pays for professional services and (3) an accountable, balanced, enforceable and fair regulatory environment. The chapter penned by Hanna Risku, Regina Rogl and Jelena Milošević addresses the work environment and that by Michael Carl and Emmanuel Planas focuses on the interface environment. Both are centred on the practitioners – the first ingredient. Kaisa Koskinen addresses the second ingredient by asking whom we are translating for. Here, we recognize that the protagonist of this volume, the language industry, remains predominantly headquartered in the West and is focused on a market constituted by a very small number of commonly used languages – which I personally refer to as ‘pseudo-multilingualism’. Projects like the Listening Zones of NGOs2 led by Hilary Footitt highlight the wide gap between clients’ needs and what is available. The third ingredient lies beyond the scope of the language service industry itself and therefore of this volume but deserves to be considered by institutions, professional organizations and industry stakeholders.

I have been privileged to be involved in many world firsts in my career thus far. Yet few have given me as much pleasure as being asked to pen this short note to this world’s first, and first-class, companion of its sort. I would therefore like to conclude by congratulating the editorial team, all of whom have taught at one of the leaders in innovative research and training – the ZHAW’s Institute of Translation and Interpreting. Like FIT, they have brought together a who’s who in the research domain of spoken-language translation, interpreting, terminology and technology, transcending boundaries in what is a truly multidisciplinary volume. This companion not only lays a most solid foundation for this new and long overdue, reality-focused discipline; it also paints a vast canvas, giving practitioners, researchers, policymakers and advocates like myself a roadmap for the future. This trusted companion will accompany me wherever I am invited to speak.

Henry Liu
Lifetime Honorary Advisor to FIT
13th FIT President (2014-17)
Fellow of NZSTI
Paris and Auckland, 2019

Notes 1 http://www.cultusjournal.com/index.php/archive/22-issue-2017-volume-10 2 http://www.reading.ac.uk/modern-languages-and-european-studies/Research/mleslistening-zones-of-ngos.aspx

Reference Napier, J. and L. Leeson (2016), Sign Language in Action, Houndmills/Basingstoke: Palgrave MacMillan.


Introduction

Erik Angelone, Maureen Ehrensberger-Dow and Gary Massey

1.  Defining and describing the language industry

Today’s language industry is driven by an expanding range of branches that all share some facet of multilingual communication as a common thread. GALA, the Globalization and Localization Association, makes use of the GILT acronym in defining the language industry as one that encompasses the following key services: globalization (G11n), internationalization (I18n), localization (L10n) and translation (T9n).1 In this volume, we embrace this definition, while also regarding interpreting, consulting, project management and tool design as fundamental pillars. In addition to these services, the language industry can also be defined and described from the perspective of the various stakeholders involved in the day-to-day operations of project lifecycles. These include language service providers (LSPs), vendors, project managers, terminologists, translators, interpreters, revisers, quality assurance specialists and consultants. This is by no means an exhaustive list. The language industry consists of many moving parts. The constellation of these parts tends to vary from one task to the next, which makes sound project management all the more important. Indeed, the type and scope of projects we see in the language industry on a daily basis are manifold, with varying degrees of logistical complexity. Perhaps a given company plans on launching its operations internationally, in which case large-scale translation and localization services would be at the fore, as would the work of terminologists to ensure consistency. A second type of language industry project might take place in the context of some sort of emergency or crisis situation involving low-resource language pairs. In such an event, real-time cross-language and cross-cultural communication, be it in the form of interpreting services or machine-translation output, is of utmost importance. Yet other projects might call for multilanguage text production from the start, in which case translation and interpreting services would not be implemented as an add-on. Instead, language industry professionals would serve in a capacity of content planning and co-creation. This is where language and cross-cultural consulting plays a pivotal role. Needless to say, the working conditions and environments of language industry professionals (and non-professionals) are far from uniform. In addition to a continued need for routinized experts, who work consistently within a given language industry niche, there is a growing need for adaptive experts, who wear multiple hats by taking on varying roles across niches (see Chapter 8, this volume).
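For readers unfamiliar with the convention, abbreviations such as G11n and L10n are numeronyms: the number counts the letters between the first and last characters of the word. A minimal sketch in Python illustrates the pattern (the helper name is ours, purely illustrative, not something defined in the volume):

```python
def numeronym(word: str) -> str:
    """Abbreviate a word as first letter + count of middle letters + last letter."""
    if len(word) <= 3:
        return word  # too short to abbreviate
    return f"{word[0]}{len(word) - 2}{word[-1]}"

print(numeronym("globalization"))         # g11n
print(numeronym("internationalization"))  # i18n
print(numeronym("localization"))          # l10n
print(numeronym("translation"))           # t9n
```

So ‘localization’ keeps its l and n and compresses the ten letters in between, which is how the industry arrived at L10n.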

2.  Proliferation of the language industry

Language industry activity, as explored in this volume, has a direct or indirect tie-in with translation and/or interpreting as foundational pillars. The manner in which both translation and interpreting are conceptualized and practised highlights an ongoing sea change. Globalization, technological advancement and big data have radically altered the language industry landscape, resulting in a seemingly perpetual state of both flux and growth. For example, in 2017 alone, 265 new LSPs launched operations in the United Kingdom, yielding a total of close to 2,000 at the time of writing (Bond 2018a). According to the U.S. Department of Labor Occupational Outlook Handbook, the job outlook for interpreters and translators is expected to grow by 18 per cent from 2016 through 2026, which is significantly higher than the average of 7 per cent for all occupations in the aggregate.2 We are also witnessing a tremendous proliferation of job titles in the language industry, and in the corresponding roles and responsibilities of language industry professionals (see Chapters 4, 5 and 6, this volume). A recent Slator study on language industry job titles appearing on the LinkedIn networking platform revealed upwards of 600 unique titles (Bond 2018b). Many of these titles, such as specialized translator, conference interpreter, localizer, post-editor and reviser, have been firmly in place for quite some time as established career paths in the language industry. Others, such as transcreator, quality assurance engineer, multilanguage UX designer and strategy consultant, are relatively new on the scene. This plethora of career fields has a direct impact on how we can best go about preparing students for future careers as successful language industry professionals (see Chapter 9, this volume), and such career profiles, though at times seemingly vague and redundant, should serve as a litmus test for evaluating curricula. Interestingly, situations involving crisis translation and interpreting services (Federici 2016; O’Brien et al. 2018), a growing need for mediation involving low-resource languages and a shortage of professionally trained specialists have given rise to the somewhat perplexing job profile of non-professional interpreter and translator (see Chapter 6, this volume). This proliferation is in direct correlation with the now more widespread incorporation of machine translation in project workflows (see Chapter 14, this volume), and is in response to both its success and its shortcomings. Indeed, advances in the efficacy of machine translation and artificial intelligence are shaping the language industry at large to an unprecedented extent that extends well beyond the boundaries of written translation. Neural machine translation, in particular, is now widely regarded as game-changing, and artificial intelligence, according to the now seminal New York Times article, as a ‘great awakening’ (Lewis-Kraus 2016). Technological advancement (see Chapter 13, this volume) has changed the landscape in terms of workflow processes and settings (see Chapter 16, this volume), quality and productivity as performance metrics, and the very agents involved in content creation (see Chapter 3, this volume).

3.  The ripple effect of proliferation

The increasing integration of machine translation into the workflow of translation projects has resulted in an era of augmented translation, which taps into, and ideally optimizes, the complementary skill sets of human translators and MT engines.3 Adaptive machine translation is now a common feature in many CAT tool interfaces, to the extent that post-editing of content already in the target language has made ‘from-scratch’ translation increasingly obsolete. Given this paradigm shift in the direction of post-editing of content generated from multiple sources, Pym (2013) advocates for the use of the term ‘start text’ in place of the now more dated term ‘source text’. In recent years, computer-aided interpreting (CAI) (Fantinuoli 2017) has garnered attention as augmentation for the human interpreter. Not unlike interactive translation platforms based on an array of cognitive process indicators (see Chapter 11, this volume), CAI, ideally, only activates in instances where the interpreter seems to be struggling, as rendered, for example, through extensive pauses and a variety of speech disfluencies. The jury is still out on the extent to which such applications enhance or disrupt the work of interpreters and translators, and more empirical research along these lines is greatly needed. As far as translators are concerned, Common Sense Advisory predicts a trend in which translation memory systems will become more streamlined in line with actual usage patterns (Sargent 2018). Usability studies would be conducive to empirically documenting an optimal interface and constellation of features, as opposed to one that is full of unused or misplaced features that run the risk of triggering cognitive friction (Ehrensberger-Dow and O’Brien 2015). More robust artificial intelligence applications are also predicted to make project workflows more agile from a project management perspective (Pielmeier 2017), where some level of automation can optimize the components of project intake, vendor management, production and post-processing. Empirical research on the impact of machine translation and artificial intelligence on translation and interpreting is still in its infancy. This is even more the case when it comes to research on its impact in domains such as terminology management (see Chapter 12, this volume), pre- and post-editing (see Chapter 15, this volume) and project management. The integration of machine translation and artificial intelligence has also ushered in change as far as metrics, assessment and expectations of quality are concerned. It goes without saying that quality is not, and has never been, one-dimensional in a one-size-fits-all fashion. Machine translation has made this all the more evident. The automation it enables (with greater or lesser success) has resulted in productivity emerging as a core metric of quality, and the notion of a ‘good enough’ translation has gained visibility. In definitions of fit-for-purpose translation, perishability of content has become an important parameter, alongside parameters such as audience and use. The 2015 implementation of ISO 17100,4 with its emphasis on a mandatory stage of revision, can be regarded as a further attempt to ensure quality of MT (and HT) output.
The onus shifts from the translator to the reviser(s) (see Chapter 15, this volume), giving way to different constellations of quality assurance within the lifecycle of the translation project, potentially including iterative involvement of end users and clients (see Chapter 7, this volume). In an attempt to assess the quality of both MT and human translation from both process and product perspectives, based on both internal and external metrics, the Translation Automation User Society (TAUS) has introduced an automated means by which to gauge performance with its Quality Dashboard.5 This, too, introduces new avenues of language industry research not only on how quality can be assessed from a benchmark standpoint, but also on how various industry stakeholders make use of such resources. Finally, proliferation within the language industry has brought about a shift of focus in the direction of audiences that have often been largely overlooked in the past. For example, media accessibility (see Chapter 11, this volume) has generated significant interest as of late, as both an industry practice and research focal point. Emerging fields, such as audio description, respeaking and speech-to-text reporting,6 illustrate the momentum of barrier-free communication in today’s language industry. Web Content Accessibility Guidelines (WCAG) 2.0,7 along with the implementation of various national accessibility standards, such as the recently updated Electronic and Information Accessibility Standards8 in the United States, have catalysed work in this area. The efforts of non-profit organizations, such as Translators without Borders (TWB),9 have brought about positive change for audiences in low-resource languages. For example, in its 2017 annual report, TWB provides a snapshot of its efforts in response to the European refugee crisis. Thanks to the work of its largely volunteer team, TWB generated more than 800,000 words of content in Arabic, Kurdish, Urdu, Pashto and Greek, which was disseminated through mobile apps, interpreters and paper signs.10 These efforts, along with those of others on a daily basis in situations involving crises of various kinds, highlight the important role of so-called non-professional interpreters and translators (see Chapter 6, this volume) as change agents in efforts to ensure societal well-being.

4.  Researching the language industry

As suggested by this volume’s title, The Bloomsbury Companion to Language Industry Studies, we are advocating for a deliberate broadening of scope beyond translation and interpreting studies in research endeavours pertaining to today’s multifaceted language industry. We regard language industry studies as a more appropriate term for describing activity in some of the sub-fields mentioned in this introductory chapter, which tend to be regarded as peripheral through a strictly translation studies lens. This volume takes a decidedly broad-brush approach to defining and describing the language industry. It represents an initial attempt to bring together strands of research (see Chapter 2, this volume) on a variety of fields under one roof, which we are calling language industry studies. Each chapter establishes mutually beneficial synergies between academic research, both present and future, and industry realities.

Core research questions and methods appropriate for language industry studies are presented and explained by Chris Mellinger in Chapter 2. He discusses the importance of understanding the role of stakeholders, transdisciplinarity and the fragmented nature of the industry before embarking on research in
this area. The advantages and limitations of experimental, quasi-experimental and observational methods are outlined, with examples from industry research provided as illustration. Since truly controlled experimental research is relatively rare in the field for logistical reasons, Mellinger cautions against making overly strong claims on the basis of other types of studies. The sections on data types, analysis and interpretation, and especially on ethical considerations, are useful overviews both for newcomers to the area of transdisciplinary and field research and for more experienced researchers. Mellinger is realistic about the challenges of doing language industry research, but optimistic that involvement with industry stakeholders is worth the effort. The three main challenges, in his view, are defining the scope of the industry, gaining access to data and generalizing from results. Critical reflection on methodology and careful choices in the specific areas of research in the language industry, as described in the following chapters, help address these and other challenges.

In Chapter 3, Hanna Risku, Regina Rogl and Jelena Milošević explore research methods, tools and approaches as they apply to language industry workplace studies, a focal point that is still in its infancy. In broad terms, workplace research involves documentation of language industry professionals in their natural environments while engaged in authentic tasks. Many of the fundamental directions in which workplace research can go, as Risku and her colleagues delineate, are theoretically informed by cognitive, physical and organizational ergonomics, on the one hand, and cultural anthropology, on the other. Extending beyond the ‘black box’ metaphor at the heart of cognitive process research, workplace studies shed valuable light on the performance of multiple language industry stakeholders in terms of the artefacts, resources and people with whom they interact.
The authors argue for more prolific workplace research on language industry agents and actants as a cognitive ecosystem, using participant observation, interviews and surveys as research instruments of choice.

Zeroing in on the actors at the centre of the translation process, Christina Schäffner discusses translators’ roles and responsibilities in Chapter 4. She draws a contrast between how this topic has been addressed in academic publications and in professional codes of conduct. The differences between the workflow processes for staff translators and freelancers are outlined with respect to their respective roles and responsibilities. Schäffner reflects on how emerging practices and new labels for language-related work impact on roles, responsibilities and status, with a detailed consideration of current competence models. She provides concrete examples of how academic research has been informed by industry and vice versa. She concludes that findings about translator agency in various settings
can contribute to insights into the role of translators in addressing problems of societal relevance. According to Schäffner, this type of research is best done in transdisciplinary projects that involve academics and representatives from various sectors in addition to the translation industry.

In Chapter 5, Michaela Albl-Mikasa addresses a similar topic from another perspective. The understanding of interpreters’ roles has typically been very different, depending on whether conference or community interpreting is being discussed. She explains how the conduit model has been the focus for the former and agency for the latter, arguing that a cognitive–constructivist framework makes it possible to consider the commonalities of the two types of situated cognitive activity instead of focusing on their superficial differences. Albl-Mikasa presents her own model of dialogue interpreting, highlighting the central position of the interpreter as well as the various interdependencies that contribute to interpreting competence and performance. She reflects on research related to interpreters’ roles and responsibilities, focusing on the shifts in traditional notions towards an emphasis on the dynamic and collaborative nature of discourse. She suggests that academic researchers would do well to be informed by professional codes of ethics and standards of good practice. With respect to industry being informed by research, Albl-Mikasa identifies new technologies and the increasing use of English as a lingua franca as key areas in which insights could help anticipate interpreting needs and solutions.

In Chapter 6, Claudia Angelelli expands on the roles of translators and interpreters from the perspective of the growing need for language service provision involving non-professional interpreting and translation (NPIT). She advocates for more extensive empirical research to document the workplaces, competences and training needs of NPITs vis-à-vis those of trained professionals.
Based on the growing need for NPITs, particularly in crises and emergency situations, she calls for the language industry and academia to collaborate so that NPIT training can be informed by working realities. Angelelli highlights the empirical documentation of NPIT workflows, tools and performance assessment as paramount focal points for such collaboration. In any event, given the proliferation of NPITs in the language industry, as also seen in contexts involving crowdsourcing and volunteer translation, she calls for the abandonment of conceptualizations that frame NPIT as second-tier.

In anticipation of a growing diversification of service provision and client needs, Kaisa Koskinen outlines client-oriented tailorization in Chapter 7. She envisions technological advancement as catalysing diversification, with big-data-driven, heavily automated provision at one end and little-data-driven, highly
client-customized provision at the other end of a service spectrum. Not unlike user-centred translation, Koskinen sees client-oriented tailorization and iterative content development as a value-proposition response to fully automated 24/7 translation services which promise a fast turnaround. She does not discount such automation, but rather sees it as one of several options to be presented by LSPs to clients as part of tailorized service management. New forms of provision, such as transcreation, transediting and journalation, attest to a proliferation of the spectrum of services provided. Koskinen raises the thought-provoking question of whether translation will ultimately denote only machine translation as some of the labels used for these new forms of provision gain firmer footing in both the language industry and academia.

In Chapter 8, Gregory Shreve takes up the still largely elusive notion of expertise in the language industry, with a focus on how industry stakeholders might go about documenting, assessing and developing it. Building on the central tenets of deliberate practice as a requisite for expertise acquisition, he emphasizes the importance of establishing performance assessment metrics that examine not just products, as is so often the case, but also processes and other underexplored attributes such as efficacy of social interaction. Shreve points out that much of the research on expertise in the language industry to date focuses on manifestations of the cognitive effort exhibited by translators and interpreters when involved in routinized tasks. Given the dynamic nature of the language industry, where roles have become more fluid, Shreve encourages the industry and academia to collaborate on research endeavours that explore the attributes and dynamics of adaptive and team expertise.
Building competence and expertise is the topic of Chapter 9, in which Catherine Way considers the pedagogical implications of providing trained translators to meet industry requirements. Considering how traditional concepts of translator education and training have evolved into the recent focus on language service provision, she addresses the dilemma facing higher education as it navigates between preparing ‘critical, responsible, creative’ citizens and training students to meet immediate industry demands. Amid the current ‘identity crisis’ of a profession and its educators as they straddle traditional views of translation and current industry demands, she suggests that translator education can bridge the divide by providing translators with the metacognitive critical thinking to continue developing their expertise and recast themselves in the role of ‘intercultural, interlingual information brokers and consultants’. Despite the flourishing body of research on translator training and education that she presents, ranging from social constructivist approaches,
multicomponential competence modelling and process-oriented training to experiential collaborative learning that simulates industry practices, she points out that it has rarely informed, or been informed by, industry. Translator-training institutions should therefore adopt a more structured approach to networking with industry to learn more about its needs and to provide adequate continuing professional development for its trainers.

Training also features in Chapter 10, where Jorge Díaz-Cintas examines the professional activity and academic discipline of audiovisual translation (AVT). AVT occupies an important and highly visible position in our technologized, multimedia society and is a source of strongly growing demand in the language industry, but a shortage of professional translators, exacerbated by a lack of formal AVT training in many countries, is a major challenge faced by the industry. Hand in hand with the increasing market demand for AVT practices and products, including voiceover, dubbing, interpreting and audio description, as well as various forms of subtitling and surtitling, AVT has developed into a ‘dynamic, vibrant and mature field of research’. That research has been characterized by evolving synergies and ‘healthy dynamics’ between academia and the industry as it has moved from descriptive studies of the translated product to exploring professional environments, investigating labour dynamics, applying automated solutions and CAT tools to AVT practices and, most recently, studying audience reception. There has also been an increasing amount of work done on non-professional AVT in the form of innovative fansubbing, crowd subtitling and fandubbing.
The actual and potential benefits of AVT research for academia, practitioners, the industry at large and education are legion, and Díaz-Cintas concludes that the recent stress on understanding how AVT impacts on its audience is set to create even closer links and common interests between academia and the language industry.

In Chapter 11, Anna Jankowska focuses on audiovisual media accessibility, which, from the perspective of translation studies (TS), can be regarded as a subcategory or an aspect of AVT. Proceeding from medical and social disability models, she presents an overview of content-based and technology-based accessibility provision services as well as those providing access to specific environments and media, considering both the primary and the secondary audiences for which they cater. In this chapter, she pays particular attention to the mutually beneficial exchange between research and industry, demonstrating how the industry has created impetus for research trends and for the methods that researchers have been deploying as their work has evolved from early descriptive studies into experimental approaches. At the same time, she shows how what
she terms the ‘pragmatic humanities’ has developed the potential to supply industry with practical, implementable solutions. She joins other researchers (Romero-Fresco 2018; Greco 2018) in calling for a new interdisciplinary field of ‘accessibility studies’, which would overcome the limitations of approaching audiovisual media accessibility from just one expert perspective (AVT, engineering, etc.) by integrating the numerous interlacing aspects of media accessibility in order to develop and provide quality access services.

There has always been both overt and tacit acknowledgement of the close ties that exist between terminology and translation, though terminology work and management frequently lack visibility and are difficult to disentangle from translation work as a whole. In Chapter 12, Lynne Bowker contends that the value of terminology management can be rendered more transparent by establishing a stronger dialogue between academic research and the language industry. Of particular interest in this regard are corpus-based approaches, automatic term extraction, terminological variation, knowledge patterns and knowledge-rich contexts. In turn, research can draw fruitfully on industry practices in exploring areas such as the expanded communities of practice that crowdsourcing represents, a broadened notion of ‘useful’ content in term records and the spreading concept of ‘User eXperience’. She hopes that her chapter, containing as it does some key examples of how academic research and industrial practice can mutually feed off and into each other, will help encourage researchers and practitioners to improve mutual understanding and to move closer to developing a virtuous cycle that benefits both communities.

Speaking very much from the perspective of someone who has been active in the language industry for decades, Jaap van der Meer reviews developments in translation technology in Chapter 13 and makes predictions about what we can expect down the line.
He starts by reminding us of the beginnings of the industry as we know it, including the impact of personal computing not only on translators’ workplaces, but also on the demand for language services such as translation and localization. He traces advances in technology through to the current stage of convergence in which translation is predicted to soon be available everywhere at any time. He explains the major categories of translation technology in accessible detail before outlining how the impact of current developments in the industry deserves to be the subject of research and reflection. He closes the chapter by considering some trends in academic research that are just beginning to be taken up in the language industry as we head towards a future vision of instantaneous multilingual communication.

In Chapter 14, Andy Way focuses on the seemingly non-stop developments in the area of machine translation that some people claim will lead to bridging the gap between languages. Although the subtitle of the chapter is ‘where are we at today?’, Way also looks backwards and provides the reader with an overview of the progress that has brought us from rule-based MT to the current state of play with neural machine translation (NMT). He raises the question of whether NMT is the new state of the art and explains why evaluation metrics need to change. The pressure from industry for research and development in the area of MT has resulted in the paradoxical situation of difficulties in finding enough academic staff to train the developers needed. Way argues that more intense cooperation between industry and academe is both desirable and foreseeable, especially to deal with real-world problems and real users’ needs. He makes it clear that improved MT output will simply make translators more productive, not remove them from the loop.

Chapter 15 sets out to provide an accessible introduction to two important parts of the translation loop that involves MT, namely pre-editing and post-editing (PE), for those wishing to pursue research or to work in the language industry. Given the close collaboration of industry and academia in gathering data in this new field of research, Ana Guerberof Arenas considers a strict separation of the two perspectives to be a little artificial. A large and growing body of research on MT PE in particular has helped researchers and the industry alike gain important insights into how MT technology has been impacting on translators’ work and workplaces. It has above all demonstrated that data should be analysed and interpreted in their appropriate context in order to avoid unwarranted generalizations.
Building on the clear lines and rigorous methods that have been established in the last ten to fifteen years, research interest is likely to grow with the rapid advances being made in artificial intelligence. Guerberof Arenas stresses that, as MT technology changes, further work will be needed to investigate its effects on PE effort and to assess whether PE will be necessary at all for certain products or purposes, although current evidence suggests that achieving human-quality translations continues to require human intervention.

Michael Carl and Emmanuel Planas close out the contributions to this volume with a chapter that traces the development of interactive machine translation (IMT) technology, outlines its current utilization and suggests how it might be leveraged in the future. Through optimized machine learning, as shaped by feedback obtained from real-time human translator input, they see
IMT as potentially bringing about a radical change to translation workflows. Carl and Planas envisage interactive non-coherent cloud-based translation as a new workflow in which content is deliberately fragmented, non-linear and decontextualized, and made accessible to large teams of translators for collaboration in the cloud. The authors stress the importance of empirical research on both quality and cognitive effort as metrics for gauging the efficacy of interactive machine-translation technology vis-à-vis other approaches.

The final section of the volume is a glossary of key concepts and terms that might not all be familiar to newcomers to language industry studies. Among practitioners, researchers and practitioner-researchers, a common understanding of this new field is still emerging or in the process of consolidating. Although we have called this glossary an A–Z, not all of the letters of the alphabet have entries (yet). We anticipate that the list of key concepts will soon grow to accommodate new knowledge as more industry stakeholders, practitioners and academics work together in transdisciplinary projects in the exciting and rapidly expanding language industry. The contributors to this volume have provided a great deal of insight into how this might happen.

Notes

1 An overview of GILT: https://www.gala-global.org/industry/introduction-languageservices
2 U.S. Department of Labor Occupational Outlook Handbook: https://www.bls.gov/ooh/media-and-communication/interpreters-and-translators.htm#tab-1
3 A model of augmented translation: http://www.commonsenseadvisory.com/machine_translation.aspx
4 ISO 17100: https://www.iso.org/standard/59149.html
5 TAUS Quality Dashboard: https://www.taus.net/quality-dashboard-lp
6 Overview of barrier-free communication: https://www.zhaw.ch/en/linguistics/research/barrier-free-communication/#c42710
7 Web Content Accessibility Guidelines 2.0: https://www.w3.org/TR/WCAG20/
8 US Electronic and Information Accessibility Standards: https://www.access-board.gov/guidelines-and-standards/communications-and-it/about-the-ict-refresh/finalrule/i-executive-summary
9 Translators Without Borders: https://translatorswithoutborders.org/
10 TWB 2017 annual report: https://translatorswithoutborders.org/annualreport/2017/

References

Bond, E. (2018a), ‘150 New UK Language Service Providers So Far in 2018 Take Total to Nearly 2,000’, Slator News. Available online: https://slator.com/features/150-newuk-language-service-providers-so-far-in-2018-take-total-to-nearly-2000/ (accessed 19 July 2018).
Bond, E. (2018b), ‘The Stunning Variety of Job Titles in the Language Industry’, Slator News. Available online: https://slator.com/features/the-stunning-variety-of-jobtitles-in-the-language-industry/ (accessed 23 July 2018).
Ehrensberger-Dow, M. and S. O’Brien (2015), ‘Ergonomics of the Translation Workplace: Potential for Cognitive Friction’, Translation Spaces, 4 (1): 98–118.
Fantinuoli, C. (2017), ‘Computer-assisted Interpreting: Challenges and Future Perspectives’, in I. Duran Muñoz and G. Corpas Pastor (eds), Trends in E-Tools and Resources for Translators and Interpreters, 153–74, Leiden: Brill.
Federici, F., ed. (2016), Mediating Emergencies and Conflicts: Frontline Translating and Interpreting, Houndmills, Basingstoke: Palgrave Macmillan.
Greco, G. M. (2018), ‘The Nature of Accessibility Studies’, Journal of Audiovisual Translation, 1 (1): 205–32.
Lewis-Kraus, G. (2016), ‘The Great A.I. Awakening’, New York Times, 14 December. Available online: https://www.nytimes.com/2016/12/14/magazine/the-great-aiawakening.html
O’Brien, S., F. Federici, P. Cadwell, J. Marlowe and B. Gerber (2018), ‘Language Translation during Disaster: A Comparative Analysis of Five National Approaches’, International Journal of Disaster Risk Reduction, 31: 627–36.
Pielmeier, H. (2017), ‘The Journey to Project Management Automation’, Common Sense Advisory Blog. Available online: http://www.commonsenseadvisory.com/Default.aspx?Contenttype=ArticleDetAD&tabID=63&Aid=48489&moduleId=390 (accessed 11 July 2018).
Pym, A. (2013), ‘Translation Skill-Sets in a Machine Translation Age’, Meta, 58 (3): 487–503.
Romero-Fresco, P. (2018), ‘In Support of a Wide Notion of Media Accessibility: Access to Content and Access to Creation’, Journal of Audiovisual Translation, 1 (1): 187–204.
Sargent, B. (2018), ‘Whither Now, TMS?’, Common Sense Advisory Blog. Available online: http://www.commonsenseadvisory.com/Default.aspx?Contenttype=ArticleDetAD&tabID=63&Aid=48574&moduleId=390 (accessed 29 July 2018).

2

Core research questions and methods
Christopher D. Mellinger

1.  Planning research and generating research questions

Recent scholarship related to the language industry demonstrates a considerable range of research questions, methodologies and lines of investigation. One need look no further than the contributions to the present volume to understand the industry’s expanding scope of activities, processes and stakeholders. Previous descriptions of the language industry, such as Sager’s (1994) mapping, which focuses largely on machine translation and natural language processing, only partially describe our current understanding of the industry’s composition. Dunne and Dunne (2011), for instance, outline the various translation-related activities that comprise the field, adding project management to the more typically cited tasks of translation, localization, revision and terminology management. Additionally, the industry encompasses both signed and spoken-language interpreting and constitutes a diverse business landscape that extends beyond multilingual document and content lifecycles. Many of these topics are researched by a range of industry stakeholders and, consequently, can be found in industry-generated reports, white papers and online publications as well as academic journals and volumes.

While the breadth of inquiry is considerable, research related to the various aspects of the language industry is undertaken in much the same manner as in any other social scientific discipline. Scholarly investigation requires thoughtful consideration of the object of study and careful planning to allow for appropriate data analysis and interpretation. Several recent volumes in translation and interpreting studies have described how research questions and hypotheses are developed as well as how independent and dependent variables are operationalized in research studies (e.g. Saldanha and O’Brien 2013; Angelelli and Baer 2016; Mellinger and Hanson 2017). Moreover, these volumes advocate
for research questions driving the selection of appropriate methodology to collect the necessary data for subsequent analysis. Sound research design and analysis is critical when investigating any aspect of the language industry. However, researchers cannot approach inquiry into the language industry without taking into account several unique aspects of this type of investigation, including (1) the potential need to engage industry stakeholders; (2) the transdisciplinary nature of language industry research; and (3) the fragmented and ill-defined nature of the industry itself. While this list is not exhaustive, these three aspects ought to be considered during the initial phases of a project, particularly since they may constrain and influence the implementation of the chosen research methodology.

Potential engagement with industry stakeholders is an important consideration when planning research and generating research questions. Research on language service providers as well as national, supranational and professional organizations is often impossible or limited in scope without researchers establishing partnerships with these entities. When initially developing research questions and hypotheses, investigators should determine the extent to which access to the necessary stakeholders is possible, what relationships need to be cultivated and maintained with gatekeepers and how this may affect progress towards the project’s goals. These collaborations can be fruitful, as evidenced by a number of studies involving EU institutions (e.g. Koskinen 2008; Pym et al. 2011), translation agencies (e.g. Ehrensberger-Dow and Hunziker-Heeb 2016; Ozolins 2007; Risku, Pichler and Wieser 2017), and hospitals and courts (e.g. Angelelli 2004a, 2004b; Berk-Seligson 1990/2002; Mason 2008).

Language industry research can also benefit from transdisciplinarity and research collaboration.
Scholars have reflected on intersections with disciplines adjacent to translation and interpreting studies, highlighting the multidisciplinary nature of the fields at both the conceptual and methodological levels (Steiner 2012; Gambier and van Doorslaer 2016). For instance, audiovisual translation scholars have drawn on film and cultural studies (e.g. Di Giovanni and Gambier 2018), localization researchers have drawn on scholarship from international business and marketing (Jiménez-Crespo and Singh 2016), and scholars of machine translation and translation informatics often integrate findings from computer science, information science and natural language processing (Bowker and Delsey 2016; Carl, Bangalore and Schaeffer 2016). This cross-fertilization can prove useful in language industry settings and provide multiple perspectives on the objects of analysis. Both product- and process-oriented studies in translation and interpreting have already demonstrated that multiple approaches to analysis
can deepen our understanding of these tasks. Rodríguez Vázquez and O’Brien (2017), for instance, examine the intersection of web accessibility and localization, two areas that are often researched in isolation. Finally, research on the industry at the macro level can require different methodological and conceptual frameworks that can benefit from business, economic or financial perspectives.

Inter- and transdisciplinary work is not without its challenges, particularly when generating research questions. While researchers can draw on different concepts and methodologies from a variety of disciplines, they must be careful to operationalize the variables in line with the adopted theoretical framework. Muñoz (2016) stresses the importance of grounding research within a theoretical framework to make it possible to ascribe meaning to data and study results. In a similar vein, Marín García (2017) argues that constructs adopted to conduct translation process research ought to be examined for hidden assumptions and potential inconsistencies. Judicious review of concepts drawn from neighbouring disciplines helps improve the interpretation of results as well as their application. This approach could be extended to any aspect of research involving the language industry, insofar as rigorous scrutiny of each of the variables in a study will help focus research questions and hypotheses.

The fragmented and ill-defined nature of the language industry must also be considered when developing research projects. The composition of the language industry often varies in reported studies, particularly with respect to the language products and services included in any reported measurement of the field. Researchers ought to delineate what translation-related activities comprise the industry in the context of their studies in order to help improve comparability of results.
Failure to do so may produce a skewed perspective on a variety of industry descriptors, such as size, activity, scope and value. For instance, studies of the industry that rely on self-reported data, such as those conducted by the research firm Common Sense Advisory, need to allow for differentiation between companies that have T&I services as a core business focus and those that provide such services among a range of other activities. Likewise, industry-level measures might adopt Dunne and Dunne’s (2011) approach of utilizing standard industrial classification codes and their descriptions. In the United States, NAICS Code 541930 covers translation and interpretation services, defined as companies primarily engaged in translation and interpreting activities, including translation, editing, localization and project management.1 However, these classifications still do not account for T&I activity in non-governmental organizations (NGOs) or as part of governmental and supranational activities. Moreover, activities specific to audiovisual
translation (i.e. dubbing and subtitling) are classified separately from T&I, as are software development and training activities, which ostensibly also form part of the language industry. Similar issues exist in international standards and classification schemes, such as ISO 17100 (2015) or DIN 2347 (2017), that aim to define the scope of work for translation and interpreting services.

Researchers must be similarly cognizant of the ill-defined nature of the language industry when conducting research on specific aspects or areas within it. As an example, researchers interested in the translation process at an organizational level may raise questions related to content transformation, the multilingual document lifecycle or the implementation and use of translation and authoring software at an institutional level. This scope differs considerably from more participant-based research that examines the same concepts within the scope of a specific language company, translator or interpreter. Likewise, questions related to translation assessment, interpreter performance or quality assurance and control can occur at both macro and micro levels. For instance, large organizations and certifying bodies might research quality and assessment as they relate to workflows or processes, while micro-level studies may be more interested in specific textual features or extratextual variables that influence the quality or performance of practitioners. While the constructs may be operationalized in a similar manner, the context in which they are explored will require consideration during the development of specific research questions and hypotheses.
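The classification problem described above can be made concrete with a short, purely illustrative Python sketch. Only the NAICS code 541930 is taken from the text; the company records and the other codes shown are invented placeholders, not real firms or verified classifications:

```python
# Hypothetical company records tagged with industry codes.
# 541930 (translation and interpretation services) is cited in the text;
# the other codes and all company names are invented for illustration.
companies = [
    {"name": "LinguaCo", "naics": "541930", "services": ["translation", "localization"]},
    {"name": "MediaDub", "naics": "512000", "services": ["dubbing", "subtitling"]},
    {"name": "GlobalSoft", "naics": "511000", "services": ["software", "localization"]},
]

def in_language_industry(company, codes=("541930",)):
    """Classify a company by industry code alone."""
    return company["naics"] in codes

# Only LinguaCo is captured: the audiovisual and software firms fall
# outside the T&I code even though they provide language services.
core = [c["name"] for c in companies if in_language_industry(c)]
```

The sketch shows why two studies that both claim to measure 'the language industry' can report incompatible figures: a code-based delimitation silently excludes dubbing, subtitling and software localization unless the researcher widens the `codes` parameter.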

2.  Research methods in translation and interpreting

Given the nature of the language industry and the range of objects of investigation, the research methods used in language industry research vary considerably. The methods used, however, can be classified in much the same manner as in any other discipline – as experimental, quasi-experimental and observational. These studies are conducted by a range of industry stakeholders, which can include academic institutions, think tanks, government and professional agencies, and public and private companies. In addition, the speed at which industry-generated research is conducted often requires different dissemination methods than more traditional academic outlets. Online research repositories such as arXiv or PsyArXiv allow investigators to provide preprints of scientific papers for dissemination, with researchers in fields including computer science and psychology already taking advantage of these forums to share results. These


archives are not without shortcomings, particularly with respect to lack of peer review and copyright issues; however, they allow researchers to share their findings much more quickly and widely than other publication avenues. Similarly, presentations at industry events and transdisciplinary conferences also provide timely outlets to share work among researchers, industry professionals and thought leaders (Jemielity 2018).

2.1. Experimental

For research to be classified as experimental, it must exhibit two characteristics: experimental control of the independent variable(s) and random assignment. This type of research allows scholars to build a case for causality between two variables – that is, an independent variable has caused a specific outcome or change in one or more dependent variables. In language industry research, the ability to determine causality can be particularly important when attempting to describe how external influences might impact the task of language professionals or how specific interventions might improve workflows, project management practices or technology use. Industry-level variables are outside the control of researchers, but independent variables related to specific tasks within the field – such as those related to human–computer interaction, ergonomics and translation and interpreting sub-processes – can be manipulated by researchers. For instance, Jensen (1999) imposes specific time limits on an experimental task to determine whether time pressure has an impact on problem-solving activities of translators. Furthermore, experimental research control extends to considerations of a controlled environment to mitigate potential confounds and better isolate the phenomenon under investigation. However, in the context of language industry research, the number of studies that are able to satisfy both of these requirements (i.e. manipulation of the independent variable and randomization) is limited given the challenges inherent in achieving true randomization. Research involving human participants that compares translators or interpreters to a control group, for instance, cannot randomize participants into groups given the participant variable – namely their profession – that constrains their classification to a specific group (Gile 2016).
Scholars interested in how specific variables influence the translation and interpreting task have, however, been able to randomize a group of translators into specific treatment groups, thereby achieving random assignment of participants. The importance of randomization for truly experimental research is described by Cairns (2016).
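The random-assignment step described here can be sketched in a few lines. This is a minimal illustration, not any particular study's procedure; the participant labels and group names are invented:

```python
import random

def randomize_assignment(participants, groups=("control", "treatment"), seed=None):
    """Shuffle a recruited pool, then assign round-robin so that group
    sizes stay balanced while membership is left entirely to chance."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    return {person: groups[i % len(groups)] for i, person in enumerate(pool)}

# Twenty recruited translators split at random into two equal groups.
assignment = randomize_assignment([f"P{i:02d}" for i in range(1, 21)], seed=7)
```

Fixing the seed makes an assignment reproducible for audit purposes; omitting it leaves the assignment fully random.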


Of the research in translation and interpreting studies that meets both requirements to be considered experimental, quite a few studies can be classified under the umbrella term of ‘translation process research’. For instance, Rojo and Ramos (2016) investigated the impact of feedback on translator performance. The groups in the study were not differentiated by a specific participant variable; rather, they were all students who were randomly assigned to one of two treatment groups. The researchers were able to assign the specific tasks and control the stimuli in order to measure the impact that the feedback had on several dependent variables related to translation performance. Other studies that meet the experimental control and randomization requirements include Gerver (1976), Dragsted and Hansen (2008) and Christoffels and de Groot (2004). Industry stakeholders also tend to conduct experimental research when developing new software or algorithms used in translation technologies or machine translation. This type of research is often exploratory, as researchers test how newly developed systems compare with already established metrics and available systems. For instance, Libovický, Brovelli Meyer and Cartoni (2018) adapt existing machine-translation quality metrics to improve evaluation of MT output at the paragraph level. Traditional quality metrics, such as METEOR and BLEU scores, serve as a baseline against which the new metrics can be compared. In a similar vein, Hyman (2014) presents an overview of several research efforts by Google and Microsoft in their work on speech-to-speech translation. The exploratory work conducted by research and development teams at these companies allows new products and services to be developed, which ultimately have a direct impact on language industry stakeholders as these products, and the technologies on which they are built, become available.
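To make the idea of a metric baseline concrete, the sketch below implements a deliberately simplified BLEU-style score (clipped n-gram precisions up to bigrams, combined with a brevity penalty). Real evaluations would use a standard implementation such as sacreBLEU; the example sentences here are invented:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def simple_bleu(candidate, reference, max_n=2):
    """Simplified BLEU: geometric mean of clipped n-gram precisions
    (up to max_n), multiplied by a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        c_counts, r_counts = ngrams(cand, n), ngrams(ref, n)
        clipped = sum(min(count, r_counts[g]) for g, count in c_counts.items())
        precisions.append(clipped / max(sum(c_counts.values()), 1))
    if min(precisions) == 0:
        return 0.0
    # Brevity penalty discourages very short candidates.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

baseline = simple_bleu("the cat sat on the mat", "the cat sat on the mat")
variant = simple_bleu("the cat is on the mat", "the cat sat on the mat")
```

A perfect match scores 1.0; any new metric would then be compared against such baseline scores over a test set rather than single sentences.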

2.2. Quasi-experimental

Truly experimental studies have been rather rare in the field to date; however, scholars have used quasi-experimental research to considerable effect in language industry contexts to examine the influence of specific independent variables on different participant groups. For instance, studies in ergonomics have examined tasks embedded in specific environments as well as the usability of specific tools in the service of translation and interpreting professionals (Ehrensberger-Dow 2017; see also Harvey et al. 2011 for an overview of cognitive ergonomics). In interpreting studies, participant variables related to professional experience and expertise have been explored in the context of specific cognitive processes, such


as working memory and articulatory suppression (e.g. Liu, Schallert and Carroll 2004). Findings on these variables may help inform the daily work of professional interpreters as well as broader industry practices. Taken in isolation, quasi-experimental studies are only able to draw tentative conclusions about causality between the independent and dependent variables. Scholars are somewhat limited in their ability to generalize to the larger population since the observed changes might be specific to the research context. In addition, several issues arise that may make it difficult to determine the relationship between variables. For example, self-selection to participate in a study introduces a potential bias on the part of participants who, for reasons unknown to the researcher, want to participate in the study. Likewise, omitted or unknown variables that fall outside of the control of the researcher can potentially alter results. A third potential confound is simultaneity bias, in which the dependent and independent variables are interconnected to such an extent that they cannot be separated. These three issues (i.e. self-selection, omitted variables and simultaneity) comprise the bulk of endogeneity bias, a term of art among statisticians and econometricians (see Wooldridge 2015 for more on this topic). When investigating the language industry, researchers must be aware of these challenges in order to draw appropriate conclusions with respect to causality and not overstate claims that could potentially be the result of one of these issues. In an effort to address these concerns, replication of research designs and methodologies that examine different populations may help establish greater confidence in the findings. For instance, many of the studies related to interpreting and working memory have been qualitatively synthesized by Dong and Cai (2015) and Köpke and Signorelli (2012).
In studies that employ null hypothesis testing, in which statistical measures are used to determine whether a statistically significant difference can be observed between two or more groups, full reporting of all results (significant or otherwise) and effect sizes is paramount to ensure transparency and a complete view of the study (see also Mellinger and Hanson 2017). Moreover, there is an ethical imperative to provide a complete methodological description and full statistical results, because such reporting allows future work to synthesize the literature, both qualitatively and quantitatively through systematic reviews and meta-analyses.
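The reporting practice recommended here includes effect sizes alongside p-values. A minimal sketch of one common effect size, Cohen's d with a pooled standard deviation, using only the standard library; the per-group scores are invented placeholders for, say, translation quality ratings:

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: the standardized mean difference between two groups,
    using the pooled (n-1 weighted) sample standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled = (((na - 1) * stdev(group_a) ** 2 + (nb - 1) * stdev(group_b) ** 2)
              / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled

# Hypothetical quality ratings for two experimental groups.
control = [72, 75, 70, 68, 74, 71, 73, 69]
treatment = [78, 80, 76, 74, 79, 77, 81, 75]
d = cohens_d(treatment, control)
```

Reported together with the test statistic, exact p-value and confidence interval, such a value lets later meta-analyses weigh the finding properly.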

2.3. Observational

Studies that do not directly manipulate the independent variable or randomize participants into specific groups are considered observational research, which


includes both qualitative and quantitative research methods. Various approaches that are observational in nature have been employed to investigate the language industry, such as ethnographic research (e.g. Asare 2011; Koskinen 2008), survey-based research (Katan 2009; Zwischenberger 2017) and field observations (e.g. Wadensjö 1998; Angelelli 2004a, 2004b). These studies regularly figure in the landscape of language industry research and provide both quantitative and qualitative data on practitioners in the field, as well as on their behaviours, processes and beliefs. Baraldi and Mellinger (2016) present an overview of observational research within translation and interpreting studies, and many of the cited studies (e.g. Angelelli 2004b; Ehrensberger-Dow 2014) are related to aspects of the language industry. Of particular concern when conducting observational research is the position of the researcher relative to that of the object of investigation. This positioning is not necessarily physical location or proximal distance – for instance, Mellinger (2015) reflects on the applicability of internet-mediated research to conduct process-oriented studies – but rather the relationship and influence that the researcher may have on the context in which the study is being conducted. For example, researchers often aim to be external to the study itself in experimental and quasi-experimental settings, whereas observational research may result in participant–researcher interaction. Leblanc (2013) is an example of this type of interaction, in which the researcher conducts ethnographic research on translation services and agencies. Technological advances have allowed researchers to rely on multiple approaches to observational research, triangulating multiple data sources and altering the way in which data may need to be collected (see Alvstad, Hild and Tiselius 2011). However, participatory research in specific contexts may, in fact, be appropriate.
Wurm and Napier (2017), for instance, describe participatory research in the context of interpreting studies as being collaborative and reliant on relationships and trust. This approach, while not suitable in every case, complements current observational methods.

3.  Data types, analysis and interpretation

Data derived from experimental, quasi-experimental and observational research studies can be divided into two overarching categories: qualitative and quantitative data. The first type of data, qualitative data, refers to a range of non-numerical data that can be derived from many sources, including texts, audio, video and observational field notes. The second type of data, quantitative data,


refers to numeric data that is derived from a predetermined unit of analysis. Quantitative data can be further divided into different levels of measurement: nominal or categorical data, ordinal data, interval data and ratio data (see Stevens 1946 for details). For data to be representative of a specific construct and meaningful during analysis, researchers interested in the language industry must carefully operationalize the concepts and constructs under investigation prior to data collection. The alignment of research questions and hypotheses with research methodologies and data collection methods is crucial for study results to be meaningful. Each data type presents advantages, but considerable care should be taken when deciding what type of data should be collected, as well as when analysing data and interpreting results, in order to avoid over-generalization of claims or misrepresentation of what these data can demonstrate.

Research in the language industry benefits from the use of both types of data in mixed-methods approaches to research. In addition, researchers have espoused the benefits of methodological triangulation resulting in multiple data sets to provide a more comprehensive view of objects of study (cf. Shreve and Angelone 2010). As noted at the outset, these two data types have been treated in the translation and interpreting studies literature by several researchers. Saldanha and O’Brien (2013) present an overview of the types of data derived from process-oriented, product-oriented and participant-oriented studies, while Angelelli and Baer (2016) bring together a collection of scholars working with various research methodologies who describe the broad range of data available to translation and interpreting researchers. Additionally, O’Brien (2010) outlines the challenges of working with eye-tracking data collection methods with respect to the proliferation of data available to scholars.
Similar methodological reflection appears in Malamatidou (2018) related to the triangulation of corpus data in order to address challenges in handling translation-specific corpora. One specific data collection method that can elicit both quantitative and qualitative data and that has been employed in a number of language industry contexts is survey-based research. Well-designed survey instruments can investigate underlying constructs, such as interpreter perceptions of visibility (e.g. Angelelli 2004a), job satisfaction (e.g. Lee 2017) or roles (Katan 2009; Zwischenberger 2017). Likewise, researchers can adopt or adapt instruments previously developed in other disciplines to interrogate questions specific to translation and interpreting. Mellinger and Hanson (2018), for instance, employ a scale used to measure communication apprehension (PRCA-24; McCroskey 1982) to investigate how inherent participant characteristics relate to their propensity


to adopt technology and their attitudes towards technology use in specific interpreting contexts. Qualitative survey designs, such as Pokorn (2017) in her study on the spatial positioning of healthcare interpreters, provide researchers with insight into the rationale for specific responses or opinions. Open-ended questions can help corroborate quantitative or descriptive measures and contextualize findings. Moreover, these types of questions allow researchers to generate new questions to explore in later studies.

While data collection is a critical step in conducting research in the language industry, its subsequent analysis is of equal importance. Many researchers working with quantitative data are familiar with null hypothesis testing, which allows statistical analysis to test whether differences or relationships exist between sets of data. Mellinger and Hanson (2017) provide arguments for complete reporting of the results of any statistical test, including exact p-values and test statistics, confidence intervals and effect sizes to aid in the interpretation of the results. In light of the challenges with access to authentic data sources in the language industry, it is essential to provide a complete account of any statistical testing.

Researchers working in the language industry must also consider the manner in which qualitative data will be analysed. In some instances, researchers working with qualitative data – such as interview transcripts, audio and video files and unstructured text and content – may wish to identify themes and patterns that emerge from the collected data set. There are several ways in which researchers typically proceed, such as using grounded theory (Glaser and Strauss 1967) or actor-network theory (Latour 2005). In some cases, scholars may use focus groups or interviews to elicit data from a smaller subset of the population in order to develop more formal research questions that can be tested empirically.
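For survey instruments of the kind discussed above, researchers commonly report internal-consistency reliability. A minimal sketch of Cronbach's alpha using only the standard library; the Likert-scale responses are invented:

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a k-item scale.

    `item_scores` holds one tuple of item responses per respondent.
    Alpha compares the sum of the item variances to the variance of the
    total scale score; values near 1 indicate internally consistent items.
    """
    k = len(item_scores[0])
    items = list(zip(*item_scores))            # one column per item
    totals = [sum(row) for row in item_scores]
    sum_item_var = sum(variance(col) for col in items)
    return k / (k - 1) * (1 - sum_item_var / variance(totals))

# Hypothetical 5-point Likert responses to a four-item scale (six respondents).
responses = [(4, 4, 5, 4), (3, 3, 3, 2), (5, 4, 5, 5),
             (2, 2, 3, 2), (4, 5, 4, 4), (3, 3, 2, 3)]
alpha = cronbach_alpha(responses)
```

Reporting alpha (or a comparable reliability coefficient) alongside survey results makes adapted instruments easier to evaluate and reuse.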
Likewise, ethnographic research allows investigators to observe a particular context or group of participants in specific environments, which in turn can reveal specific patterns of behaviour or allow explicit description of their work (e.g. Leblanc 2013). Autoethnographic research, while not widely used in language industry research to date, provides an avenue by which researchers can reflect on their own practice in the industry. All of these observational techniques require explicit recognition of the position and potential bias of the observer in order to establish credibility and contextualize the results.

Another approach to qualitative data analysis is content analysis. Krippendorff (2012) presents analytic and evaluative techniques for conducting content analysis research and discusses the merits of both inductive and deductive approaches to pattern and theme recognition. Krippendorff (2004, 2012) also


describes how multiple evaluators or raters can code the data as it relates to the specific concepts or constructs under investigation and debunks several misconceptions related to content analysis. This type of categorization allows scholars to make quantitative comparisons of qualitative data (e.g. Oakes and Ji 2012). However, researchers must resist the temptation solely to count and compare the raw number of items found in specific data categories, because such simplistic comparisons lack statistical rigour. In addition, this count data must then be statistically tested to mitigate the possibility of sampling error.
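Intercoder agreement of the sort described for multiple raters can be quantified before any category counts are compared. A minimal sketch of Cohen's kappa for two coders; the category labels and codings are invented:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement between two coders, corrected
    for the agreement expected by chance given each coder's category
    frequencies."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two raters to ten text segments.
rater_1 = ["role", "role", "norm", "norm", "role", "tool", "tool", "norm", "role", "tool"]
rater_2 = ["role", "role", "norm", "role", "role", "tool", "norm", "norm", "role", "tool"]
kappa = cohens_kappa(rater_1, rater_2)
```

Here the raw agreement is 0.8, but kappa is lower because some agreement would occur by chance; reporting kappa (or Krippendorff's own alpha) makes the reliability of the coding scheme explicit.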

4.  Ethical considerations

All researchers should be aware of ethical considerations when conducting a study. In many instances, research involving human participants requires approval by an ethics review committee or institutional review board. Israel (2015) notes that there is considerable variation regarding what constitutes ethical oversight in different countries; therefore, it is important that researchers are familiar with local and regional practices to ensure compliance with these regulations. Common to many contexts is the informed consent of participants involved in the study, as well as equitable distribution of any benefits that may be obtained from participation in the study. In addition, risk should be minimized, which may be possible through careful research design and methodology and considerations of confidentiality and anonymity.

Research regarding the language industry, however, may involve additional considerations beyond those related to human participants. For instance, when working with translation or interpreting agencies or translation technology companies, there may be a need for non-disclosure agreements to protect industry partners and any professional or trade secrets that are under scrutiny during a study. Researchers may need to determine whether confidentiality and/or anonymity have been sufficiently protected in order to avoid damaging the professional reputation of specific translators or interpreters who work in the field. In many cases, researchers ought to use pseudonyms or numeric codes to safeguard participant data throughout the research process, up to and including the dissemination of results.

An emerging area of ethical consideration in translation and interpreting studies, and particularly regarding language industry studies, is data management and sharing. Technological advances have now made it possible to store data indefinitely, both locally and in online data repositories. The benefits


are perhaps obvious, such as greater transparency with respect to data analysis and the potential aggregation of larger data sets. Likewise, publishing venues may allow for supplemental files and data to be presented alongside published studies to allow the results to be scrutinized by the larger research community. In other cases, industry partners may be willing to provide access to hard-to-reach participants, proprietary software or data, or trade secrets in exchange for access to collected data beyond what is reported in academic reports or publications. Yet the informed consent process for participants does not always account for this technologization and shifting industry landscape, and researchers ought to consider from the initial stages of research whether data will be shared in this manner. In doing so, researchers can obtain informed consent from participants with respect to this type of data sharing and allow participants to opt out, if warranted.

Ethical considerations, however, are not limited to the data collection and dissemination phases of research but extend to data analysis. Panter and Sterba’s (2011) edited collection presents a wide range of issues that ought to be considered in the initial design, analysis and interpretation of quantitative data. Researchers must be cognizant of the need to use appropriate analytical methods and may even wish to collaborate with quantitative methodologists to help ensure appropriate data handling. The growing use of big data across a variety of disciplines (Borgman 2016) may also necessitate increased integration of statisticians and methodologists into language industry research projects. Organizations such as the Committee on Publication Ethics (COPE) provide guidance on many of these questions and emphasize the importance of disclosures in research presentations and publications.
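The use of pseudonyms or numeric codes mentioned earlier can be sketched simply. This is a minimal illustration with invented names; in practice the linking key would be stored securely, separate from the research data, or destroyed once no longer needed:

```python
def pseudonymize(names):
    """Map identifying names to sequential numeric codes. The returned
    key is the only link between identities and codes, so it can be kept
    (or destroyed) separately from the coded data set."""
    return {name: f"P{i:03d}" for i, name in enumerate(sorted(set(names)), start=1)}

# Hypothetical participant records: (name, role).
records = [("Ana", "staff translator"), ("Ben", "freelance interpreter")]
key = pseudonymize(name for name, _ in records)
coded = [(key[name], role) for name, role in records]
```

Only the coded records would then circulate during analysis and dissemination, while the key stays under the researcher's control.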
Maintaining ethical practices throughout the research process, from data collection to analysis and dissemination of findings, is important not only from a philosophical standpoint but also from a practical perspective. If researchers wish to investigate aspects of the language industry and engage industry professionals in participant-oriented or process-oriented studies, then researchers must instil trust and display academic integrity to assuage (perhaps unfounded) concerns related to these academic endeavours. Furthermore, the dissemination of research to study participants not only complies with many IRB regulations but also helps develop buy-in for participation in future work. Research is likely to be of mutual benefit to scholars and practitioners, and finding ways to engage the language industry community at all stages of research can only help improve research endeavours.


5.  Research challenges

Research involving the language industry is not without its challenges. As noted previously, the language industry lacks a common definition, making it difficult to identify which tasks, processes or activities have been or ought to be included in studies. National economic designations or organizational definitions (e.g. GALA and ELIA) may serve as a starting point; however, their simple adoption may not be wholly satisfactory. To help evaluate research in the field, researchers should report how the language industry has been defined in their particular study.

Other features complicate the ability to identify and measure the language industry. For instance, the often-cited dichotomy of independent subcontractors and in-house language services remains part of the discussion surrounding industry research. Moreover, newswires, data aggregators and media companies such as Bloomberg often handle portions of their translation and localization work in-house, which requires reflection on how these companies that are not considered language service providers figure into the language industry landscape. With translation and interpreting occurring in a range of companies and organizations, it becomes increasingly difficult to aggregate language services into a single industry and measure its scope and size.

This issue is further compounded by the nature of academic research, with investigators in different areas being housed in various academic units or schools. As the present volume can attest, the scope of inquiry is considerable and expanding; in many cases, specialization in industry subareas is needed to conduct research. The various methodological approaches, and their associated challenges, make broad application to the industry as a whole a problematic endeavour. As O’Brien (2013) notes, translation and interpreting studies have relied on borrowing from outside the discipline to explore cognitive translation processes.
This practice can be observed in language industry research beyond translation process research as well. A notable example of such collaboration has been the importation of machine learning algorithms and data science techniques to improve machine translation and speech-to-speech translation. Consequently, researchers working in the field should remain mindful of its fragmentary nature and seek ways to adapt and develop appropriate research methodologies and partnerships.

Data access is another obstacle faced by researchers when conducting research on the language industry. Access to informative data sources is critical to obtain meaningful and generalizable results. However, specific language


industry contexts, such as highly regulated fields or sensitive environments, may hinder researchers’ attempts to access specific populations working in these areas. To provide an example, research on medical interpreting often requires ethical committee approvals to help safeguard patient information that is legally protected, and researchers must carefully outline how data will be collected and managed to ensure confidentiality. Proprietary data access may also be heavily restricted by companies or agencies, given the risk involved should any of these data be revealed. Moreover, legal and regulatory restrictions may limit access to or use of specific information. For instance, the General Data Protection Regulation (GDPR 2016), which took effect on 25 May 2018 in the European Union (EU), entitles individuals to specific rights regarding personal data collected and stored by companies or entities subject to its provisions. Scholars have begun to reflect on the considerable impact that these stipulations have on research and the implications for data security and consent (e.g. Politou, Alepis and Patsakis 2018; Mourby et al. 2018). To mitigate potential challenges surrounding data management, researchers ought to be mindful of national and supranational legal requirements regarding the collection, storage and use of research data when developing their studies. Studies on the scope and size of the industry face additional data access challenges since not all companies are required to report financial data in the same way from country to country, if at all.
For example, studies on translators may vary based on language pairs, directionality or the work environment of the participants. Since these participant pools are limited in size, exact replication of a study can be expensive or difficult. As such, comparisons of similar studies must take into account these differences. In survey research, this issue is further compounded when studies operationalize concepts differently. The design and validation of survey instruments are important contributions for advancing the field and helping build consensus among researchers regarding theories, definitions and data collection. Despite the challenges to language industry research, there are a number of potential solutions available to overcome these obstacles. Perhaps most notable are the benefits inherent in research collaborations between academics and industry stakeholders. These cross-sector partnerships can provide researchers with greater access to participants and data, while providing


rigorous scholarship to improve industry practices. In many instances, researchers cite motivations such as improving translator and interpreter performance, refining industry standards and enhancing technologies or systems. To achieve these goals, researchers can partner with industry stakeholders to directly inform the parties who would benefit from such research. With these partnerships, researchers ought to be mindful of the potential for conflicts of interest arising as a result of access to proprietary information. Moreover, researchers should also be aware of their position as observers and refrain from imposing outside values or opinions while documenting behaviours or ideas. Should these conflicts of interest arise, they ought to be divulged when reporting results in order to maintain academic integrity throughout the research project.

Furthermore, scholars working on language industry research may find it useful to disseminate research beyond the academic community in an effort to foster greater engagement of industry and the professional community with universities. By establishing a working relationship with the participants who are involved in research studies, researchers can develop close ties and build trust among industry stakeholders in order to gain greater access to data, participants and materials. Moreover, international organizations such as AIIC, GALA and ELIA have academic agreements and forums that provide additional avenues to collaborate and share research by means of workshops, trainings and symposia. These partnerships can also help inform industry-oriented pedagogy (e.g. Greere 2012; see also Kearns 2008). Scholarship across multiple sectors would benefit from this type of engagement, with interdisciplinary teams often being better equipped to address the multifaceted nature of industry research.

Scholarship involving the language industry needs greater methodological reflection and development.
As noted above, the field needs additional instrument creation and validation to improve the comparability of results and the refinement of constructs in the field. This methodological work is a service to the discipline and, when conducted following best practices, mirrors scholarship in adjacent disciplines. In a similar vein, full and partial replication studies are needed to improve the overall robustness of findings that have been published to date, in order to lend credence to previous findings or to call for additional research on similar topics. Greater recognition of the importance of replication studies and the ability to publish non-significant results may require a shift in perspective on the part of the larger translation and interpreting studies community.

The various challenges in this section also represent opportunities for future directions in research. For instance, social and digital media are two data


sources that remain largely untapped with respect to the language industry. These data streams may serve as potential indicators of market performance, job satisfaction or brand awareness and visibility. Moreover, research involving newer data sources and techniques could be compared to previous results in order to corroborate or refute findings. The increased prevalence of big data analytical techniques drawn from data science might also allow language industry researchers to examine user behaviour or to leverage predictive analytics to understand how language resources are used. Data visualization tools are another means by which researchers could map networks among industry partners and professionals. While not exhaustive, these opportunities represent some of the many ways that researchers might address challenges in language industry research.
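One simple starting point for the network mapping mentioned above is to build a collaboration graph from project records and inspect node degree. The organizations and projects below are invented; a real analysis would typically hand such a graph to a visualization tool:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical project records: organizations that collaborated on each project.
projects = [
    ("LSP-A", "University-X", "ToolVendor-1"),
    ("LSP-A", "LSP-B"),
    ("University-X", "LSP-B", "ToolVendor-1"),
]

# Undirected collaboration network stored as an adjacency set.
graph = defaultdict(set)
for members in projects:
    for a, b in combinations(members, 2):
        graph[a].add(b)
        graph[b].add(a)

# Degree indicates how connected each partner is within the network.
degree = {node: len(neighbours) for node, neighbours in graph.items()}
```

Even this small adjacency structure exports directly to common graph formats, so the mapped network can then be explored visually or with standard network metrics.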

6.  Concluding remarks

Ultimately, the research methods and questions that are currently employed represent a snapshot of the language industry. As inquiry regarding the language industry continues to evolve, there will likely be a greater number of research projects that cut across the various subareas represented in this volume. In addition to the previously mentioned interdisciplinary studies, one could add work by O’Hagan and Mangiron (2013), who describe the intersection of accessibility studies and game localization. Moreover, the contributors to Jacko’s (2013) edited collection on human–computer interaction represent the considerable range of methodological possibilities – which span the fields of psychology, information systems, computer science, engineering and kinetics – that can be applied to technological aspects of the language industry. In sum, the research questions and methods presented here, along with the various areas of inquiry covered in this volume, provide an overview of the current situation. Researchers will need to continue reflecting on these ideas as the field matures and evolves.

Note

1 The NAICS codes are specific to North America and are available here: www.census.gov/eos/www/naics/. Other classifications exist, such as the International Standard Industrial Classification; however, ISIC 7490 encompasses a range of professional, scientific and technical activities that extend beyond T&I activities.


References

Alvstad, C., A. Hild and E. Tiselius, eds (2011), Methods and Strategies of Process Research: Integrative Approaches to Translation Studies, Amsterdam: John Benjamins.
Angelelli, C. V. (2004a), Revisiting the Interpreter’s Role: A Study of Conference, Court and Medical Interpreters in Canada, Mexico, and the United States, Amsterdam: John Benjamins.
Angelelli, C. V. (2004b), Medical Interpreting and Cross-Cultural Communication, New York: Cambridge University Press.
Angelelli, C. V. and B. J. Baer, eds (2016), Researching Translation and Interpreting, New York: Routledge.
Asare, E. K. (2011), An Ethnographic Study of the Use of Translation Tools in a Translation Agency: Implications for Translation Tool Design, Doctoral diss., Kent State University.
Baraldi, C. and C. D. Mellinger (2016), ‘Observations’, in C. Angelelli and B. J. Baer (eds), Researching Translation and Interpreting, 257–68, New York: Routledge.
Berk-Seligson, S. (1990/2002), The Bilingual Courtroom: Court Interpreters in the Judicial Process, Chicago: University of Chicago Press.
Borgman, C. L. (2016), Big Data, Little Data, No Data, Cambridge, MA: MIT Press.
Bowker, L. and T. Delsey (2016), ‘Information Science, Terminology and Translation Studies: Adaptation, Collaboration, Integration’, in Y. Gambier and L. van Doorslaer (eds), Border Crossings: Translation Studies and Other Disciplines, 73–96, Amsterdam: John Benjamins.
Cairns, P. (2016), ‘Experimental Methods in Human-Computer Interaction’, in M. Soedergaard and R. F. Dam (eds), The Encyclopedia of Human-Computer Interaction, 2nd edn. Available online: https://www.interaction-design.org/literature/book/the-encyclopedia-of-human-computer-interaction-2nd-ed/experimental-methods-in-human-computer-interaction (accessed 7 September 2017).
Carl, M., S. Bangalore and M. Schaeffer (2016), ‘Computational Linguistics and Translation Studies: Methods and Models’, in Y. Gambier and L. van Doorslaer (eds), Border Crossings: Translation Studies and Other Disciplines, 225–44, Amsterdam: John Benjamins.
Christoffels, I. K. and A. M. B. de Groot (2004), ‘Components of Simultaneous Interpreting: Comparing Interpreting with Shadowing and Paraphrasing’, Bilingualism: Language and Cognition, 7 (3): 227–40.
Di Giovanni, E. and Y. Gambier, eds (2018), Reception Studies and Audiovisual Translation, Amsterdam: John Benjamins.
DIN 2347 (2017), Translation and Interpreting Services – Interpreting Services – Conference Interpreting, Berlin: DIN.
Dong, Y. and R. Cai (2015), ‘Working Memory and Interpreting: A Commentary on Theoretical Models’, in Z. Wen, M. B. Mota and A. McNeill (eds), Working Memory in Second Language Acquisition and Processing, 63–81, Toronto: Multilingual Matters.


Dragsted, B. and I. G. Hansen (2008), ‘Comprehension and Production in Translation: A Pilot Study on Segmentation and the Coordination of Reading and Writing Processes’, in S. Göpferich, A. L. Jakobsen and I. M. Mees (eds), Looking at Eyes: Eye-Tracking Studies of Reading and Translation Processing, 9–29, Copenhagen: Samfundslitteratur.
Dunne, K. J. and E. S. Dunne (2011), ‘Mapping Terra Incognita: Project Management in the Discipline of Translation Studies’, in K. J. Dunne and E. S. Dunne (eds), Translation and Localization Project Management, 1–14, Amsterdam: John Benjamins.
Ehrensberger-Dow, M. (2014), ‘Challenges of Translation Process Research at the Workplace’, MonTI, 1 (1): 355–83.
Ehrensberger-Dow, M. (2017), ‘An Ergonomic Perspective of Translation’, in J. Schwieter and A. Ferreira (eds), The Handbook of Translation and Cognition, 332–49, Malden, MA: Wiley Blackwell.
Ehrensberger-Dow, M. and A. Hunziker Heeb (2016), ‘Investigating the Ergonomics of the Technologized Translation Workplace’, in R. Muñoz Martín (ed.), Reembedding Translation Process Research, 69–88, Amsterdam: John Benjamins.
Gambier, Y. and L. van Doorslaer, eds (2016), Border Crossings: Translation Studies and Other Disciplines, Amsterdam: John Benjamins.
GDPR (2016), Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016R0679
Gerver, D. (1976), ‘Empirical Studies of Simultaneous Interpretation: A Review and a Model’, in R. W. Brislin (ed.), Translation: Applications and Research, 165–207, New York: Gardner Press.
Gile, D. (2016), ‘Experimental Research’, in C. Angelelli and B. J. Baer (eds), Researching Translation and Interpreting, 220–8, New York: Routledge.
Glaser, B. G. and A. L. Strauss (1967), The Discovery of Grounded Theory: Strategies for Qualitative Research, Chicago: Aldine.
Greere, A. (2012), ‘The Standard EN 15038: Is there a Washback Effect on Translation Education?’, in S. Hubscher-Davidson and M. Borodo (eds), Global Trends in Translator and Interpreter Training: Mediation and Culture, 45–66, New York: Bloomsbury.
Harvey, C. M., R. J. Koubek, A. Darisipudi and L. Rothrock (2011), ‘Cognitive Ergonomics’, in K.-P. L. Vu and R. W. Proctor (eds), Human Factors and Ergonomics: Handbook of Human Factors in Web Design, 2nd edn, 85–105, London: CRC Press.
Hyman, P. (2014), ‘Speech-to-Speech Translations Stutter, but Researchers See Mellifluous Future’, Communications of the ACM, 57 (4): 16–19.
ISO 17100 (2015), Translation Services – Requirements for Translation Services, Geneva: ISO.


Israel, M. (2015), Research Ethics and Integrity for Social Scientists, 2nd edn, Thousand Oaks, CA: SAGE Publications.
Jacko, J. A., ed. (2013), Human Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications, New York: CRC Press.
Jemielity, D. (2018), ‘Translation in Intercultural Business and Economic Environments’, in S.-A. Harding and O. Carbonell Cortés (eds), The Routledge Handbook of Translation and Culture, 533–57, New York: Routledge.
Jensen, A. (1999), ‘Time Pressure in Translation’, Copenhagen Studies in Language, 24: 103–19.
Jiménez-Crespo, M. and N. Singh (2016), ‘International Business, Marketing and Translation Studies: Impacting Research into Web Localization’, in Y. Gambier and L. van Doorslaer (eds), Border Crossings: Translation Studies and Other Disciplines, 245–62, Amsterdam: John Benjamins.
Katan, D. (2009), ‘Occupation or Profession: A Survey of the Translators’ World’, Translation and Interpreting Studies, 4 (2): 187–209.
Kearns, J. (2008), ‘The Academic and the Vocational in Translator Education’, in J. Kearns (ed.), Translator and Interpreter Training: Issues, Methods, and Debates, 184–214, London: Continuum.
Köpke, B. and T. M. Signorelli (2012), ‘Methodological Aspects of Working Memory Assessment in Simultaneous Interpreters’, International Journal of Bilingualism, 16 (2): 183–97.
Koskinen, K. (2008), Translating Institutions: An Ethnographic Study of EU Translation, New York: Routledge.
Krippendorff, K. (2004), ‘Reliability in Content Analysis: Some Common Misconceptions and Recommendations’, Human Communication Research, 30 (3): 411–33.
Krippendorff, K. (2012), Content Analysis: An Introduction to Its Methodology, 3rd edn, Thousand Oaks, CA: SAGE Publications.
Latour, B. (2005), Reassembling the Social: An Introduction to Actor-Network Theory, New York: Oxford University Press.
Leblanc, M. (2013), ‘Translators on Translation Memory (TM): Results of an Ethnographic Study in Three Translation Services and Agencies’, Translation & Interpreting: The International Journal of Translation and Interpreting Research, 5 (2): 1–13.
Lee, J. (2017), ‘Professional Interpreters’ Job Satisfaction and Relevant Factors: A Case Study of Trained Interpreters in South Korea’, Translation and Interpreting Studies, 12 (3): 427–48.
Libovický, J., T. Brovelli Meyer and B. Cartoni (2018), ‘Machine Translation Evaluation beyond the Sentence Level’, in J. A. Pérez-Ortiz, F. Sánchez-Martínez, M. Esplà-Gomis, M. Popović, C. Rico, A. Martins, J. Van den Bogaert and M. L. Forcada (eds), Proceedings of the 21st Annual Conference of the European Association for Machine Translation, 179–88, Alacant. Available online: http://eamt2018.dlsi.ua.es/proceedings-eamt2018.pdf


Liu, M., D. L. Schallert and P. J. Carroll (2004), ‘Working Memory and Expertise in Simultaneous Interpreting’, Interpreting, 6 (1): 19–42.
Malamatidou, S. (2018), Corpus Triangulation: Combining Data and Methods in Corpus-based Translation Research, New York: Routledge.
Marín García, A. (2017), ‘Theoretical Hedging: The Scope of Knowledge in Translation Process Research’, Doctoral diss., Kent State University, Kent, OH.
Mason, M. (2008), Courtroom Interpreting, Lanham, MD: University Press of America.
McCroskey, J. C. (1982), An Introduction to Rhetorical Communication, 4th edn, Englewood Cliffs, NJ: Prentice-Hall.
Mellinger, C. D. (2015), ‘On the Applicability of Internet-mediated Research Methods to Investigate Translators’ Cognitive Behaviour’, Translation & Interpreting, 7 (1): 59–71.
Mellinger, C. D. and T. A. Hanson (2017), Quantitative Research Methods in Translation and Interpreting Studies, New York: Routledge.
Mellinger, C. D. and T. A. Hanson (2018), ‘Interpreter Traits and the Relationship with Technology and Visibility’, Translation and Interpreting Studies, 13 (3): 366–92.
Mourby, M., E. Mackey, M. Elliot, H. Gowans, S. E. Wallace, J. Bell, H. Smith, S. Aidinlis and J. Kaye (2018), ‘Are “pseudonymised” Data Always Personal Data? Implications of the GDPR for Administrative Data Research in the UK’, Computer Law & Security Review, 34 (2): 222–33.
Muñoz Martín, R. (2016), ‘Of Minds and Men – Computers and Translators’, Poznań Studies in Contemporary Linguistics, 52 (2): 351–81.
Oakes, M. P. and M. Ji (2012), Quantitative Methods in Corpus-based Translation Studies, Amsterdam: John Benjamins.
O’Brien, S. (2010), ‘Eye-tracking in Translation Process Research: Methodological Challenges and Solutions’, Copenhagen Studies in Language, 38: 251–66.
O’Brien, S. (2013), ‘The Borrowers: Researching the Cognitive Aspects of Translation’, Target, 25 (1): 5–17.
O’Hagan, M. and C. Mangiron (2013), Game Localization: Translating for the Global Digital Entertainment Industry, Amsterdam: John Benjamins.
Ozolins, U. (2007), ‘The Interpreter’s “Third Client”: Interpreters, Professionalism and Interpreting Agencies’, in C. Wadensjö, B. Englund Dimitrova and A.-L. Nilsson (eds), The Critical Link 4: Professionalisation of Interpreting in the Community, 121–31, Amsterdam: John Benjamins.
Panter, A. T. and S. K. Sterba, eds (2011), Handbook of Ethics in Quantitative Methodology, New York: Routledge.
Pokorn, N. (2017), ‘“There is Always some Spatial Limitation”: Spatial Positioning and Seating Arrangement in Healthcare Interpreting’, Translation and Interpreting Studies, 12 (3): 383–404.
Politou, E., E. Alepis and C. Patsakis (2018), ‘Forgetting Personal Data and Revoking Consent under the GDPR: Challenges and Proposed Solutions’, Journal of Cybersecurity, 4 (1): 1–20.


Pym, A., F. Grin, C. Sfreddo and A. L. J. Chan (2011), The Status of the Translation Profession in the European Union, London: Anthem Press.
Risku, H., T. Pichler and V. Wieser (2017), ‘Transcreation as a Translation Service: Process Requirements and Client Expectations’, Across Languages and Cultures, 18 (1): 53–77.
Rodríguez Vázquez, S. and S. O’Brien (2017), ‘Bringing Accessibility into the Multilingual Web Production Chain: Perceptions from the Localization Industry’, in M. Antona and C. Stephanidis (eds), UAHCI 2017: Universal Access in Human–Computer Interaction. Design and Development Approaches and Methods, 238–57, Cham: Springer.
Rojo López, A. and M. Ramos Caro (2016), ‘Can Emotion Stir Translation Skill? Defining the Impact of Positive and Negative Emotions on Translation Performance’, in R. Muñoz Martín (ed.), Reembedding Translation Process Research, 107–29, Amsterdam: John Benjamins.
Sager, J. (1994), Language Engineering and Translation: Consequences of Automation, Amsterdam: John Benjamins.
Saldanha, G. and S. O’Brien (2013), Research Methodologies in Translation Studies, New York: Routledge.
Shreve, G. M. and E. Angelone (2010), ‘Translation and Cognition: Recent Developments’, in G. M. Shreve and E. Angelone (eds), Translation and Cognition, 1–13, Amsterdam: John Benjamins.
Steiner, E. (2012), ‘Methodological Cross-fertilization: Empirical Methodologies in (Computational) Linguistics and Translation Studies’, Translation: Corpora, Computation, Cognition, 2 (1): 3–21.
Stevens, S. (1946), ‘On the Theory of Scales of Measurement’, Science, 103 (2684): 677–80.
Wadensjö, C. (1998), Interpreting as Interaction, London: Longman.
Wooldridge, J. M. (2015), Introductory Econometrics: A Modern Approach, 6th edn, Boston: Cengage.
Wurm, S. and J. Napier (2017), ‘Rebalancing Power: Participatory Research Methods in Interpreting Studies’, Translation & Interpreting, 9 (1): 102–20.
Zwischenberger, C. (2017), ‘Professional Self-perception of the Social Role of Conference Interpreters’, in M. Biagini, M. S. Boyd and C. Monacelli (eds), The Changing Role of the Interpreter: Contextualising Norms, Ethics and Quality Standards, 52–73, New York: Routledge.


3

Researching workplaces

Hanna Risku, Regina Rogl and Jelena Milošević

1. Introduction

Translation and interpreting are increasingly being studied from what Chesterman (2009) suggests could be called the ‘translator studies’ perspective, a research area that focuses not so much on the translation/interpreting product as on the person of the translator or interpreter. Already in the 1980s, translators and interpreters were conceptualized as experts in cross-cultural communication (Snell-Hornby 1988) and translational action (Holz-Mänttäri 1984; Vermeer 1989). Sociological approaches to translation studies have dealt with translator/interpreter agency, their professional status and occupational conditions, and the social networks and power relations in which they are engaged. Cognitive approaches have tried to pin down what goes on in the minds of translators/interpreters during the translation/interpreting process: how they make decisions, deal with problems and structure their work, how they use tools and how affective factors or their personalities influence their work.

Analysing how translators and interpreters work – and, more so, drawing not merely on prototypical or ideal work situations but describing them based on findings from authentic work situations – is not yet a long-established research tradition. However, there is a growing body of research – mostly in sociological and cognitive translation/interpreting research – that could be subsumed under the heading of translation and interpreting workplace research. This field of research takes the workplaces of translators, interpreters and other language service providers as analytical units and endeavours to retrace their work-related activities, interactions and working conditions as carried out and experienced in day-to-day professional practice. Current key areas in translation and interpreting workplace research include, for example, work organization and routines; cooperation and social dynamics; cognitive, organizational and physical ergonomics; and the use and implications of (collaborative) technology in the workplace.

In the following, we will present a number of different theoretical frameworks that provide a basis and motivation for the study of translation, interpreting and other related processes in the workplace. We will then discuss some of the methodological challenges of carrying out workplace research in our field, identify current areas of application and research, look at possible future trends and describe our view on the potential for research and industry cooperation in this area.

2.  Focal points in research

2.1.  Theoretical frameworks for workplace research

The research traditions in which translation and interpreting (T/I) have been studied directly at the workplace primarily include cognitive, sociological and ergonomic T/I research.

2.1.1.  Cognitive approaches

From a cognitive perspective, the motivation for workplace research stems from approaches that centre on the concept of situatedness. These emphasize interaction with the social and physical environment as an inextricable constituent of the systemic unit of cognition. Several research traditions that share this embodied, embedded, extended and enacted (4E) view of cognition have developed in cognitive science over recent decades. However, since we do not have space here to discuss them all, we will restrict ourselves to providing an overview of some of their central ideas regarding workplace research.

One of the approaches in the 4E cognition framework is Hutchins’s (2010) cognitive ecology theory. Hutchins observes a shift within cognitive science from analysing the properties of the elements of cognitive systems towards studying the dynamic patterns of interactions. Furthermore, rather than being studied as a logical process, cognition is being analysed as a biological phenomenon. These shifts move the attention of cognitive science towards cognitive ecosystems (Hutchins 2010) as the assembly of brains, bodies and environmental elements that interact to enable viable action. In T/I research, this view implies that the cognitive patterns and regularities observed in a laboratory or classroom setting might differ in important ways from those that become visible in workplace research. As noted by Ehrensberger-Dow (2014), the interactions of translators and interpreters with relevant social contacts, technological aids, normative constraints and other work aspects form integral parts of the cognitive processes at the T/I workplace. As Hutchins (2010: 707) explains, the cognitive ecology approach brings ‘culture, context, history and affect’ back into the scope of cognitive research, aspects which were explicitly excluded in the information processing view of cognition that prevailed in the 1980s (Gardner 1987). For a cognitive account of T/I, these aspects do indeed seem very relevant. At the very least, it is inspiring and productive to think of the T/I workplace as an ecosystem in which translators and other relevant actors cope with the challenges of their work and in which they have to ‘survive’ as professionals.

Another example in the 4E research tradition is the extended cognition approach developed by Clark and Chalmers (1998), which proposes not only that elements external to the brain influence cognition but, moreover, that the active linkage between the brain and the features of the environments it is coupled to during cognition actually constitutes the cognitive processing. In this respect, there is a clear similarity between the extended cognition and cognitive ecology approaches: the coupled system does the thinking and reasoning – it is the active cognitive unit per se. The cognitive abilities of the system change, diminish or break down if a relevant environmental element is removed from the process, just as they do if we remove part of the brain (Menary 2010: 3). Bearing in mind the intensive technological mediation of translation (e.g.
through translation memories, text processing software, web research tools and project management tools), this notion of a coupled system would seem to apply well to translation as a work process: many parts of the decision-making, problem solving, remembering and coordinating involved in translation would be very different without access to these external components (O’Brien 2012). By shifting major parts of the cognitive process to bodily movements, interaction with artefacts and the spatial organization of the workplace, translators can reconfigure their cognitive space (Risku 2014: 349). This externalization – or, rather, ‘interactionalization’ – of cognitive processing seems to correspond to a human quest to shift cognitive processing load to external components and thus free up cognitive capacities, especially in complex cognitive activities. Such offloading of supposedly internal processing like memory and decision-making happens, for example, when a translator writes the chosen target term on a piece of paper or updates her terminology database in case the term is needed again, or when she writes a comment for the client so that the final decision can be made with/by them. Keeping track of the status of a large translation project is another example of a cognitive challenge that is met by the use of external tools like paper checklists and project management software.

The extended cognition approach hence proposes that it is the functionality of a process within a cognitive system, rather than its location (inside/outside the brain), that defines whether it is cognitive (Menary 2010: 6). This approach has inspired one of its founders, Andy Clark, to further explore the ‘supersizing’ (2008) of the minds of humans as ‘natural-born cyborgs’ (2003) coupled to the available environmental resources. We think that these concepts provide worthwhile avenues for further research on the cognitive basis of translation.

2.1.2.  Sociological approaches

The study of translation workplaces and work practices has also been approached from different sociological perspectives. In this context, frameworks borrowed from the sociology of work and industry or from organizational studies might seem the obvious choice, since work practices form their core object of study. However, only a few researchers have actually based their work on such theories. These include Kuznik and Verd (2010), with their application of a sociological model of workload, and Kuznik (2016), with her framework that combines the sociology of work and organizational ergonomics. The reason for this might lie in the traditionally stronger focus in translation studies on conceptualizing translation, interpreting and related practices more as (non-)professional practices and less as work (for a sociology of professions-oriented approach to translation, see Monzó 2006; Tyulenev 2015). Although a number of studies (examples of which are described below) on the professional status and occupational conditions of translators and interpreters do not directly consider their workplaces, they nonetheless provide essential macro-level framing that helps us to obtain a broader sense of the socioeconomic embeddedness of T/I work and workplaces.

In sociologically oriented translation research, translation is also approached using actor-network theory (Buzelin 2005, 2007; Abdallah 2014) or Bourdieu’s habitus/field theory (Vorderobermeier 2013, 2014; Hanna 2016). Bourdieusian theories are seldom used to explore and conceptualize actual T/I workplaces, probably because their prior foci have been on T/I habitus, forms of capital and the social field of action as the central elements of analysis. Depending on the level of analysis selected for a specific study, actor-network theory can, in contrast, serve as a valuable theoretical–methodological tool for studying the development of sociality within networks ‘bottom-up’ by following the interactions of actors in their respective environments. The underlying assumption is that social categories are not defined ‘top-down’ by pre-existing structures, such as class and social identity, but develop interactively and performatively as actors become connected to other actors and thus form dynamic networks.

Actor-network theory (Latour 1987, 2005) shares some common features with the cognitive scientific approaches discussed above. The network units analysed are heterogeneous assemblies of human and non-human agents, including artefacts such as physical and digital aids. A symmetric relation is assumed between the elements of the system, so that not only humans but also other kinds of network units can assume agency in the dynamics of the system. It is the active relationships between the components, not their internal features, that are viewed as constitutive of the network. The system is in a constant state of change, and active relations have to be constructed and reconstructed, otherwise the unity of the system collapses. Actor-network theory is especially apt for analysing the reciprocal development of (cultural and technological) artefacts and social factors in the T/I context: the development and use of technology is influenced by society and, in turn, also itself influences society. Actor-network theory can thus help to explain how translation or interpreting as a process, product, service and industry is influenced by cognitive, technological and social factors and vice versa.

Another sociological framework that can prove useful for T/I workplace research is practice theory. First applied in this area by Olohan (2017) in her study of an in-house translation department, practice theory serves to explore translation work as a set of interwoven practices and knowings. Translation practice, in this framework, is understood and analysed as embedded in specific spatial and temporal situations and as materially and discursively mediated practice.

2.1.3.  Ergonomic approaches

With two conferences at Stendhal University in Grenoble (in 2011 and 2015) and the corresponding special editions of the journal ILCEA (Lavault-Olléon 2011b, 2016), translation studies scholars have also turned to ergonomic theories, concepts and methods and have since endeavoured to incorporate these into translation workplace research. The goal of such ergonomically oriented research is to put the people – their particular needs, the challenges they face and their well-being – back at the centre of work-related research (Lavault-Olléon 2011a: 6). In contrast to many of the above-mentioned frameworks, ergonomics studies explicitly attempt to provide an applied perspective, thus not only serving critical analysis but also offering best-practice insights that might flow back into the object of study and improve on the observed praxis (Kuznik 2016: 2–3). This is achieved in a holistic approach that studies the people themselves and their individual work tools, routines and environments, then analyses the relationships between them from three different yet interdependent angles, namely a social or organizational, a cognitive and a physiological perspective (Lavault-Olléon 2011a: 6; see also Ehrensberger-Dow and Hunziker Heeb 2016). The observed activities are studied within their local frameworks of interaction and specific (material) environments (Lavault-Olléon 2011a: 7). Human–computer interaction (HCI) is, for instance, seen in part as a subfield of cognitive ergonomics and can provide a framework that is particularly useful for the study of highly technologized workplaces (see O’Brien 2012 for an introduction to translation as HCI).

The above-mentioned theoretical frameworks and their quest to study authentic work situations and processes pose specific challenges when it comes to research designs and thus continue to require methodological innovation in T/I research. Whereas methods like the analysis of texts and corpora, interview/survey studies and lab/classroom experiments are established approaches in T/I research methodology, research at the workplace itself is still in its infancy. In the next section, we will look briefly at the range of methods used in T/I workplace research and discuss the most relevant methodological issues it faces (see also Risku, Rogl and Milošević 2017b).

2.2.  Methodological developments

The data collection methods used in workplace research depend – as always – on the particular topic of interest. Specific to workplace research is the fact that the practice under study is seen as ‘lived work’ (Button and Harper 1996: 272; see also Bergmann 2005; Clancey 2006) that has to adapt to the contingencies of a specific situation. To obtain differentiated insights into a specific instance of lived work, researchers endeavour to identify, analyse and, where appropriate, contrast the following different yet intertwined perspectives: (1) normative descriptions of a specific work reality (e.g. explicit process instructions, quality management guidelines and work specifications), (2) the workplace praxis observed and (3) the interpretations or rationalizations of this praxis by the various individuals or social groups involved. In order to do justice to these three perspectives and to grasp particular work realities, workplace research – and especially workplace studies – is thus often conducted using an ethnographic approach. This frequently includes the use of observational methods (e.g. participant observation with the observers present and involved in the situation observed, or video recording), which can be complemented with a variety of additional data acquisition methods, such as interviews, questionnaires or the analysis of documents (e.g. organizational documentation), written communication or artefacts (see Knoblauch 2000; Knoblauch and Heath 2006).

In T/I workplace research, there is a growing body of literature that is based on ethnographically oriented research designs (Flynn 2010). A closer look at examples of such designs (see next section) shows that the methods chosen inevitably influence the degree of involvement of the researcher(s) in the field, thus resulting in an ‘etic’ (outsider) or ‘emic’ (insider) perspective. While some researchers opt for interview-based studies (e.g. Flynn 2004; Abdallah 2011), others try to base their research designs primarily on observational methods (e.g. participant observation, video/audio recording), also in combination with interviewing and/or artefact analysis (e.g. Buzelin 2006; Kinnunen 2010; Dong and Turner 2016; Risku 2016; Olohan and Davitti 2017; Risku, Milošević and Rogl 2017a), each taking a slightly different kind of (non-)participant stance. The highest level of personal involvement in a work reality has been attained in a small but growing set of autoethnographic studies in which the researchers are totally immersed in the investigation process in their capacity both as researchers and as research objects (e.g. Hurd 2010; Hokkanen 2016).

Generally, all this research depends heavily on what ethnography refers to as ‘access to the field’. To be able to implement such studies, researchers must first be able to approach the respective workplaces and obtain permission to study them.
Accordingly, the first hurdle is to find individuals, organizations, companies and/or institutions who are willing to participate in workplace studies and who are also relevant to the actual research. Time and effort may well need to be put into communicating the research goals and gaining the trust of the target participants in order to gain access to authentic work situations (see Risku 2017). Even when this has been achieved, there still might be parts of the work reality in an organization or a team that the researcher(s) cannot or will not be granted access to (because, for example, the observation periods take place only at certain times of the day, the observers are excluded from specific meetings or are only permitted to observe certain internal forms of communication, some members of the team work at home or are located abroad, etc.). Our current knowledge of T/I workplaces is, of course, not based exclusively on insights obtained through ethnographic studies. Indeed, these usually study

44

The Bloomsbury Companion to Language Industry Studies

individual – in most cases, clearly and more narrowly defined – work realities and strive more for maximum richness and depth of understanding of a specific socially embedded situation than for – in this case methodologically unattainable – generalizability. Accordingly, translation workplace scholars also use a variety of other qualitative and quantitative methods to obtain data on a wide range of different analysis levels, with varying degrees of structure and formalization and with different levels of specificity – from broad to totally situation-specific (such as triangulation, see, for example, Flick 2011, or mixed-methods approaches, see Kuckartz 2014). For instance, the insights gained from various survey-based studies in different areas of the field (e.g. Angelelli 2004; Berber-Irabien 2010; Gaspari, Almaghout and Doherty 2015; Angelone and Marín García 2017; for a review of survey research among conference interpreters, see Pöchhacker 2009) are quantifiable at least to some extent. Large-scale surveys like those frequently carried out by institutions or industry associations (see Section 2.3) can likewise serve well to supplement more in-depth, micro-level studies. To investigate workplaces and workplace-related issues, researchers also draw on and adapt methodological frameworks more typically used in experiment-based process research (e.g. keystroke logging, screen recording or eye-tracking; see Teixeira and O’Brien 2017 for a study combining these three methods). These research designs combine authentic workplaces, tools and processes with prescribed work assignments to translators/interpreters. This ensures greater controllability and comparability of the data, while still allowing important conclusions to be drawn regarding the work practices of translators/interpreters or the factors that influence them. These methods have also been used in combination with observational methods (e.g. Ehrensberger-Dow 2014; Ehrensberger-Dow and Massey 2017). 
The use of methods that require a specific technology does, however, raise its own challenges, such as compatibility and interoperability with the technological equipment used by practitioners in their daily work practice (see Ehrensberger-Dow 2014). Future research designs could seek to replicate previous studies under different conditions (e.g. in different countries or markets, fields of application, organizational settings and employment statuses) in order to permit a comparison between different translator work realities or local/global trends. A look at the workplace-related studies that have already been carried out reveals a clear scarcity of longitudinal studies that revisit specific workplaces over a multi-year timeframe in order to grasp organizational, social and technological changes within organizations and teams and thus be able to retrace the long-term dynamics in the industry (for one of the very few such long-term field studies,
see Risku’s research on a translation agency with data collection in 2001, 2007 and 2014: Risku et al. 2013; Risku 2016).

2.3.  Areas of application and research objects

Let us now turn to the research objects that scholars have endeavoured to study in translation workplace research to date. We would, however, like to point out in advance that we are only able to offer a brief selection of examples here, which we have structured as far as possible based on the analysis level (from the macro to the micro level, that is, from company, organization and team to individual) used in the studies themselves. The question of what current T/I workplaces look like is difficult to answer without first also considering the settings and parameters under which translation agencies, freelancers and corporate language departments currently operate. Globalization and the network economy are constantly reshaping business practices in the language industry and thus restructuring translation work processes. They bring forth new forms of work and occupation, with digitization speeding up the change in translation work and professions. Several large-scale studies provide insights into both the state of the industry and the changes in operational practices (see, for example, Rinsche and Portera-Zanotti 2009; ELIA, EUATC and GALA 2016, 2017 mostly for the European market) or the development of occupational conditions, certification or industry dynamics (with more specific facts and figures mostly from the European Union (EU), but also some comparative data from the United States and Canada; cf. Pym et al. 2014). They all confirm that translation production networks have increased in complexity over the last decades, mostly due to increasingly decentralized organizational structures that go hand-in-hand with increasing outsourcing (subcontracting) and offshoring of translation services (Gouadec 2007; Rodríguez-Castro 2013: 38). For many translators and interpreters, cooperation with agencies thus becomes their standard model for language service provision. 
These studies also indicate that agencies are becoming increasingly unlikely to employ their own in-house translators, choosing instead to rely on a distributed network of freelance translators. In addition, their staff and management now mostly have professional backgrounds other than translation or interpreting (cf. Kuznik and Verd 2010; Hébert-Malloch 2004). The work of translators is thus embedded in networks that involve more than the frequently depicted dyadic or triadic relationships of translator, client and proofreader. In fact, translator workplaces now encompass a variety of involved actors and organizations – such
as translators, editors, proofreaders, accounting and Information Technology (IT) departments (Risku 2016) – and can best be described as virtually distributed and multi-sited, with agencies and their project managers assuming an increasingly important role as gatekeepers between clients and translators (Risku 2016; Olohan and Davitti 2017; Rodríguez-Castro 2013). Similar observations have been made in the field of interpreting. In their study of public-service interpreting provision in the UK, Dong and Napier (2016) and Dong and Turner (2016) show that ‘as a vital part of contemporary workplace, many agencies do not merely act as neutral go-betweens but exercise various modes of control over the non-standard workforce’ (Dong and Turner 2016: 99). This calls for an extended notion of the interpreter workplace that goes ‘beyond the space where communication-mediation tasks are performed, to where interpreting services are planned, organised and managed’ (Dong and Turner 2016: 119). Agencies assume a central position of power in this structure: they dictate the employment/contract conditions and decide who is included in the pool of freelancers – frequently not on grounds of merit, but for completely different reasons, such as best fit to a given customer, mobility (Dong and Turner 2016) or trainability in company-related matters (Dong and Napier 2016). This leads to a complex relationship between freelancers and agencies as their ‘third clients’, where freelancers might – in a less-than-ideal case – be confronted with inflexible company rules that decrease or even remove their work sovereignty and individual, situation-specific capacity to make decisions. The role of trust (Abdallah 2012; Olohan and Davitti 2017; Alonso 2016) and the importance of organizational communication, information patterns and collective decision-making (Abdallah 2012) in such production networks are thus central research topics in this context. 
In a similar vein, Ehrensberger-Dow and Massey (2017) show how organizational culture and the possibilities for self-determination open to translators/interpreters in their jobs can also be an important factor for organization development and the success of sociotechnical change in organizations. Social dynamics in teams are another important focus in T/I workplace research. Research has shown that translation is essentially a collaborative endeavour. Since most translations are produced in networks of agents who usually do not work in the same place, researchers have addressed the complexity of such teams and the roles that agents can assume in them (Buzelin 2004; Risku 2016; Risku, Rogl and Pein-Weber 2016). As an aspect of translators’ work, creativity, too, can be explored from an interactional perspective, as shown by Risku, Milošević and Rogl (2017a); it is not just a characteristic of individuals, but also shows itself on
the domain, organization and group levels. For the field of literary translation, Kolb (2017) shows how translation processes are always interactional, even when translators work on their own from home, revealing the many hidden voices from a translator’s informal networks, friends and spouses that can play their part in shaping the translation process. Interaction in workplaces has also been addressed in sign language interpreting research. In her study of interpreting in the workplace, Dickinson (2017) demonstrates the multifaceted layers of interaction in communicative events that involve a variety of actors. In contrast, Kinnunen (2013) shows the detrimental effects of a lack of cooperation and shared information in the context of court interpreting in Finland. Large parts of translators’/interpreters’ professional networks are now virtual in nature, serving as essential resources for networking, subject-matter expertise, training, job assignments and terminology (McDonough 2007; Mihalache 2008; Risku and Dickinson 2009). Translators are also increasingly involved in collaborative virtual teams, especially when working on complex high-volume translation projects (Stoeller 2011). This poses new challenges, especially at the level of social interaction: ‘Global virtual teamwork has resulted in new team dynamics and a work environment characterized by a lack of interpersonal relationships, a lack of face-to-face communication, a lack of social events to build trust, and a lack of close supervision, among other factors’ (Rodríguez-Castro 2013: 39–40). Given the rapid changes in the professional – and frequently client- and/or project-specific – demands on translators/interpreters, the calls for greater specialization and breadth and the need for increased technical competence (Massey and Ehrensberger-Dow 2011), many translators/interpreters learn primarily on the job. 
Researchers have thus also looked both at how they acquire the necessary knowledge in and/or through practice and at the content, processes and situational factors that cannot be acquired through training. Angelone and Marín García (2017) explore translator and project manager expectations of expertise in translation, shedding light on how it is conceptualized and fostered from within the language industry. Olohan (2017) points to the collective nature of knowing as it is established in a specific workplace as well as to its embodied, materially and discursively mediated character. Dong and Turner (2016), in turn, demonstrate how ‘procedural knowledge’ is vital for interpreters since it decreases the levels of uncertainty and stress related to an interpreting assignment. They also point out that interpreters often do not receive any information or training on workplace risk assessment or the specifics of the workplace, such as information on social work settings, where the assignments
may take place at clients’ homes or involve mentally unstable or potentially violent clients. Another focus of workplace research is the tools and resources that translators/interpreters actually use. In this context, Risku (2016) shows the range of artefacts (both analogue and digital) used in a translation agency, with a particular focus on changes in technological and corresponding administrative practices over a seven-year observation period (see also Risku et al. 2013). Researchers have also sought to shed light on how translators view technology (Grass 2011; Gough 2011), how they deal with an increasingly technologized working environment and the problems they face in everyday work practice (e.g. Olohan 2011; LeBlanc 2013; Estellés and Monzó Nebot 2015). Pym (2011), Bundgaard, Paulsen Christensen and Schjoldager (2016) and Christensen and Schjoldager (2016) focus in particular on how translators use CAT tools, while LeBlanc (2013), Krüger (2016) and Teixeira and O’Brien (2017) provide insights into translation tools from a usability/ergonomics perspective. Several studies also point out that technological change may entail changes in administrative or work practices, thus profoundly restructuring the workplaces, work processes, communication and collaboration patterns and social/trust dynamics of translators and translation project managers (e.g. Karamanis, Luz and Doherty 2011 for changes related to the introduction of machine translation in an organization; LeBlanc 2017 for changes related to the introduction of CAT tools). Ehrensberger-Dow and Massey (2017) show how the willingness (or lack thereof) to adopt technology and adapt to changes relies heavily on organizational culture and the level of self-determination translators perceive themselves to have. 
Grass (2011) investigates the ergonomics of commercial versus free translation software as used by freelance translators, broadening the scope of this discussion by explaining how the software chosen by translators might affect their collaboration with agencies and clients and thus also their chances of ‘survival’ in a globalized, competitive language industry. In a similar vein, Toudic and Brébisson (2011) opt for an ergonomic perspective on tools in translator workplaces with an additional focus on the impact that different technology options might have on the needs of translators, translation agencies and clients. Several researchers, such as Brunette and O’Brien (2011), O’Brien et al. (2014), Cadwell et al. (2016), Martikainen and Kübler (2016) and Bundgaard (2017), look at new working practices, such as the integration of machine translation and post-editing. Following on from this brief overview of workplace-related research findings at industry, organization and team levels, we will now take a look at research into various factors that influence the individual translator/interpreter workplace.

Early research into interpreting already sought to define a range of parameters that can influence and/or impair the workplaces of interpreters. This focus initially lay primarily on simultaneous interpreting and in particular on the study of the environmental conditions in interpreting booths, such as the lighting (Steve and Siple 1980), CO2 levels (Kurz 1983) or temperature (Kurz and Kolmer 1984). For a long time, the translator as a person was not the primary subject of translation studies research. When looking at topics like work equipment or tools, research generally tended to adopt a basically instrumental perspective. However, there is also a growing body of literature that investigates the conditions translators face in their workplaces from an ergonomic perspective (Lavault-Olléon 2011b, 2016; Ehrensberger-Dow and Hunziker Heeb 2016). Such research looks at topics relating to the people doing the work, for example, the forms the work equipment and tools used by translators need to take to best accommodate the requirements of their work (Teixeira and O’Brien 2017). This includes, for instance, how tools (computers, screens, keyboards and peripherals) should best be designed to prevent (or at least limit) any potentially damaging effects of poor body posture or un-ergonomic tool design (Pineau 2011; Ehrensberger-Dow et al. 2016; Meidert et al. 2016). It also considers how topics like healthy work practices and mindfulness of one’s own well-being can best be incorporated into translator education (Peters-Geiben 2016). Workplace research is not, however, restricted to the physical and spatial conditions at the workplace; it also looks at psychological and emotional well-being. In interpreting research, for instance, scholars already began at an early stage to look at topics like occupational stress and its causes and physiological effects (e.g. Williams 1995; Kurz 1997, 2002, 2003), workload (AIIC 2002) and burnout (Bower 2015). 
This gradually expanded from an initial focus on conference interpreting to other areas, like sign language interpreting (Massmann 1995; McCartney 2006; Schwenke, Ashby and Gnilka 2014), community interpreting (Norström, Fioretos and Gustafson 2012) and remote interpreting (Roziner and Shlesinger 2010), and has also looked at more setting-specific aspects like emotional stress (Valero Garcés 2015), vicarious trauma and secondary traumatic stress (Bontempo and Malcolm 2012). While translation research did not initially look at the stress factor in as much detail as its interpreting counterpart, an increasing amount of the more recent work in this field has considered topics like workload or job satisfaction, for example, as a consequence of translator visibility (Liu 2013) or in connection with personality or trait emotional intelligence (Hubscher-Davidson 2016).

Rodríguez-Castro (2015) has also developed an instrument that can be used to assess ‘translator satisfaction’ in a globalized language industry (see also Rodríguez-Castro 2013).

3.  Informing research through industry

Although there are presumably many more, we would like to mention just four central reasons why workplace research cannot do without close cooperation with industry.

1. Learning from one another: Workplace research is one of those fields of research that inevitably brings academia and (the language) industry together and facilitates a mutual exchange that can benefit both sectors. While it might seem obvious that real workplaces are the best place to study work praxis, the fact that this field of research is still so new in our discipline does, however, suggest that learning from one another has in the past often taken place via less direct routes or, for various reasons, not at all. Unlike in other sectors, the exchange between research and industry in the field of workplace research does not take place at conferences or between experts, but rather in the course of the praxis/research itself. Workplace research enables, and often requires, researchers to become part of the field under study and to analyse translation practices as they are carried out and perceived by the practitioners who are currently involved in them. This bottom-up approach to research allows us to explore a variety of factors that only become visible and accessible for research through coming into contact with practitioners and stepping into the field. Laboratory or classroom experiments, and even surveys, often provide a controlled but rather limited perspective on a set of aspects that researchers have explicitly chosen to study. They do not capture the situated, embedded and embodied nature of translation and interpreting in real-world workplaces. 
Such studies do not yield in-depth insights into the work routines of translators (not just their typical routines but also those which emerge in times of stress or are influenced by situation-specific factors) or their actual use of technology, nor do they reveal how translators are involved in large virtual networks or how teams collaborate and coordinate their
work. Conducting research in and into authentic environments and situations thus complements the current state of knowledge gained through other methodological frameworks. It enables researchers to grasp the intrinsic logic of the work processes and to understand them from the emic, insider perspective (Bergmann 2005).

2. Being authentic: Grounding T/I research in systematic empirical observation of developments in the field is one possible way to legitimate critical analysis and reflection by researchers and perhaps even to allow them to help alter such developments. Workplace research gives voice to the practitioners and to their views on the factors that make up their workplaces, thus making them not merely passive objects of study but active participants. Their views can potentially contradict existing research positions and thus encourage researchers to rethink their views of the work realities of translators and interpreters.

3. Keeping a finger on the pulse: Given the fast pace of the language industry and of standard work practices in the market, it is imperative that researchers have constant access to the current work praxis of translators and interpreters. Long-term studies in particular can contribute to identifying trends and to confirming or refuting them for specific areas or work realities. Risku’s long-term study in a translation agency (data collection in 2001, 2007 and 2014; see Risku et al. 2013 and Risku 2016 for an account of the first two observation periods), for instance, revealed that the evolution of its business and administrative practices and translation processes did not actually follow expected trends or linear developments. 
By conducting studies in an industry setting, that is, at the places where the translation and interpreting activities take place, and by revisiting the field regularly, research gains and maintains an insight into developments that are driven, for instance, by globalization and virtualization and can thus take the increasing diversity and/or fragmentation of translation practice(s) into account.

4. Improving teaching and learning: There are several reasons why industry requirements cannot be directly translated into study programmes. Industry needs are dynamic and volatile, while university programmes prepare students for the future rather than the current employment market. They also have to support the breadth and flexibility of language graduate career paths. This applies especially to training in the use of language and translation technologies: the applications that are currently in use will
probably look quite different in a few years’ time when today’s students graduate. Accordingly, it does not make sense to fill study programmes with low-level user training. Instead, it is more effective to teach processes rather than tools per se, to introduce students to technological possibilities and limitations, to give them some hands-on experience of selected software programs and then to concentrate on the critical discussion of the advantages and disadvantages of the different technological aids. Students need to be made aware in particular of the consequences of current and potential future service provision models in order for them to understand the kinds of positions that are available in the field of transcultural communication and translation. A regular discourse and stronger links between academia and industry could help to promote a mutual understanding of their respective activities and establish mutual trust. In a best-case scenario, this could lead to the joint development of study concepts that benefit both research and industry and satisfy each side’s needs. A good example of such cooperation can be found in Finland, where joint workshops for practitioners and researchers were carried out to develop an interactive database of teaching methods (Kuusi, Rautiainen and Ruokonen 2016) available to translation studies departments at all universities across the country. The teaching units developed – such as ‘handling challenging client relationships’ – help lecturers and researchers to integrate real-world scenarios into their critical didactic and academic work. Workplace research enhances an academic institution’s knowledge of current practices and trends and enables it to take them into account in its teaching programmes, in our case translator training. 
In this way, institutions are better equipped to produce well-educated, well-informed graduates who are able to deal with the challenges they will be confronted with in their future work contexts and thus to support industry in its recruitment processes.

4.  Informing industry through research

The findings gained through workplace studies can benefit industry in many ways. They can serve as a valuable source of information on best practices and thus contribute to process and quality management. Furthermore, by making best practices visible and describing them in detail, workplace research benefits not only practitioners by offering suggestions for improvements but also future
experts, composed of students and people who are interested in the industry, by giving them an insight into praxis and revealing future perspectives that might help them in their choice of career path. Participating in studies that deal with their own workplaces provides translators, interpreters and agencies with a new perspective on their work realities as a result of their conversations and interviews with researchers, and ultimately also the publications that discuss their work practices. This might, in turn, prompt them to reflect differently on their own status quo, processes, networks, and client and employer satisfaction. Including and investigating the actors responsible for different aspects and sub-processes (e.g. clients, reviewers, publishers and project managers) provides translators and interpreters with a more complete picture of the networks that they form part of, yet might not even be aware of, for example, with regard to sub-processes and decisions downstream from the translation or interpreting process. The same applies to the expectations of the other actors involved in the process, which might otherwise only be communicated to them in ‘filtered’ form. Workplace research can have a strong applied dimension, with some research traditions, such as ergonomics research, working in particular towards adapting working processes, tools and environments to better suit the people who use them. Insights into how translators really use their software, what research strategies they apply or how they organize their different tasks throughout the working day might point software developers, translation project managers, translation agency managers or the translators themselves towards important possibilities for improvement – not just in terms of economic efficiency but also in the sense of giving greater consideration to the needs of the people who do the work. 
Of high relevance in this context is research into the physical and psychological stress faced by translators, interpreters and other actors at the workplace. Verified data from empirical studies can contribute to raising awareness of those aspects of specific work realities that can have a damaging impact on health, to providing information on preventive or health-promoting measures and, not least, to ensuring that this knowledge is incorporated into education programmes. Nevertheless, for this to happen, resources will have to be made available by both academia and industry. On the one hand, participants have to allocate time to familiarize themselves with the project and to participate. Researchers, on the other hand, have to adapt to the participants’ availability. Financial, organizational and infrastructure constraints might prove to be further obstacles that have to be overcome.

5.  Concluding remarks

In this chapter, we have described the focal points in workplace research on translation and interpreting. We have discussed its cognitive, sociological and ergonomic theoretical frameworks, research methodology developments and areas of application, and sketched the mutual benefits of workplace research for industry and academia. We have argued that the most appropriate methods for the study of workplaces include ethnographic field observations and interviews that analyse how translators, interpreters and other relevant actors act and interact in specific spaces and with specific artefacts. Since this research tradition has only started to gain ground in translation studies, this description can only really mark the beginning of a new, potentially vast and multifaceted research agenda. Many blank areas on the T/I workplace research map remain to be studied: from the sub-processes that take place before and after translation and interpreting tasks are performed, through the effects of different organizational cultures, technological innovations and occupational health and safety, to the long-term development of co-located and distributed translation networks, to name but a few.

References

Abdallah, K. (2011), ‘Quality Problems in AVT Production Networks: Reconstructing an Actor-network in the Subtitling Industry’, in A. Şerban, A. Matamala and J. M. Lavaur (eds), Audiovisual Translation in Close-Up: Practical and Theoretical Approaches, 173–86, Bern: Peter Lang.
Abdallah, K. (2012), ‘Translators in Production Networks: Reflections on Agency, Quality and Ethics’, PhD diss., University of Eastern Finland, Joensuu.
Abdallah, K. (2014), ‘The Interface between Bourdieu’s Habitus and Latour’s Agency: The Work Trajectories of Two Finnish Translators’, in G. M. Vorderobermeier (ed.), Remapping Habitus in Translation Studies, 111–32, Amsterdam: Rodopi.
AIIC (2002), ‘Interpreter Workload Study – Full Report’. Available online: http://aiic.net/p/657 (accessed 7 July 2017).
Alonso, E. (2016), ‘Conflict, Opacity and Mistrust in the Digital Management of Professional Translation Projects’, Translation & Interpreting, 8 (1): 19–29. Available online: http://www.trans-int.org/index.php/transint/article/view/497 (accessed 12 July 2017).
Angelelli, C. V. (2004), Revisiting the Interpreter’s Role: A Study of Conference, Court, and Medical Interpreters in Canada, Mexico, and the United States, Amsterdam/Philadelphia: John Benjamins.

Angelone, E. and A. Marín García (2017), ‘Expertise Acquisition through Deliberate Practice: Gauging Perceptions and Behaviors of Translators and Project Managers’, Translation Spaces, 6 (1): 123–59.
Berber-Irabien, D. (2010), ‘Information and Communication Technologies in Conference Interpreting’, PhD diss., Universitat Rovira i Virgili, Tarragona.
Bergmann, J. (2005), ‘Studies of Work’, in F. Rauner (ed.), Handbuch Berufsbildungsforschung, 639–46, Bielefeld: W. Bertelsmann.
Bontempo, K. and K. Malcolm (2012), ‘An Ounce of Prevention Is Worth a Pound of Cure: Educating Interpreters about the Risk of Vicarious Trauma in Healthcare Settings’, in L. Swabey and K. Malcolm (eds), In Our Hands: Educating Healthcare Interpreters, 105–30, Washington: Gallaudet University Press.
Bower, K. (2015), ‘Stress and Burnout in Video Relay Service (VRS) Interpreting’, Journal of Interpretation, 24 (1): 1–16.
Brunette, L. and S. O’Brien (2011), ‘Quelle ergonomie pour la pratique postéditrice des textes traduits?’, ILCEA, 14. Available online: http://ilcea.revues.org/1081 (accessed 1 March 2017).
Bundgaard, K. (2017), ‘(Post-)Editing – A Workplace Study of Translator-Computer Interaction at TextMinded Danmark A/S’, PhD diss., Department of Management, University of Aarhus, Aarhus.
Bundgaard, K., T. Paulsen Christensen and A. Schjoldager (2016), ‘Translator-Computer Interaction in Action – An Observational Process Study of Computer-Aided Translation’, Journal of Specialised Translation, 25: 106–30. Available online: http://www.jostrans.org/issue25/art_bundgaard.php (accessed 7 March 2017).
Button, G. and R. Harper (1996), ‘The Relevance of “Work Practice” for Design’, Computer Supported Cooperative Work, 4: 263–80.
Buzelin, H. (2004), ‘La traductologie, l’ethnographie et la production des connaissances’, Meta, 49 (4): 729–46.
Buzelin, H. (2005), ‘Unexpected Allies: How Latour’s Network Theory Could Complement Bourdieusian Analyses in Translation Studies’, The Translator, 11 (2): 193–218.
Buzelin, H. (2006), ‘Independent Publisher in the Network of Translation’, TTR, 19 (1) (Figures du Traducteur/Figures du Traduire I/Figures of Translators/Figures of Translation I): 135–73.
Buzelin, H. (2007), ‘Translations “in the Making”’, in M. Wolf and A. Fukari (eds), Constructing a Sociology of Translation, 135–69, Amsterdam: John Benjamins.
Cadwell, P., S. Castilho, S. O’Brien and L. Mitchell (2016), ‘Human Factors in Machine Translation and Post-Editing among Institutional Translators’, Translation Spaces, 5 (2): 222–43.
Chesterman, A. (2009), ‘The Name and Nature of Translator Studies’, Hermes, 42: 13–22.
Christensen, T. Paulsen and A. Schjoldager (2016), ‘Computer-Aided Translation Tools – the Uptake and Use by Danish Translation Service Providers’, Journal of Specialised Translation, 25: 89–105. Available online: http://www.jostrans.org/issue25/art_christensen.php (accessed 7 March 2017).


Clancey, W. J. (2006), ‘Observation of Work Practices in Natural Settings’, in A. Ericsson, N. Charness, P. Feltovich and R. Hoffmann (eds), Cambridge Handbook of Expertise and Expert Performance, 127–45, New York: Cambridge University Press. Clark, A. (2003), Natural-born Cyborgs: Minds, Technologies, and the Future of Human Intelligence, New York: Oxford University Press. Clark, A. (2008), Supersizing the Mind: Embodiment, Action, and Cognitive Extension, Oxford: Oxford University Press. Clark, A. and D. J. Chalmers (1998), ‘The Extended Mind’, Analysis, 58 (1): 7–19. Dickinson, J. (2017), Signed Language Interpreting in the Workplace, Washington, DC: Gallaudet University Press. Dong, J. and J. Napier (2016), ‘Towards the Construction of Organisational Professionalism in Public Service Interpreting’, CTIS Occasional Papers, 7: 22–42. Available online: http://hummedia.manchester.ac.uk/schools/salc/centres/ctis/publications/occasional-papers/Dong-and-Napier.pdf (accessed 12 July 2017). Dong, J. and G. H. Turner (2016), ‘The Ergonomic Impact of Agencies in the Dynamic System of Interpreting Provision. An Ethnographic Study of Backstage Influences on Interpreter Performance’, Translation Spaces, 5 (1): 97–123. Ehrensberger-Dow, M. (2014), ‘Challenges of Translation Process Research at the Workplace’, MonTI, 7 (2): 355–83. Ehrensberger-Dow, M. and A. Hunziker Heeb (2016), ‘Investigating the Ergonomics of the Technologized Translation Workplace’, in R. Muñoz Martín (ed.), Reembedding Translation Process Research, 69–88, Amsterdam: John Benjamins. Ehrensberger-Dow, M., A. Hunziker Heeb, G. Massey, U. Meidert, S. Neumann and H. Becker (2016), ‘An International Survey of the Ergonomics of Professional Translation’, ILCEA, 27. Available online: http://ilcea.revues.org/4004 (accessed 7 July 2017). Ehrensberger-Dow, M. and G. Massey (2017), ‘Socio-technical Issues in Professional Translation Practice’, Translation Spaces, 6 (1): 104–22.
ELIA, EUATC and GALA (2016), ‘Language Industry Survey – Expectations and Concerns of the European Language Industry 2016’. Available online: https:// ec.europa.eu/info/sites/info/files/2016_survey_en.pdf (accessed 7 July 2017). ELIA, EUATC and GALA (2017), ‘Language Industry Survey – Expectations and Concerns of the European Language Industry 2017’. Available online: https:// ec.europa.eu/info/sites/info/files/2017_language_industry_survey_report_en.pdf (accessed 12 July 2017). Estellés, A. and E. Monzó Nebot (2015), ‘The Catcher in the CAT. Playfulness and Self-Determination in the Use of CAT Tools by Professional Translators’, in AsLing (ed.), Proceedings of the 37th Conference Translating and the Computer, London, UK, November 26–27, 2015, 66–78, Geneva: Editions Tradulex. Flick, U. (2011), Triangulation. Eine Einführung, 3rd edn, Wiesbaden: VS Verlag für Sozialwissenschaften.


Flynn, P. (2004), ‘Skopos Theory: An Ethnographic Enquiry’, Perspectives: Studies in Translatology, 12 (4): 270–85. Flynn, P. (2010), ‘Ethnographic Approaches’, in Y. Gambier and L. van Doorslaer (eds), Handbook of Translation Studies, 116–19, Amsterdam/Philadelphia: John Benjamins. Gardner, H. (1987), The Mind’s New Science, New York: Basic Books. Gaspari, F., H. Almaghout and S. Doherty (2015), ‘A Survey of Machine Translation Competences. Insights for Translation Technology Educators and Practitioners’, Perspectives: Studies in Translatology, 23 (3): 333–58. Gouadec, D. (2007), Translation as a Profession, Amsterdam: John Benjamins. Gough, J. (2011), ‘An Empirical Study of Professional Translators’ Attitudes, Use and Awareness of Web 2.0 Technologies and Implications for the Adoption of Emerging Technologies and Trends’, Linguistica Antverpiensia, 10: 195–217. Grass, T. (2011), ‘“Plus” est-il synonyme de “mieux”? Logiciels commerciaux contre logiciels libres du point de vue de l’ergonomie’, ILCEA, 14. Available online: http://ilcea.revues.org/index1052.html (accessed 7 July 2017). Hanna, S. (2016), Bourdieu in Translation Studies: The Socio-Cultural Dynamics of Shakespeare Translation in Egypt, New York: Routledge. Hébert-Malloch, L. (2004), ‘What Do We Know about a Translator’s Day?’, Meta, 49 (4): 973–79. Hokkanen, S. (2016), ‘To Serve and to Experience: An Autoethnographic Study of Simultaneous Church Interpreting’, PhD diss., University of Tampere, Tampere. Holz-Mänttäri, J. (1984), Translatorisches Handeln. Theorie und Methode, Helsinki: Annales Academiae Scientiarum Fennicae. Hubscher-Davidson, S. (2016), ‘Trait Emotional Intelligence and Translation. A Study of Professional Translators’, Target, 28 (1): 132–57. Hurd, E. (2010), ‘Confessions of Belonging: My Emotional Journey as a Medical Translator’, Qualitative Inquiry, 16 (10): 783–91. Hutchins, E. (2010), ‘Cognitive Ecology’, Topics in Cognitive Science, 2 (4): 705–15. Karamanis, N., S. Luz and G.
Doherty (2011), ‘Translation Practice in the Workplace. Contextual Analysis and Implications for Machine Translation’, Machine Translation, 25 (1): 35–52. Kinnunen, T. (2010), ‘Agency, Activity and Court Interpreting’, in T. Kinnunen and K. Koskinen (eds), Translators’ Agency, 126–64 (Tampere Studies in Languages, Translation and Culture B4), Tampere: Tampere University Press. Kinnunen, T. (2013), ‘Translatorisches Handeln und die interprofessionale Kooperation im Kontext des Gerichtsdolmetschens in Finnland’, trans-kom, 6 (1): 70–91. Available online: http://www.trans-kom.eu/bd06nr01/trans-kom_06_01_04_ Kinnunen_Gericht.20130701.pdf (accessed 11 July 2017). Knoblauch, H. (2000), ‘Workplace Studies und Video: zur Entwicklung der visuellen Ethnographie von Technologie und Arbeit’, in I. Gotz and A. Wittel (eds), Arbeitskulturen im Umbruch: zur Ethnographie von Arbeit und Organisation, 159–74 (Münchner Beiträge zur Volkskunde 26), Munich: Waxmann.


Knoblauch, H. and C. Heath (2006), ‘Die Workplace Studies’, in W. Rammert and C. Schubert (eds), Technografie. Zur Mikrosoziologie der Technik, 141–61, Frankfurt am Main: Campus. Kolb, W. (2017), ‘“It Was on my Mind all Day”: Literary Translators Working from Home – some Implications of Workplace Dynamics’, Translation Spaces, 6 (1): 27–43. Krüger, R. (2016), ‘Contextualising Computer-Assisted Translation Tools and Modelling Their Usability’, trans-kom, 9 (1): 114–48. Available online: http:// www.transkom.eu/bd09nr01/trans-kom_09_01_08_Krueger_CAT.20160705.pdf (accessed 3 January 2017). Kuckartz, U. (2014), Mixed Methods. Methodologie, Forschungsdesigns und Analyseverfahren, Wiesbaden: VS Verlag für Sozialwissenschaften. Kurz, I. (1983), ‘CO2 and CO Levels in Booths at the End of a Conference Day. A Pilot Study’, AIIC Bulletin, 11 (3): 86–93. Kurz, I. (1997), ‘Interpreters: Stress and Situation-Dependent Control of Anxiety’, in K. Klaudy and J. Kohn (eds), Transferre necesse est, 201–6, Budapest: Scholastica. Kurz, I. (2002), ‘Physiological Stress Responses during Media and Conference Interpreting’, in G. Garzone and M. Viezzi (eds), Interpreting in the 21st Century. Challenges and Opportunities, 195–202, Amsterdam: John Benjamins. Kurz, I. (2003), ‘Physiological Stress during Simultaneous Interpreting: A Comparison of Experts and Novices’, The Interpreters’ Newsletter, 12: 51–67. Kurz, I. and H. Kolmer (1984), ‘Humidity and Temperature Measurements in Booths’, AIIC Bulletin, 12 (2): 42–3. Kuusi, P., A. Rautiainen and M. Ruokonen (2016), ‘Vertaistuesta virtaa opetukseen. Kääntämisen ja tulkkauksen opetusmenetelmävaranto monimuotoyhteisönä’ [Powering Teaching through Peer Mentoring. Database for Teaching Methods in Translation and Interpreting as a Blended Community], in R. Hartama-Heinonen, M. Kivilehto and M. Ruokonen (eds), MikaEL – Electronic Journal of the KäTu Symposium on Translation and Interpreting Studies, 9: 33–52. Kuznik, A. 
(2016), ‘La traduction comme travail: perspectives croisées en ergonomie, sociologie et traductologie’, ILCEA, 27. Available online: http://ilcea.revues.org/4036 (accessed 7 March 2017). Kuznik, A. and J. M. Verd (2010), ‘Investigating Real Work Situations in Translation Agencies. Work Content and Its Components’, Hermes, 44: 25–43. Latour, B. (1987), Science in Action: How to Follow Scientists and Engineers through Society, Cambridge: Harvard University Press. Latour, B. (2005), Reassembling the Social: An Introduction to Actor Network Theory, Oxford: Oxford University Press. Lavault-Olléon, E. (2011a), ‘L’ergonomie, nouveau paradigme pour la traductologie’, ILCEA, 14. Available online: http://ilcea.revues.org/1078 (accessed 7 March 2017). Lavault-Olléon, E. (ed.) (2011b), ‘Traduction et Ergonomie’, ILCEA, 14, Special Issue. Available online: https://ilcea.revues.org/1031 (accessed 7 July 2017).


Lavault-Olléon, E. (ed.) (2016), ‘Approches ergonomiques des pratiques professionnelles et des formations des traducteurs’, ILCEA, 27, Special Issue. Available online: https://ilcea.revues.org/3834 (accessed 12 July 2017). LeBlanc, M. (2013), ‘Translators on Translation Memory (TM). Results of an Ethnographic Study in Three Translation Services and Agencies’, Translation and Interpreting Studies, 5 (2): 1–13. LeBlanc, M. (2017), ‘“I Can’t Get No Satisfaction’”: Should We Blame Translation Technologies or Shifting Business Practices?’, in D. Kenny (ed), Human Issues in Translation Technologies, 45–62, London: Routledge. Liu, F.-M. C. (2013), ‘A Quantitative Enquiry into the Translator’s Job-related Happiness: Does Visibility Correlate with Happiness?’, Across Languages and Cultures, 14 (1): 123–48. Martikainen, H. and N. Kübler (2016), ‘Ergonomie cognitive de la post-édition de traduction automatique: enjeux pour la qualité des traductions’, ILCEA, 27. Available online: http://ilcea.revues.org/3863 (accessed 7 March 2017). Massey, G. and M. Ehrensberger-Dow (2011), ‘Technical and Instrumental Competence in the Translator’s Workplace: Using Process Research to Identify Educational and Ergonomic Needs’, ILCEA, 14. Available online: http://ilcea.revues.org/1033 (accessed 12 July 2017). Massmann, C. (1995), ‘Arbeitsbedingungen von GebärdensprachdolmetscherInnen und mögliche Folgen’, Das Zeichen, 9 (33): 335–44. McCartney, J. (2006), ‘Burnout of Sign Language Interpreters: A Comparative Study of K-12, Postsecondary, and Community Interpreters’, RID Journal of Interpretation, 2006: 83–108. McDonough, J. (2007), ‘How Do Language Professionals Organize Themselves? An Overview of Translation Networks’, Meta, 52 (4): 793–815. Meidert, U., S. Neumann, M. Ehrensberger-Dow and H. Becker (2016), ‘Physical Ergonomics at Translators’ Workplaces: Findings from Ergonomic Workplace Assessments and Interviews’, ILCEA, 27. Available online: https://ilcea.revues. 
org/3996 (accessed 7 July 2017). Menary, R., ed. (2010), The Extended Mind. Cambridge, MA: The MIT Press. Mihalache, I. (2008), ‘Community Experience and Expertise: Translators, Technologies and Electronic Networks of Practice’, Translation Studies, 1 (1): 55–72. Monzó Nebot, E. (2006), ‘Somos profesionales? Bases para una sociología de las profesiones aplicada a la traducción’, in A. Parada and O. Díaz Fouces (eds), Sociology of Translation, 155–76, Vigo: Servizo de Publicacións da Universidade de Vigo. Norström, E., I. Fioretos and K. Gustafsson (2012), ‘Working Conditions of Community Interpreters in Sweden: Opportunities and Shortcomings’, Interpreting, 14 (2): 242–60. O’Brien, S. (2012), ‘Translation as Human-Computer Interaction’, Translation Spaces, 1: 101–22.


O’Brien, S., L. Winther Balling, M. Carl, M. Simard, and L. Specia, eds (2014), Post-editing of Machine Translation. Processes and Applications, Newcastle upon Tyne: Cambridge Scholars Publishing. Olohan, M. (2011), ‘Translators and Translation Technology: The Dance of Agency’, Translation Studies, 4 (3): 342–57. Olohan, M. (2017), ‘Knowing in Translation Practice. A Practice-theoretical Perspective’, Translation Spaces, 6 (1): 160–81. Olohan, M. and E. Davitti (2017), ‘Dynamics of Trusting in Translation Project Management: Leaps of Faith and Balancing Acts’, Journal of Contemporary Ethnography, 46 (4): 391–416. Peters-Geiben, L. (2016), ‘La prévention comportementale et contextuelle: intégrer une approche ergonomique dans la formation des traducteurs’, ILCEA, 27. Available online: http://ilcea.revues.org/4026 (accessed 12 March 2017). Pineau, M. (2011), ‘La main et le clavier: histoire d’un malentendu’, ILCEA, 14. Available online: http://ilcea.revues.org/1067 (accessed 10 July 2017). Pöchhacker, F. (2009), ‘Conference Interpreting. Surveying the Profession’, Translation and Interpreting Studies, 4 (2): 172–86. Pym, A. (2011), ‘What Technology Does to Translating’, Translation & Interpreting, 3 (1): 1–9. Pym, A., F. Grin, C. Sfreddo and A. L. J. Chan (2014), The Status of the Translation Profession in the European Union, London: Anthem Press. Rinsche, A. and N. Portera-Zanotti (2009), ‘The Size of the Language Industry in the EU. Study Report to the Directorate General for Translation of the European Union’. Available online: http://www.termcoord.eu/wp-content/uploads/2013/08/ Study_on_the_size_of_the_language_industry_in_the_EU.pdf (accessed 10 July 2017). Risku, H. (2014), ‘Translation Process Research as Interaction Research: From Mental to Socio-Cognitive Processes’, MonTI Special Issue – Minding Translation: 331–53. Risku, H. (2016), Translationsmanagement. Interkulturelle Fachkommunikation im Informationszeitalter, 3rd edn, Tübingen: Narr. Risku, H. 
(2017), ‘Ethnographies of Translation and Situated Cognition’, in J. W. Schwieter and A. Ferreira (eds), The Handbook of Translation and Cognition, 290–310, Oxford: Wiley-Blackwell. Risku, H. and A. Dickinson (2009), ‘Translators as Networkers: The Role of Virtual Communities’, Hermes, 42: 49–70. Risku, H., J. Milošević and R. Rogl (2017a), ‘Creativity in the Translation Workplace’, in L. Cercel, M. Agnetta and M. T. Amido Lozano (eds), Kreativität und Hermeneutik in der Translation, 455–69, Tübingen: Narr. Risku, H., R. Rogl and J. Milošević (2017b), ‘Translation Practice in the Field: Current Research on Socio-cognitive Processes’, Translation Spaces, 6 (1): 3–26.


Risku, H., R. Rogl and C. Pein-Weber (2016), ‘Mutual Dependencies: Centrality in Translation Networks’, Journal of Specialised Translation, 25: 232–53. Available online: http://www.jostrans.org/issue25/art_risku.php (accessed 12 July 2017). Risku, H., N. Rossmanith, A. Reichelt and L. Zenk (2013), ‘Translation in the Network Economy. A Follow-Up Study’, in C. Way, S. Vandepitte, R. Meylaerts and M. Bartłomiejczyk (eds), Tracks and Treks in Translation Studies, 29–48, Amsterdam: John Benjamins. Rodríguez-Castro, M. (2013), ‘The Project Manager and Virtual Translation Teams: Critical Factors’, Translation Spaces, 2: 37–62. Rodríguez-Castro, M. (2015), ‘Conceptual Construct and Empirical Validation of a Multifaceted Instrument for Translator Satisfaction’, Translation & Interpreting, 7 (2): 30–50. Roziner, I. and M. Shlesinger (2010), ‘Much Ado about Something Remote. Stress and Performance in Remote Interpreting’, Interpreting, 12 (2): 214–47. Schwenke, T. J., J. S. Ashby and P. B. Gnilka (2014), ‘Sign Language Interpreters and Burnout: The Effects of Perfectionism, Perceived Stress, and Coping Resources’, Interpreting, 16 (2): 209–59. Snell-Hornby, M. (1988), Translation Studies: An Integrated Approach. Amsterdam: John Benjamins. Steve, M. and L. Siple (1980), ‘The Interpreter as the Stage Manager: Lighting’, in F. Caccamise, R. Dirst, R. D. Devries, J. Heil, C. J. Kirchner, S. Kirchner, A. M. Rinaldi and J. Stangarone (eds), Introduction to Interpreting. For Interpreters/ Transliterators, Hearing Impaired Consumers, Hearing Consumers, 125–7, Silver Spring: RID. Stoeller, W. (2011), ‘Global Virtual Teams’, in K. J. Dunne and E. S. Dunne (eds), Translation and Localization Project Management. The Art of the Possible, 289–317, Philadelphia: John Benjamins. Teixeira, C. and S. O’Brien (2017), ‘Investigating the Cognitive Ergonomic Aspects of Translation Tools in a Workplace Setting’, Translation Spaces, 6 (1): 79–103. Toudic, D. and G. 
de Brébisson (2011), ‘Poste du travail du traducteur et responsabilité: une question de perspective’, ILCEA, 14. Available online: http://ilcea.revues.org/1043 (accessed 12 July 2017). Tyulenev, S. (2015), ‘Towards Theorizing Translation as an Occupation’, Asia Pacific Translation and Intercultural Studies, 2 (1): 15–29. Valero Garcés, C. (2015), ‘The Impact of Emotional and Psychological Factors on Public Service Interpreters: Preliminary Studies’, Translation & Interpreting, 7 (3): 90–102. Vermeer, H. J. (1989), ‘Skopos and Commission in Translational Action’, trans. A. Chesterman, in A. Chesterman (ed.), Readings in Translation Theory, 173–87, Loimaa: Finn Lectura.


Vorderobermeier, G. M. (2013), Translatorische Praktiken aus soziologischer Sicht. Kontextabhängigkeit des übersetzerischen Habitus?, Opladen: Budrich UniPress. Vorderobermeier, G. M., ed. (2014), Remapping Habitus in Translation Studies, Amsterdam: Rodopi. Williams, S. (1995), ‘Observations on Anomalous Stress in Interpreting’, The Translator, 1 (1): 47–64.

4

Translators’ roles and responsibilities

Christina Schäffner

1. Introduction

Translation plays a significant role in the exchange of knowledge, goods and ideas around the world. To cope with the increasing demand, a whole language industry has developed, characterized by constant growth and complexity. The Global Market Surveys published by the market research company Common Sense Advisory (CSA) have reported annual growth rates of between 5 and 10 per cent, illustrating the industry’s viability. The 2017 report on the Language Services Market noted that the sector had experienced a tumultuous year, partly linked to the fundamental changes in the global political landscape in 2016 (DePalma et al. 2017). The market was nevertheless expected to reach revenues of US$ 43.08 billion in 2017, a rise of 6.97 per cent over the previous year (DePalma 2017). The latest figures report revenues of US$ 46.52 billion in 2018, with growth of the language industry as a whole of 7.99 per cent in 2017 (DePalma, Pielmeier and Stewart 2018). The language industry is very complex and encompasses far more services than translation alone. The CSA market reports list translation together with services such as internationalization, interpreting, localization, transcreation, translation technologies and web globalization. The 2014 report identified transcreation, web globalization, internationalization and telephone interpreting as the services which had seen the highest growth rates in the previous four years (DePalma et al. 2014). A survey conducted for the UK in 2015 identified software, e-commerce and e-learning as the customer groups with the fastest-growing demand. The 2016 Translation Technology Landscape Report by the independent translation industry organization Translation Automation User Society (TAUS) included datafication of translation, neural machine translation and deep learning, and speech-to-speech translation among the major trends (Massardo, van der Meer and Khalilov 2016).
These trends also indicate that the language industry is changing rapidly.
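The market figures cited above are mutually consistent, which can be verified with a few lines of arithmetic. In the sketch below, only the revenue and growth values are those reported by DePalma et al.; the 2016 baseline is derived here for illustration, not taken from the reports:

```python
# Cross-checking the CSA market figures quoted in the text.
# Reported values: US$ 43.08 bn revenue in 2017 (up 6.97%),
# US$ 46.52 bn in 2018. The 2016 baseline below is a derived estimate.

revenue_2017 = 43.08   # US$ billion (DePalma 2017)
growth_2017 = 0.0697   # 6.97 per cent rise over the previous year

# Implied 2016 revenue, working backwards from the 2017 figure
revenue_2016 = revenue_2017 / (1 + growth_2017)
print(f"implied 2016 revenue: US$ {revenue_2016:.2f} bn")  # ≈ 40.27 bn

revenue_2018 = 46.52   # US$ billion (DePalma, Pielmeier and Stewart 2018)
implied_growth = revenue_2018 / revenue_2017 - 1
print(f"implied growth rate: {implied_growth:.2%}")  # ≈ 7.99%
```

The implied growth rate recovered from the two revenue figures matches the 7.99 per cent reported in the 2018 survey, so the quoted numbers hang together.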


Translators and interpreters often work as self-employed language professionals (freelancers), either for translation companies or for their own clients. Their total number is difficult to determine, though, particularly since the profession is not regulated. Membership in professional associations can give some indication of the number of translators. For example, the American Translators Association (ATA 2017) reports having more than 10,000 members in 90 countries and the UK’s Institute of Translation and Interpreting (ITI 2017) over 3,000 members. The German association (Bundesverband der Dolmetscher und Übersetzer e.V.; BDÜ 2017) has more than 7,500 members, but also mentions that a census conducted by the German Federal Statistical Office in 2013 reported a total of 41,000 translators and interpreters in Germany, 22,000 of whom were working freelance. Moreover, members of professional associations work not only as translators and interpreters but also as teachers, project managers or software developers. Membership in an association is a sign of professional language expertise since translators and interpreters are only admitted if they fulfil the association’s admission criteria. It was a major achievement for the profession that the United Nations adopted Resolution A/RES/71/288 in May 2017, which recognizes the role of professional translation in connecting nations and fostering peace, understanding and development (UN 2017). This chapter focuses on roles and responsibilities in translation, which itself is complex in terms of the settings in which translators work, their status, the material (e.g. text files and digital material), subject domains and tools they work with, and the interaction with other agents (for roles and responsibilities of interpreters, see Albl-Mikasa, this volume).
The diversity of the activities has even led to new labels being used to characterize translation in specific contexts, such as localization for websites, adaptation for film or theatre, transcreation for advertising and transediting for news. This chapter illustrates how roles and responsibilities of professional translators have been addressed in academic research and by the profession itself. Professional translation is understood here as a paid occupation which requires a formal qualification. The chapter concludes with some reflection on how the industry and academia can work together in order to make further progress in this area.

2. Research focal points

Professionals and academics often share interests and opinions, but they also focus on different aspects and/or approach the roles and responsibilities of translators from different perspectives. In this section, the perspective of the profession is illustrated first, followed by that of the discipline of translation studies.

2.1. Reflections on roles and responsibilities within the profession

From the professional perspective, aspects of translation have frequently been addressed in books, articles or reports aimed at professionals in the language industry themselves, including future professionals. Some material, such as leaflets produced by professional associations, explains what translation is and/or what a translator does (e.g. ITI 2014) and is intended for clients, buyers of translation and the general public to raise awareness of the job. Translation has multiple faces. This is reflected in labels which specify the subject domain (e.g. specialized translation, which can be specified even further, such as scientific, technical, economic, financial and legal), the mode (e.g. audiovisual translation) or the purpose (e.g. gist translation). Professional associations acknowledge these special domains and also mention that for some of them, the tasks require additional skills. For example, subtitling is a type of audiovisual translation in which translators not only transfer the spoken word into written text but also work with specific software to place the text directly onto the screen. Many subtitlers are also increasingly likely to perform more complex tasks such as encoding audiovisual content, converting it into different formats, and creating files with both audiovisual content and subtitles (see Díaz-Cintas, this volume). The roles and responsibilities of translators have much in common, independent of their specializations. These are also described in professional codes of conduct, albeit in a rather generalized way. Such codes lay down principles and guidelines for exercising the profession and for behaviour towards clients, colleagues and other professionals. For example, the ITI Code (2016: 5) lists (a) honesty and integrity, (b) professional competence, (c) client confidentiality and trust, and (d) relationships with other members as four key principles of practice.
The first of these principles, honesty and integrity, includes responsibilities such as being honest, fair, impartial and truthful, and not advertising expertise or resources beyond those that can be provided. In addition to the key principles, there is a list of eight professional values. The first one requires members to ‘convey the meaning between people and cultures faithfully, accurately and impartially’ (ITI 2016: 4). This aspect of how to work with the source text is also addressed in the principle of professional competence which says: ‘3.2 Subject to Principle 2 Clause 8 below, members shall at all times maintain the highest standards of work according to their abilities, ensuring fidelity of meaning and register, unless specifically instructed by their clients, preferably in writing, to recreate the text in the cultural context of the target language’ (ITI 2016: 7). The aforementioned Clause 8, headed ‘Exceptions’, specifies provisions for carrying out work that contravenes this key principle, above all ensuring the client’s awareness of the risks involved (ITI 2016: 10). References to ‘faithfulness’ and ‘fidelity’ in professional codes reveal a rather narrow understanding of translation as meaning transfer, which was dominant in the theories of translation in the 1960s and 1970s (see 2.2). Apart from this, however, these codes give a good overview of the broader range of professional responsibilities that go beyond the specific activity of translating. In this respect, ethical matters are addressed as well, such as the need to act fairly and ethically in business relationships with suppliers, subcontractors or customers, not accepting work believed to further illegal or criminal activities, and treating information from and about clients confidentially. Similar provisions can be seen in the Codes of Conduct of the UK’s Chartered Institute of Linguists (CIoL) and the German BDÜ. It needs to be acknowledged, however, that some professional associations do not see faithfulness in a narrow sense and endorse the importance of intercultural mediation. Katan (2016: 370) refers to a study by Moscara (2013), who analysed the codes of forty professional organizations worldwide and found that they closely follow either the guidelines issued by the International Federation of Translators (FIT) or those of the ATA. Whereas the ATA code (2010) requires faithfulness in conveying the message as intended by the author, the understanding of a faithful translation in the FIT Charter (1994) includes adaptation to account for cultural differences.
Such discrepancies in the wordings of codes also highlight the need for their regular revision to keep up with the professional reality (a review of the FIT Translator’s Charter is suggested by Katan and Liu 2017). Roles and responsibilities of translators may differ somewhat if we compare those who are employed (e.g. working in translation departments of large companies, governments or international organizations) with those working on their own account (i.e. freelancers). Differences can be related to access to tools and experts, workflow processes and income. Translators working for large institutions or companies can benefit from copious reference material (e.g. huge databases of previously translated documents, glossaries and terminology departments) and easier access to other experts (e.g. software developers). Such institutions and companies also maintain translation memory (TM) and terminology management systems. The workflow process is thus characterized by teamwork and regulated quality assurance mechanisms.


There are also differences in respect of tasks related to the business side of the profession. Freelance translators in particular also have to market themselves, find new clients, manage budgets, complete tax return forms, have professional indemnity insurance, upgrade translation tools (hardware and software) and so on. In short, they need business skills. An online survey conducted in 2015/16 investigated to what extent marketing is part of the job of a freelance translator (Marketing survey 2017). Of the 117 respondents, 78 (66.67 per cent) confirmed that marketing is an ongoing activity for them, but 80 respondents (68.38 per cent) agreed with the statement that marketing should be a background activity for a freelance translator. Similarly, translation agencies wishing to expand their translator database expect translators to have their own website and participate actively in translation networks, as revealed in a study by Risku et al. (2013). For employed translators, tasks such as marketing and budgeting are covered by their employer. Professional associations often provide support for their members in respect of business and marketing, and professionals themselves also share information. For example, Durban (2010) offers extensive advice to freelance translators on how to prosper in the business. Based on her own experience as a successful freelance translator, she addresses topics such as client relations, pricing, marketing, specializing and ethics. Bulletins issued by professional associations include articles written by translators for other translators, sharing experience and concerns in respect of translation-specific issues (e.g. terminology, CAT tools), continuous professional development, the translators’ (professional and ethical) responsibilities towards their clients, professional peers and other agents in the translation process (e.g. project managers, revisers, proofreaders).
For example, the ITI Bulletin of May–June 2016 included articles offering advice to translators on how to get the most out of working with agencies, on using automatic speech recognition software, and on implications of the new international standard ISO 17100 (2015) for freelancers. Communities and networks of translators (e.g. ProZ1) are also platforms for discussing aspects of roles and responsibilities. Among topics addressed on the ProZ discussion forums in 2017 were the following: quoting for editing; applying TM discounts to revision work; dealing with unjustified criticism of translations done by other translators; working on projects where the project manager uses a different CAT tool from the translator; merging, translating and splitting Translation Workspace files; making oneself noticed; implications of moving to another country for business; and dealing with unresponsive clients (ProZ Forum 2017). Such topics reflect both responsibilities in the narrower sense of translation as text production and in the wider sense of translation as an increasingly technologized profession. Professional roles and responsibilities require skills and competences. Both the 2016 ITI Code and the 2017 CIoL Code list professional competence, which in the ITI Code subsumes linguistic and subject competence. Competence is also a topic which has been addressed widely in the discipline of translation studies, as discussed in the next section.

2.2.  Reflections on roles and responsibilities within translation studies

2.2.1.  Research on translation competence

The translation industry and the academic discipline of translation studies share an interest in raising the status of translation and of translators. In order to ensure the future of the profession, it is essential to prepare students for their professional roles and responsibilities in their prospective careers (e.g. Gouadec 2007; Kadrić and Kaindl 2016). An important issue in this respect is the notion of translation competence (see also Hurtado Albir 2010) and its development, for which various models have been devised and tested in training contexts (e.g. PACTE 2009; Göpferich 2009; Kiraly 2000, 2013). Translation competence refers to the knowledge and skills translators need to have in order to function in a professional manner. It has often been described as complex and consisting of sub-competences. Although the number of such sub-competences and their labels differ, the models have much in common. Göpferich (2009), for example, lists communicative competence in at least two languages, domain competence, tools and research competence, psycho-motor competence, strategic competence and translation routine activation competence. The multicomponential translation competence model of the PACTE group (Process of Acquisition of Translation Competence and Evaluation), which has been slightly modified in the course of time (e.g. PACTE 2003, 2011), includes bilingual sub-competence, extra-linguistic sub-competence, knowledge about translation sub-competence, instrumental sub-competence, strategic sub-competence and psychophysiological components.
Kiraly (2006) prefers the label translator competence and, based on a social constructivist approach to translation pedagogy, presents a professional translator’s super-competence comprising three bundles of sub-competences which are closely interrelated: social competences, personal competences and translation competence per se. The social competences include etiquette, negotiation and teamwork, and the

Translators’ Roles and Responsibilities

69

personal competences include autonomy, preparedness for lifelong learning, quality control and professional responsibility. The translation competence covers competence in languages, cultures, text-typology, norms and conventions, terminology, world knowledge, strategies, technology and research. In his more recent research, Kiraly (2013) conceptualizes translator competence as an emerging phenomenon, building on complexity theory and cognitive science. This emergent process, graphically illustrated with swirling, interactive vortices, is co-determined by the tasks translators are engaged in, their personal and interpersonal disposition for translating, the resources available and so on. Competence is thus not acquired, with individual components potentially built up at different stages, but it ‘creates itself through the translator’s embodied involvement (habitus) in actual translation experiences’ (Kiraly 2013: 203). Despite such developments in research, the more static componential models of translator competence are still prominent. A recent example is the second translator competence profile developed for the European Master’s in Translation (EMT) project, designed as a reference framework for learning outcomes for postgraduate training programmes. Competence here is understood in accordance with the European Qualifications Framework as the ‘proven ability to use knowledge, skills and personal, social and/or methodological abilities, in work or study situations and in professional and personal development’ (EMT Competence Framework 2017: 3). The initial EMT wheel of competences of 2009 has now been replaced by a new framework which has taken into account changes in the language industry. 
The new 2017 framework defines five complementary areas of competence: language and culture (transcultural and sociolinguistic awareness and communicative skills), translation (strategic, methodological and thematic competence), technology (tools and applications), personal and interpersonal competence, and service provision.2 Each of these five areas is specified further into more detailed aspects of procedural knowledge and skills. Such competence models reflect a wider understanding of the roles and responsibilities of translators. In particular, the inclusion of personal and interpersonal competences and of service provision indicates the significance of a translator’s skills in managing communication with the various agents in the process and in operating in a commercial environment. For example, the EMT’s service provision competence includes aspects such as monitoring new societal and language industry demands; new market requirements and emerging job profiles; negotiating with the client; organizing, budgeting and managing translation projects; understanding and implementing the standards applicable
to the provision of a language service; and complying with professional ethical codes and standards. Interpersonal, organizational and entrepreneurial skills are considered as important as the skills required to produce a translation, which include skills such as analysing a source document; implementing the instructions, style guides or conventions relevant to a particular translation; analysing and justifying translation solutions and choices; pre-editing source material for the purpose of potentially improving MT output quality; applying post-editing to MT output using the appropriate post-editing levels and techniques according to the quality and productivity objectives; and recognizing the importance of data ownership and data security issues (EMT Competence Framework 2017: 8). The points about pre- and post-editing in particular reflect the changes in the working modalities of translators brought about by the digital revolution. There is nowadays widespread agreement that translator training programmes should be professionally oriented and guided by the needs of the translation industry. Such competence models have therefore also been used to inform the curriculum and syllabus development of translator training programmes, as well as teaching methods (see Hubscher-Davidson and Borodo 2012; and C. Way, this volume). There is rich documentation by the PACTE group illustrating how specific competence-oriented teaching and learning methods, including experiments, contributed to competence development (e.g. Hurtado Albir 2017). Task-based activities and authentic projects which require teamwork, strategic decision-making and critical reflection have been identified as valuable tools (e.g. Göpferich 2012; and Kiraly 2013 on case studies to investigate the emergence of translator competence in an actual classroom setting).

2.2.2.  Changing perceptions of translation and translators in translation theories

Research into the roles and responsibilities of translators is also related to the definition of translation, which has changed over time (see also Jääskeläinen 2007). The more traditional linguistics-based theories defined translation as meaning transfer from a source text to a target text. The translators’ responsibility was seen as the faithful reproduction of the meaning of the source text, thus reducing their role to that of an invisible transcoder of meanings. Knowledge of the languages and cultures, of genre conventions, and competent use of research tools (in the 1960s and 1970s mainly limited to dictionaries and similar reference material) were seen as essential. Other points addressed in (text-)linguistic approaches were translators’ responsibilities in
respect of their area(s) of specialization and quality assurance. Responsibilities such as negotiating with clients or project managers, marketing and accounting, which go beyond translation in the narrower sense of text (re-)production, were not yet topics of research. With the development of functionalist approaches to translation in the late 1970s and early 1980s, the role of translators was viewed from a wider perspective. Translation was defined as a purposeful activity (e.g. Vermeer 1987), and translators were seen as professional experts in their own right, as experts in text design for transcultural interaction. In particular, Holz-Mänttäri’s (1984) theory of translatorial action stressed the responsibility of the translator for interacting with the commissioner, the client and other experts (including specification of the skopos, negotiating deadlines, access to material, etc.). Functionalists reject the equivalence-based linguistic theories’ insistence on faithfulness to the meaning of the source text and emphasize the translator’s responsibility to produce a target text appropriate to its specified purpose or skopos. The notion of faithfulness is replaced by the concept of loyalty, denoting an interpersonal relationship (Nord 1997). The translator is thus expected to be loyal to the client or commissioner, to the ultimate addressees of the target text and to the source text author. Research in this area has mainly been concerned with didactic implications, focusing on how a functionalist approach to a translation task can empower translators in their role as professional experts. Norms- and system-based theories focused more on the sociocultural contexts in which translations are produced and received and in which translators operate. Although they did not address professional aspects in the same way as functionalists did, they highlighted that translation is socially contextualized behaviour determined by sociocultural constraints.
Toury (1995) introduced norms as one type of such constraints, with other research also investigating the position of translated literature in a cultural system and the influence of regulatory bodies on the status of translations and translators (e.g. Lefevere 1992). Toury termed this translatorship, which he characterized as follows: ‘“translatorship” amounts first and foremost to being able to play a social role, i.e., to fulfil a function allotted by a community – to the activity, its practitioners and/or their products – in a way which is deemed appropriate in its own terms of reference’ (Toury 1995: 53). Despite a large body of empirical studies of actual translations, however, norm-based theories and descriptive translation studies (DTS) did not sufficiently investigate what a translator actually does. It has been argued that the view of translators’ behaviour being determined by sociocultural constraints
leads to an understanding of translators as being subservient and does not allow for an analysis of their potentially active role (e.g. Simeoni 1998). With the ‘cultural turn’ (Bassnett and Lefevere 1990) in the 1990s, the role of translators as active agents in the construction of cultures and as agents of social change moved towards the centre of translation studies. More recent approaches to translation build on cultural studies and/or on sociology and define translation as a cultural–political practice (e.g. Venuti 1995) or as a socially regulated activity (e.g. Wolf and Fukari 2007), respectively. Approaches inspired by cultural studies (e.g. postcolonial, feminist theories) highlight the powerful role of translators in creating knowledge and shaping cultures. The understanding of the translator’s empowerment goes beyond the professional competence addressed by functionalists. Instead of expecting translators to produce a text that is appropriate for the client’s purpose, these postmodern approaches see translators as visible and engaged interventionists. This view includes an expectation that they make this intervention visible both in the text, for instance, by opting for a foreignization strategy (Venuti 1995) or by subverting male-dominated language (e.g. von Flotow 1997), and in the paratext, for instance, by adding translator’s notes. Research has investigated the social contexts in which translation takes place, identifying the power relations between languages, cultures and the agents involved (e.g. Gentzler and Tymoczko 2002). What has become clear is that the more traditional view of translators as neutral mediators does not do justice to their reality and that both the enforced submission of translators to the power of their commissioners and their own individual commitment to a particular cause (e.g. feminism) can influence the way translation is performed.
In this respect, and also in view of research into activist translation and translation in conflict situations (e.g. Baker 2006; Boéri and Maier 2010; Inghilleri and Harding 2010), the question of ethics has come to the forefront again. Since translatorial decisions (e.g. in the choice of words, the choice of text, or in the acceptance or rejection of a commission) can have wider cultural and ideological implications, the questions arise as to where the limits of the translator’s responsibilities are and whether commitment to a political cause is part of a translator’s responsibilities. Chesterman (2001: 147) sees an ethics of commitment, despite its moral value, as outside the professional realm and argues that ‘[p]rofessional ethics […] govern a translator’s activities qua translator, not qua political activist or life-saver’. Yoo and Jeong (2017), however, argue that voluntary services outside of and within the profession (citizenship behaviours) positively affect the translators’ professional identity.
Another domain of professional practice which poses ethical challenges is community translation, a domain which has only recently been addressed in the discipline of translation studies (see Taibi and Ozolins 2016; and Valero-Garcés 2018 on the use of technology in public services communication). Taibi and Ozolins (2016: 8) characterize community translation as ‘a service offered at a national or local level to ensure that members of multilingual societies have access to information and active participation’. At a time of increasing migration, it is also becoming more and more relevant in countries which are not officially multilingual. Refugees, asylum seekers and minority language speakers share the same public spaces and need to be given access to information in order to be able to participate actively in society. The social function of community translation is thus very prominent. In translating genres such as medical informed consent forms or asylum-seeker statements, the different linguistic and cultural needs of the various communities need to be taken into account. In addition, there is often a social distance between the powerful authorities (e.g. medical doctors, local government officials and police) and the members of the minority communities. This also means that the public authorities may see translators as faithful reproducers of texts and/or as gatekeepers, whereas the members of the individual communities may expect them to act as advocates on their behalf. Such discussions on role perceptions and role boundaries have already been prominent in community interpreting (also labelled ‘public-service interpreting’), with interpreters supposed to act as cultural mediators, brokers or advocates, in contrast to the traditional view of the neutral conduit (see, for example, Mikkelson 2013; and Albl-Mikasa, this volume).

2.2.3.  Researching translators in their workplaces

There is now widespread agreement that translation is a social activity. Social contexts condition the production and reception of translations, and social agents are responsible for their creation, distribution and reception. The social contexts in which translators operate are also subject to the intervention of other agents (e.g. authors, revisers or publishing houses). In researching the roles and the power of translators in their actual contexts, scholars have more recently drawn on sociological theories (especially Bourdieu’s 1977 sociology of culture) to investigate the main factors which condition the translational field, to analyse translation practices in specific contexts and their underlying assumptions, norms and policies, and to explore the capital and habitus of the agents as they impact the translation process.
Recent research into institutional settings has provided valuable insights into roles and responsibilities of translators. Scholars have been interested in finding out how a particular institution influences ‘how translation is conceptualized and practised, how the translator’s role and identity are ascribed and negotiated, and how complex text trajectories and intertextual chains are formed’ (Kang 2014: 469). For example, Koskinen (2000, 2008, 2014) has done extensive work on translators in the European Commission. Her main interest was in identifying how translators perceive their role within the institution, their social and professional identity, their attitudes towards the European Union (EU), and ‘whether these processes and identifications are reflected in the translations themselves’ (Koskinen 2008: 2). She studied the Finnish translators who work at the European Commission in Luxembourg. Based on ethnographic fieldwork, involving participant observations, field notes, focus group discussions, interviews and questionnaires, she argued that the Finnish translation unit is a world of its own with its own rules. Blurred identities of the translators became obvious when they talked about their work and responsibilities. An analysis of some Finnish translations led her to conclude that the translation ‘strategies tend towards institutionally accepted decisions’ (Koskinen 2008: 145). These tendencies reflect shared norms and values which lead to preferred ways of acting. Koskinen (2008: 44) describes her approach to the investigation of the translation practices and translators’ roles and identities as a nexus approach. As Chesterman (2012: 110) argues, nexus models ‘strongly contextualize the translation process, showing the relations and agents that surround and compose it’, thus accepting ambiguities and acknowledging plurality. Translation processes have also been investigated from a cognitive perspective, focusing on the translator as a processor of texts. 
Empirical studies using think-aloud protocols, keystroke logging and/or eye-tracking methods, sometimes combined with retrospective interviews, try to get closer to the decision-making processes of translators as they occur in the act of translating (e.g. Tirkkonen-Condit and Jääskeläinen 2000; Göpferich, Jakobsen and Mees 2008; O’Brien 2011). Results gained so far have revealed differences between experienced translators and novices, such as processing longer segments and higher speed for experts (e.g. Jakobsen 2005), as well as differences in the approaches among professional translators (e.g. Tirkkonen-Condit 2000). Translation and cognition, however, are also embodied and situated activities, since they are human activities embedded in context. As such, the translator’s decision-making is determined by the concrete situation, assignment, motivation, emotion and so forth. Martín de León (2013: 115) highlights the social and distributed nature
of cognition and argues that ‘researching distributed cognition in translation amounts to studying complex real-life translation projects’. Recent research therefore investigates how physical, organizational, environmental and other relevant factors impact on translation practice (e.g. Ehrensberger-Dow and Massey 2017; Risku et al. 2013).

2.2.4.  Researching the status of translators

Other recent research concerns the socio-economic status of translators and perceptions of their roles and responsibilities, both self-perceptions and perceptions held within society (e.g. Pym et al. 2012; Dam and Koskinen 2016; Katan 2011). For more than ten years, Dam and Zethsen have conducted extensive research on the professional role and the occupational status of translators, with a focus on Denmark (e.g. Dam and Zethsen 2009, 2011, 2016; Zethsen and Dam 2010). They investigated the status of translators working in different environments (i.e. in-house company translators, in-house agency translators, freelance translators and staff translators working in the EU institutions). A huge gap was found between the translators’ image of themselves as experts and the way they feel clients and society in general recognize and value their expertise. Most of the translators in all four groups ranked their social and professional visibility as low and considered their jobs to have only low degrees of influence (Dam and Zethsen 2016). Similar research into the status of translators in Finland (e.g. Abdallah 2010) and in Israel (Sela-Sheffy and Shlesinger 2008) has also identified a lack of appreciation, marginalization and a lack of visibility and power in connection with clients (see Sela-Sheffy and Shlesinger 2011 for other countries as well). Katan’s global surveys of the profession (Katan 2009, 2011, 2016) also showed that translators are ‘focused very much on the text’, with ‘little sign of the mediator or activist’ (Katan 2011: 84). He sees this focus on the text as low-risk but constraining and suggests that professionals should adopt the role of a transcreator, ‘which would authorize them to take account of the impact of cultural distance when translating’ (Katan 2016: 378).
To sum up this section: the focus of research in the discipline of translation studies has moved from the initial concern with translators’ responsibilities towards the message of the source text and the source text author, to responsibilities towards clients and addressees, and more recently to translators’ social responsibilities in a globalized world and their own social involvement. In the translation industry, the focus is predominantly on the skills and competences needed to thrive in the business and to rise to the challenges of today’s and
tomorrow’s translation market. In particular, the rapid developments in translation tools and technologies are posing challenges to the profession, to translator training and also to research. The changing and emerging markets may require new skills and new professional profiles. But how can new profiles be specified? And would it be helpful to use separate labels for specific and/or wider tasks and responsibilities? That is, would a label such as ‘language technology consultant’ or ‘linguistic consultant’ signal wider or different responsibilities compared to ‘translator’? As Jakobsen (2019) argues, (post-)editing of MT output is considerably different from human revision and requires different skills. In the EMT framework, post-editing MT output is included as a skill under translation competence. Technologies have surely changed practices, but to what extent do these practices differ and require new labels? For example, do localization or transcreation involve translation plus additional components, or are they distinct practices? Both Jiménez-Crespo (2019), in reflecting on localization, and Gambier and Munday (2014), in discussing transcreation, argue that the boundaries between these practices are blurred (see also Schäffner 2012 on transediting). On the one hand, different labels challenge professional identities; on the other, their conceptualizations can contribute to broadening the notion of translation and expanding the limits of translation studies.

3.  Informing research through the industry

Translation studies research has often been inspired by didactic interests, in particular in respect of translation competence development. Trainers and researchers recognize that they need to know what the industry is like in order to design training programmes which prepare graduates for professional practice (e.g. Olohan 2007; Dunne 2012; Drugan 2013). The surveys on employers’ expectations of graduates of translator training programmes and on translators’ status, mentioned above (Section 2.2.4), have led to insights which have been used to enhance the professionalization element in university programmes, illustrated, among others, by the numerous case studies of good practice produced by the OPTIMALE network (2011). More recently, translation companies or translation departments in institutions have become sites for investigating actual workplace practices, including the roles and responsibilities of translators. Ethnographic fieldwork (e.g. observations and/or interviews with translators) has been a dominant methodology for such research (e.g. Risku 2010; Risku et al. 2013). However, it is not yet a widespread feature that the
translation industry actually commissions research, with a few exceptions such as the projects commissioned by the Directorate-General for Translation (DGT) on the size of the language industry (European Commission 2009), on the status of the translation profession in the EU (Pym et al. 2012) and on translation as a method of language learning (Pym, Malmkjaer and Gutiérrez-Colón 2013). The EMT project mentioned above (Section 2.2.1) can be characterized as an industry initiative as well. It was launched in 2008 by the DGT, a major employer of translators that was motivated by its need to have an adequate supply of highly qualified translators available to meet its requirements and, by extension, the requirements of the wider translation market in the EU. Since its beginning, the EMT has thus been a joint endeavour of the profession (i.e. the DGT), academia and the translation industry, represented by professional associations such as the European Union Association of Translation Companies (EUATC), the Globalization and Localization Association (GALA) and the European Language Industry Association (ELIA). There have also been other types of industry-initiated research, such as the study into potential language barrier effects on the trade patterns of small and medium-sized enterprises (SMEs), commissioned by UK Trade & Investment (UKTI; now the Department for International Trade). Two reports (Foreman-Peck and Wang 2014; Foreman-Peck and Zhou 2014) showed that poor language skills cost the UK £48 billion a year in lost exports, or a 3.5 per cent loss to GDP, and that there is a lack of awareness of the multilingual communication needs of SMEs. These studies, however, were based on estimates derived from formulas and correlations of variables rather than on actual empirical analyses.
The UK’s Association of Translation Companies (ATC) was therefore interested in gaining evidence of how SMEs deal with language and translation needs and, in particular, whether investing in translation had led to an increase in their customer base and their turnover. This led to a joint research project between the ATC and the School of Languages and Social Sciences at Aston University, Birmingham. Students enrolled on the MA translation programme conducted case study investigations of individual companies, interviewing company managers and staff members, and analysing relevant documents (e.g. texts sent for translation to a translation company). They investigated the following questions, among others: Which language needs do companies have and how do they meet them? For which purposes are professional language service providers (LSPs) used? A main finding was that the companies’ translation needs are not incorporated into the strategic planning of the SMEs but are mainly customer-driven. Staff members also have different
opinions on the value of professionally produced translations, as evident in statements such as ‘translation does not have any influence on the corporate performance’ on the one hand and ‘translations bring customers closer to the company’ on the other (Dostal 2015; Haupt 2015). Although this industry-initiated research did not explicitly focus on translators’ roles and responsibilities, it had benefits for all the partners involved: the translation industry (the ATC in this case) can gather case studies of good practice which can be used to show the benefits of working with professionals. The SMEs can reflect on their language needs and the way they handle them and decide on potential changes. Students, as junior researchers, can conduct concrete empirical research of value to both the industry and translation studies. Olohan (2017: 4), too, recommends that more research collaboration should be conducted between university programmes and LSPs and comments that ‘industry sponsorship of MA dissertations, commonplace elsewhere, remains relatively rare in translation’. It is to be expected that industry and university partnerships, such as the EMT project, will result in more industry-initiated research for master’s dissertations. The work placements, which have already become a good example of cooperation between academia and industry, could be used to identify relevant topics.3

4.  Informing the industry through research

Not much of the research conducted within the discipline of translation studies has been motivated by the needs of the language industry, and thus it might not have immediate relevance for the industry. There is, however, a growing body of research which can have an impact on improving the professional practice and the status of translators. Examples include the recent research into the ergonomics of translation, since the findings of workplace studies can serve as a valuable source of information on best practices. Understanding the various factors which impact on professional translators’ roles and responsibilities can also lead to more effective workflows and contribute to process and quality management. Kuznik (2016), for example, analysed the work content of in-house translators in small and medium-sized industrial enterprises as it is embedded in the whole work process. She focused on the impact of the organizational context on the translation activity in a Polish company, using observation as her main method to identify the involvement of the translators in the execution of the wider
work process. Her analysis demonstrated the highly heterogeneous nature of the job content, leading her to argue that the job is composed of several roles: ‘writer + translator + reviewer + interpreter + organizer of commercial events + commercial secretary, with all activities being mixed and inseparable’ (Kuznik 2016: 227). Since the processing of multilingual information can be seen as the common denominator for all of these roles, she suggested that ‘multilingual personal assistant’ or ‘expert in multilingual communication’ would be better names for the real content of the job and would emphasize the strategic value of the role (see also the discussion on labels in Section 2.2.4 above). The studies by Risku and her collaborators (e.g. Risku et al. 2013) have also revealed that the translation sector is becoming increasingly differentiated. Using participatory observation, they investigated the workflow in a translation services company and the interaction between the agents involved. They argue that the increasingly professionalized and specialized work in the company resulted in different role profiles, a diversification of jobs and a change in the competences required. For example, the project managers’ ‘need for specific language and cultural expertise has increasingly decreased’ (Risku et al. 2013: 38). There is not yet enough evidence that research has impacted the translation industry in a direct way. Research topics in translation studies are not yet sufficiently inspired by market needs, and graduates of doctoral programmes usually do not enter private enterprises or government institutions. The EU-funded Marie Curie Initial Training Network TIME set out to change this pattern. The TIME project (‘Translation Research Training: An Integrated and Intersectoral Model for Europe’) involved four academic partners and four associated partners from industry.
One of the four subprojects, entitled ‘Multimedia and Multimodal Translation: Accessibility and Reception’, is a useful illustration of the impact of research on the industry. The industry partners were two small and medium-sized private companies from Spain and Portugal which provide services for audiovisual production, such as live and pre-recorded subtitling for the deaf and hard of hearing, and audio description for the blind and visually impaired. The project investigated the psycholinguistic mechanisms underlying the reading of film subtitles by deaf, hard of hearing and hearing viewers. The research produced empirical data on the impact of subtitling strategies, which were of relevance to the companies since they helped them ‘quantify the importance of accessibility, and thus help sell their products to their clients’ (Pym et al. 2014: 10). The companies introduced some changes in their AVT strategies, and the translators became more aware that ensuring accessibility belongs


The Bloomsbury Companion to Language Industry Studies

to their professional responsibilities. Accessibility is also addressed by Neves (2016), who describes the creation of multilingual descriptive audio-guides for museums by translation students collaborating with curators and visitors (see also Jankowska, this volume). Research on the impact of CAT tools and external resources on translation processes and translators’ behaviour offers another example of research that can inform the industry. For instance, Bundgaard, Christensen and Schjoldager’s (2016) observational study of an authentic MT-assisted TM translation task in the translators’ usual work environment demonstrated that the tool had both an aiding and a restraining influence on the translation process. Massey, Riediger and Lenz (2008) also observed that technological aids can interfere with the cognitive process of translation by slowing it down and diminishing the quality of the product. One reason for this can be deficient knowledge of (automated) tool features or ineffective interaction with user interfaces. The findings of such ergonomic research on how translators use tools in their workplaces can also be used by providers and developers to ‘enhance the usability of their tools and resources’ (Massey and Ehrensberger-Dow 2011: 10).

5.  Concluding remarks

Although some of the research mentioned above did not exclusively aim at investigating the roles and responsibilities of translators, its findings about processes and agency in specific settings provide insights into translators’ roles and responsibilities as well. Essentially, they indicate that the very practice of translation has changed. In the past, translators usually worked with printed texts, and their role was seen almost exclusively as ensuring accurate linguistic transfer. Nowadays, translators also work with multilingual and multimodal digital genres such as web content, videogames and apps, and they interact with computers as well as with colleagues and customers. Their responsibilities go beyond pure linguistic transfer and include, among other things, mastering CAT tools, negotiating with clients and operating in (virtual) teams. The ongoing digitalization of our world has turned translation into a semi- or even fully automated task, and translators are experiencing a networked way of working. Advances in adaptive and neural machine translation will have further impacts on workflows and responsibilities, already reflected in the increase in post-editing of MT output. It is in particular research into the ergonomics of

translation and workplace studies that has illustrated how developments in physical environments, technical tools and task variation have changed translators’ roles and responsibilities. There is, however, still a gap in our knowledge of how exactly translators operate, and of the extent to which responsibilities for translation differ from those for activities such as transcreation and localization. Future research could thus specifically investigate the roles and responsibilities of translators in their workplace environments. Detailed case studies in local situations could investigate how responsibilities differ depending on whether the activity is called translation, localization or transcreation; how a specific background and/or particular training affects the decisions a translator makes; how CAT tools impact working methods, translation quality and the translator’s (or the transcreator’s) role perception; or how exactly translators negotiate their roles in specific settings. Such research can be undertaken jointly by scholars and representatives of the industry in the widest sense, not only the translation industry, and projects can also aim at solving communication problems of social relevance. A good example of cooperation between translation scholars, engineers and emergency operators is the EU-funded project Slándáil (Security System for Language and Image Analysis), which involves the development of a graphical user interface for a system for managing emergencies arising from natural disasters (Musacchio and Panizzon 2017). Developments in the translation industry require constant reflection on how we define translators’ roles and responsibilities, which has consequences for translator education as well. Directors of training programmes need to bear in mind that their graduates will work in a rapidly changing market with fuzzy boundaries between roles and with new professional profiles emerging.
Initiatives such as the creation of a Europe-wide professional profile for audio description, funded by the EU and carried out in cooperation with universities, service providers and user associations (Perego 2017), are therefore to be welcomed. Since ‘practices and identities are never frozen’ (Gambier and Munday 2014: 27) and roles and responsibilities are constantly evolving, skills such as adaptability, critical thinking and problem solving are essential to function effectively in the industry of tomorrow. So far, research has mostly been reactive. As Liu (2018: 21) argues, ‘new practice often becomes reality prior to research or availability of research data’, and ‘research data can bring into focus aspects of existing practice previously unknown but with significant hitherto hidden implications’. Research conducted jointly by academia and the industry, including research on roles

and responsibilities of translators, should thus be developed in a more forward-looking, strategic way. Close cooperation in identifying the topic, in specifying the research questions and methods, and in carrying out the project can bring results of relevance both to the industry and to academia, and can also lead to evidence-based policy-making.

Notes

1 An online Community and Workplace for Language Professionals, https://www.proz.com/ (accessed 17 June 2018).
2 The competence areas in the EMT (2009) framework were translation service provision competence (with an interpersonal and a production dimension), language competence, intercultural competence (with a sociolinguistic and a textual dimension), information mining competence, thematic competence and technology competence.
3 There are certainly more examples of student projects of relevance to the industry, although projects actually initiated by the industry are difficult to locate.

References

Abdallah, K. (2010), ‘Translators’ Agency in Production Networks’, in T. Kinnunen and K. Koskinen (eds), Translators’ Agency, 11–46, Tampere: Tampere University Press.
ATA American Translators Association (2010), ‘Code of Ethics and Professional Practice’. Available online: https://www.atanet.org/governance/code_of_ethics.php (accessed 4 July 2018).
ATA American Translators Association (2017), ‘About Us’. Available online: http://www.atanet.org/aboutus/about_ata.php (accessed 12 January 2018).
Baker, M. (2006), Translation and Conflict: A Narrative Account, London/New York: Routledge.
Bassnett, S. and A. Lefevere, eds (1990), Translation, History and Culture, London: Pinter.
BDÜ Bundesverband der Dolmetscher und Übersetzer e.V. (2017), ‘Medieninformation’. Available online: http://bdue.de/fileadmin/files/PDF/Presseinformationen/Pressemappen/BDUe_Hintergrundinformationen.pdf (accessed 12 January 2018).
Boéri, J. and C. Maier, eds (2010), Compromiso Social y traducción/interpretación − Translation/interpreting and Social Activism, Granada: Ecos.

Bourdieu, P. (1977 [1972]), Outline of a Theory of Practice, trans. R. Nice, Cambridge: Cambridge University Press.
Bundgaard, K., T. P. Christensen and A. Schjoldager (2016), ‘Translator-computer Interaction in Action – an Observational Process Study of Computer-aided Translation’, Journal of Specialised Translation, 25: 106–30. Available online: http://www.jostrans.org/issue25/art_bundgaard.pdf (accessed 12 February 2018).
Chesterman, A. (2001), ‘Proposal for a Hieronymic Oath’, The Translator, 7 (2): 139–54.
Chesterman, A. (2012), ‘Models in Translation Studies’, in Y. Gambier and L. van Doorslaer (eds), Handbook of Translation Studies 3, 108–14, Amsterdam/Philadelphia: John Benjamins.
CIoL Chartered Institute of Linguists (2017), ‘Code of Professional Conduct’. Available online: https://www.ciol.org.uk/sites/default/files/Code.pdf (accessed 12 January 2018).
Dam, H. V. and K. Koskinen, eds (2016), ‘The Translation Profession: Centres and Peripheries’, Journal of Specialised Translation, 25: 2–14. Available online: http://www.jostrans.org/issue25/issue25_toc.php (accessed 12 January 2018).
Dam, H. V. and K. K. Zethsen (2009), ‘Who Said Low Status? A Study on Factors Affecting the Perception of Translator Status’, Journal of Specialised Translation, 12: 1–36. Available online: http://www.jostrans.org/issue12/art_dam_zethsen.php (accessed 12 January 2018).
Dam, H. V. and K. K. Zethsen (2011), ‘The Status of Professional Business Translators on the Danish Market: A Comparative Study of Company, Agency and Freelance Translators’, Meta, 56 (4): 976–97.
Dam, H. V. and K. K. Zethsen (2016), ‘“I think it is a wonderful job”. On the solidity of the translation profession’, Journal of Specialised Translation, 25: 174–87. Available online: http://www.jostrans.org/issue25/art_dam.php (accessed 12 January 2018).
DePalma, D. (2017), ‘The Fastest-Growing LSPs’, Common Sense Advisory Blogs. Available online: http://www.commonsenseadvisory.com/Default.aspx?Contenttype=ArticleDetAD&tabID=63&Aid=47253&moduleId=390 (accessed 12 January 2018).
DePalma, D., V. Hedge, H. Pielmeier and R. G. Stewart (2014), The Language Services Market: 2014. Annual Review of the Translation, Localization, and Interpreting Services Industry, Cambridge, MA: Common Sense Advisory.
DePalma, D., H. Pielmeier, A. Lommel and R. G. Stewart (2017), The Language Services Market: 2017. Annual Review of the Services and Technology Industry That Supports Translation, Localization, and Interpreting, Cambridge, MA: Common Sense Advisory.

DePalma, D., H. Pielmeier and R. G. Stewart (2018), ‘The Language Services Market: 2018’. Available online: http://www.commonsenseadvisory.com/AbstractView/tabid/74/ArticleID/48585/Title/TheLanguageServicesMarket2018/Default.aspx (accessed 15 June 2018).
Dostal, L. (2015), ‘Translation Policies and Practices in a Company: A Case Study’, MA diss., School of Languages and Social Sciences, Aston University, Birmingham.
Drugan, J. (2013), Quality in Professional Translation, London/New York: Bloomsbury.
Dunne, K. J. (2012), ‘The Industrialization of Translation: Causes, Consequences and Challenges’, Translation Spaces, 1 (1): 141–66.
Durban, C. (2010), The Prosperous Translator: Advice from Fire Ant & Worker Bee, FA & WB Press. Available online: https://prosperoustranslator.com/.
Ehrensberger-Dow, M. and G. Massey (2017), ‘Socio-technical Issues in Professional Translation Practice’, Translation Spaces, 6 (1): 104–21.
EMT Expert Group (2009), ‘Competences for Professional Translators, Experts in Multilingual and Multimedia Communication’. Available online: https://ec.europa.eu/info/sites/info/files/emt_competences_translators_en.pdf (accessed 12 January 2018).
EMT European Master’s in Translation (2017), ‘Competence Framework 2017’. Available online: https://ec.europa.eu/info/sites/info/files/emt_competence_fwk_2017_en_web.pdf (accessed 16 June 2018).
European Commission (2009), Study on the Size of the Language Industry in the EU, Kingston upon Thames: The Language Technology Centre Ltd.
FIT Fédération Internationale des Traducteurs / International Federation of Translators (1994), ‘Translator’s Charter’. Available online: http://www.fit-ift.org/translatorscharter/ (accessed 4 July 2018).
Foreman-Peck, J. and Y. Wang (2014), ‘The Costs to the UK of Language Deficiencies as a Barrier to UK Engagement in Exporting: A Report to UK Trade & Investment’. Available online: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/309899/Costs_to_UK_of_language_deficiencies_as_barrier_to_UK_engagement_in_exporting.pdf (accessed 12 January 2018).
Foreman-Peck, J. and P. Zhou (2014), ‘Firm-Level Evidence for the Language Investment Effect on SME Exporters’. Available online: http://patrickminford.net/wp/E2014_6.pdf (accessed 12 January 2018).
Gambier, Y. and J. Munday (2014), ‘A Conversation between Yves Gambier and Jeremy Munday about Transcreation and the Future of the Professions’, Cultus: The Journal of Intercultural Mediation and Communication, 7: 20–36. Available online: http://www.cultusjournal.com/files/Archives/conversation_gambier_munday_3_p.pdf (accessed 18 June 2018).
Gentzler, E. and M. Tymoczko, eds (2002), Translation and Power, Amherst/Boston: University of Massachusetts Press.

Göpferich, S. (2009), ‘Towards a Model of Translation Competence and Its Acquisition: The Longitudinal Study TransComp’, in S. Göpferich (ed.), Behind the Mind: Methods and Results in Translation Process Research, 11–37, Copenhagen: Samfundslitteratur.
Göpferich, S. (2012), ‘Tracing Strategic Behaviour in Translation Processes: Translation Novices, 4th-semester Students and Professional Translators Compared’, in S. Hubscher-Davidson and M. Borodo (eds), Global Trends in Translator and Interpreter Training: Mediation and Culture, 240–66, London: Bloomsbury.
Göpferich, S., A. L. Jakobsen and I. Mees, eds (2008), Looking at Eyes: Eye-tracking Studies of Reading and Translation Processing, Copenhagen: Samfundslitteratur.
Gouadec, D. (2007), Translation as a Profession, Amsterdam/Philadelphia: John Benjamins.
Haupt, Y. (2015), ‘Translation Policies and Practices in a Company: A Case Study’, MA diss., School of Languages and Social Sciences, Aston University, Birmingham.
Holz-Mänttäri, J. (1984), Translatorisches Handeln: Theorie und Methode, Helsinki: Suomalainen Tiedeakatemia.
Hubscher-Davidson, S. and M. Borodo, eds (2012), Global Trends in Translator and Interpreter Training: Mediation and Culture, London: Bloomsbury.
Hurtado Albir, A. (2010), ‘Competence’, in Y. Gambier and L. van Doorslaer (eds), Handbook of Translation Studies 1, 55–9, Amsterdam/Philadelphia: John Benjamins.
Hurtado Albir, A., ed. (2017), Researching Translation Competence by PACTE Group, Amsterdam/Philadelphia: John Benjamins.
Inghilleri, M. and S. Harding, eds (2010), Translation and Violent Conflict, special issue of The Translator, 16 (2).
ISO International Organization for Standardization (2015), ‘ISO 17100:2015. Translation Services – Requirements for Translation Services’. Available online: https://www.iso.org/standard/59149.html (accessed 18 June 2018).
ITI Institute of Translation and Interpreting (2014), ‘Translation: Getting it Right. A Guide to Buying Translation’. Available online: https://www.iti.org.uk/pdf/getting-it-right/english-uk.pdf (accessed 12 January 2018).
ITI Institute of Translation and Interpreting (2016), ‘Code of Professional Conduct’. Available online: https://www.iti.org.uk/attachments/article/154/Code%20of%20Professional%20Conduct%2029%2010%202016.pdf (accessed 12 January 2018).
ITI Institute of Translation and Interpreting (2017), ‘Our Background’. Available online: https://www.iti.org.uk/about-iti (accessed 12 January 2018).
Jääskeläinen, R. (2007), ‘The Changing Position of “the Translator” in Research and in Practice’, Journal of Translation Studies, 10 (1): 1–16.
Jakobsen, A. L. (2005), ‘Investigating Expert Translators’ Processing Knowledge’, in H. V. Dam, J. Engberg and H. Gerzymisch-Arbogast (eds), Knowledge Systems and Translation, 173–89, Berlin/New York: Mouton de Gruyter.
Jakobsen, A. L. (2019), ‘Moving Translation, Revision and Post-editing Boundaries’, in H. V. Dam, M. Nisbeth Brøgger and K. K. Zethsen (eds), Moving Boundaries in Translation Studies, 64–80, Abingdon: Routledge.

Jiménez-Crespo, M. A. (2019), ‘Localisation Research in Translation Studies: Expanding the Limits or Blurring the Lines?’, in H. V. Dam, M. Nisbeth Brøgger and K. K. Zethsen (eds), Moving Boundaries in Translation Studies, 26–44, Abingdon: Routledge.
Kadrić, M. and K. Kaindl, eds (2016), Berufsziel Übersetzen und Dolmetschen, Tübingen: Francke.
Kang, J.-H. (2014), ‘Institutions Translated: Discourse, Identity and Power in Institutional Mediation’, Perspectives, 22 (4): 469–78.
Katan, D. (2009), ‘Translation Theory and Professional Practice: A Global Survey of the Great Divide’, Hermes, 42: 111–53.
Katan, D. (2011), ‘Occupation or Profession: A Survey of the Translators’ World’, in R. Sela-Sheffy and M. Shlesinger (eds), Identity and Status in the Translational Professions, 65–87, Amsterdam/Philadelphia: John Benjamins.
Katan, D. (2016), ‘Translation at the Cross-roads: Time for the Transcreational Turn?’, Perspectives, 24 (3): 365–81.
Katan, D. and H. Liu (2017), ‘From Cassandra to Pandora – Thoughts on Translation and Transformation in a Multilingual and Multicultural Future’, Cultus: The Journal of Intercultural Mediation and Communication, 10: 11–26. Available online: http://www.cultusjournal.com/files/Archives/Conversation_Liu_Katan.pdf (accessed 18 June 2018).
Kiraly, D. (2000), A Social Constructivist Approach to Translator Education: Empowerment from Theory to Practice, Manchester: St. Jerome.
Kiraly, D. (2006), ‘Beyond Social Constructivism: Complexity Theory and Translator Education’, Translation and Interpreting Studies, 6 (1): 68–86.
Kiraly, D. (2013), ‘Towards a View of Translator Competence as an Emergent Phenomenon: Thinking Outside the Box(es) in Translator Education’, in D. Kiraly, S. Hansen-Schirra and K. Maksymski (eds), New Prospects and Perspectives for Educating Language Mediators, 197–223, Tübingen: Narr.
Koskinen, K. (2000), ‘Institutional Illusions. Translating in the EU Commission: A Finnish Perspective’, The Translator, 6 (1): 49–65.
Koskinen, K. (2008), Translating Institutions: An Ethnographic Study of EU Translation, Manchester: St Jerome.
Koskinen, K. (2014), ‘Institutional Translation: The Art of Government by Translation’, Perspectives, 22 (4): 479–92.
Kuznik, A. (2016), ‘Work Content of In-house Translators in Small and Medium-sized Industrial Enterprises: Observing Real Work Situations’, Journal of Specialised Translation, 25: 213–31. Available online: http://www.jostrans.org/issue25/art_kuznik.php (accessed 12 January 2018).
Lefevere, A. (1992), Translation, Rewriting, and the Manipulation of Literary Fame, London/New York: Routledge.
Liu, H. (2018), ‘Help or Hinder? The Impact of Technology on the Role of Interpreters’, FITISPos International Journal, 5 (1): 13–32. Available online: http://www3.uah.es/fitispos_ij/OJS/ojs-2.4.5/index.php/fitispos (accessed 23 June 2018).

Marketing Survey (2017), ‘Marketing as a Freelance Translator: How Much a Part of the Job?’ Available online: https://de.surveymonkey.com/results/SM-DXJN5ZRJ/ (accessed 12 January 2018).
Martín de León, C. (2013), ‘Who Cares if the Cat is on the Mat? Contributions of Cognitive Models of Meaning to Translation’, in A. Rojo and I. Ibarretxe-Antuñano (eds), Cognitive Linguistics and Translation: Advances in some Theoretical Models and Applications, 99–122, Berlin/Boston: de Gruyter.
Massardo, I., J. van der Meer and M. Khalilov (2016), ‘TAUS The Translation Technology Landscape Report 2016’. Available online: https://www.taus.net/think-tank/reports/translate-reports/taus-translation-technology-landscape-report-2016#content (accessed 12 January 2018).
Massey, G., H. Riediger and S. Lenz (2008), ‘Teaching Instrumental Competence in an eLearning Environment: A Swiss Perspective’, in R. Dimitriu and K.-H. Freygang (eds), Translation Technology in Translation Classes, 175–83, Iasi: Institutul European.
Massey, G. and M. Ehrensberger-Dow (2011), ‘Technical and Instrumental Competence in the Translator’s Workplace: Using Process Research to Identify Educational and Ergonomic Needs’, ILCEA Revue, 14: 1–14.
Mikkelson, H. (2013), ‘Community Interpreting’, in C. Millán and F. Bartrina (eds), The Routledge Handbook of Translation Studies, 389–401, London/New York: Routledge.
Moscara, M. (2013), ‘Guidelines: Professional Issues for a Faithful Translation’, Tesi di laurea in lingua e traduzione – lingua inglese, University of Salento.
Musacchio, M. T. and R. Panizzon (2017), ‘Localising or Globalising? Multilingualism and Lingua Franca in the Management of Emergencies from Natural Disasters’, Cultus: The Journal of Intercultural Mediation and Communication, 10: 92–107. Available online: http://www.cultusjournal.com/files/Archives/Musacchio_Panizzon.pdf (accessed 18 June 2018).
Neves, J. (2016), ‘Enriched Descriptive Guides: A Case for Collaborative Meaning-making in Museums’, Cultus: The Journal of Intercultural Mediation and Communication, 9 (2): 137–53. Available online: http://www.cultusjournal.com/files/Archives/Cultus9_2016_2/cultus%20_9_Volume_2_2016.pdf (accessed 18 June 2018).
Nord, C. (1997), Translating as a Purposeful Activity: Functionalist Approaches Explained, Manchester: St. Jerome.
O’Brien, S., ed. (2011), Cognitive Explorations of Translation, London: Continuum.
Olohan, M. (2007), ‘Economic Trends and Developments in the Translation Industry: What Relevance for Translator Training?’, The Interpreter and Translator Trainer, 1 (1): 37–63.
Olohan, M. (2017), ‘Machine Translation and LSPs’, in ITI (ed.), Where Are We Headed? Trends in Translation and Interpreting 2017. Available online: https://www.iti.org.uk/more/news/819-iti-publishes-trends-e-book (accessed 12 January 2018).

OPTIMALE (2011), ‘Optimizing Professional Translator Training in a Multilingual Europe’. Available online: http://www.ressources.univ-rennes2.fr/service-relations-internationales/optimale/ (accessed 12 January 2018).
PACTE (2003), ‘Building a Translation Competence Model’, in F. Alves (ed.), Triangulating Translation: Perspectives in Process Oriented Research, 43–66, Amsterdam/Philadelphia: John Benjamins.
PACTE (2009), ‘Results of the Validation of the PACTE Translation Competence Model: Acceptability and Decision Making’, Across Languages and Cultures, 10 (2): 207–30.
PACTE (2011), ‘Results of the Validation of the PACTE Translation Competence Model: Translation Project and Dynamic Translation Index’, in S. O’Brien (ed.), Cognitive Explorations of Translation, 30–56, London: Continuum.
Perego, E. (2017), ‘Audio Description: A Laboratory for the Development of a New Professional Profile’, Rivista Internazionale di Tecnica della Traduzione / International Journal of Translation, 19: 131–42. Available online: https://www.openstarts.units.it/bitstream/10077/17355/1/Ritt19_Perego.pdf (accessed 22 June 2018).
ProZ Forum (2017). Available online: https://www.proz.com/forum/ (accessed 12 January 2018).
Pym, A., F. Grin, C. Sfreddo and A. Chan (2012), The Status of the Translation Profession in the European Union, European Commission, Luxembourg: Publications Office of the European Union.
Pym, A., K. Malmkjaer and M. Gutiérrez-Colón Plana (2013), Translation and Language Learning: The Role of Translation in the Teaching of Language in the European Union. A Study, European Commission, Luxembourg: Publications Office of the European Union.
Pym, A., G. González Núñez, M. Miquel-Iriarte, S. Ramos Pinto, C. Teixeira and W. Tesseur (2014), ‘Work Placements in Doctoral Research Training in the Humanities: Eight Cases from Translation Studies’, Across Languages and Cultures, 15 (1): 1–23.
Risku, H. (2010), ‘A Cognitive Scientific View on Technical Communication and Translation: Do Embodiment and Situatedness Really Make a Difference?’, Target, 22 (1): 94–111.
Risku, H., N. Rossmanith, A. Reichel and L. Zenk (2013), ‘Translation in the Network Economy: A Follow-up Study’, in C. Way, S. Vandepitte, R. Meylaerts and M. Bartłomiejczyk (eds), Tracks and Treks in Translation Studies: Selected Papers from the EST Congress Leuven 2010, 29–48, Amsterdam/Philadelphia: John Benjamins.
Schäffner, C. (2012), ‘Rethinking Transediting’, Meta, 57 (4): 866–83.
Sela-Sheffy, R. and M. Shlesinger (2008), ‘Strategies of Image-making and Status Advancement of Translators and Interpreters as a Marginal Occupational Group: A Research Project in Progress’, in A. Pym, M. Shlesinger and D. Simeoni (eds), Beyond Descriptive Translation Studies: Investigations in Homage to Gideon Toury, 79–90, Amsterdam/Philadelphia: John Benjamins.

Sela-Sheffy, R. and M. Shlesinger, eds (2011), Identity and Status in the Translational Professions, Amsterdam/Philadelphia: John Benjamins.
Simeoni, D. (1998), ‘The Pivotal Status of the Translator’s Habitus’, Target, 10 (1): 1–39.
Taibi, M. and U. Ozolins (2016), Community Translation, London/New York: Bloomsbury Academic.
Tirkkonen-Condit, S. (2000), ‘Uncertainty in Translation Processes’, in S. Tirkkonen-Condit and R. Jääskeläinen (eds), Tapping and Mapping the Processes of Translation and Interpreting, 123–42, Amsterdam/Philadelphia: John Benjamins.
Tirkkonen-Condit, S. and R. Jääskeläinen, eds (2000), Tapping and Mapping the Processes of Translation and Interpreting, Amsterdam/Philadelphia: John Benjamins.
Toury, G. (1995), Descriptive Translation Studies and Beyond, Amsterdam/Philadelphia: John Benjamins.
UN United Nations (2017), ‘Resolution Adopted by the General Assembly on 24 May 2017. 71/288. The Role of Professional Translation in Connecting Nations and Fostering Peace, Understanding and Development’. Available online: http://www.un.org/en/ga/search/view_doc.asp?symbol=A/RES/71/288 (accessed 14 June 2018).
Valero-Garces, C. (2018), ‘Introduction. PSIT and Technology. Challenges in the Digital Age’, FITISPos International Journal, 5 (1): 1–6. Available online: http://www3.uah.es/fitispos_ij/OJS/ojs-2.4.5/index.php/fitispos (accessed 23 June 2018).
Venuti, L. (1995), The Translator’s Invisibility, London: Routledge.
Vermeer, H. J. (1987), ‘What Does It Mean to Translate?’, Indian Journal of Applied Linguistics, 13: 25–33.
von Flotow, L. (1997), Translation and Gender: Translating in the ‘Era of Feminism’, Manchester: St Jerome.
Wolf, M. and A. Fukari, eds (2007), Constructing a Sociology of Translation, Amsterdam/Philadelphia: John Benjamins.
Yoo, T. and C. J. Jeong (2017), ‘Consolidating the Professional Identity of Translators: The Role of Citizenship Behaviors’, Target, 29 (3): 361–87.
Zethsen, K. K. and H. V. Dam (2010), ‘Translator Status: Helpers and Opponents in the Ongoing Battle of an Emerging Profession’, Target, 22 (2): 194–211.

Chapter 5: Interpreters’ roles and responsibilities

Michaela Albl-Mikasa

1. Introduction

The notion of ‘role’ has long been at the centre of discussions in the field of community interpreting, while in conference interpreting the focus has largely been on interpreting ‘quality’. Other hot topics in conference interpreting include competence, memory, aptitude and didactics or professionalism, while in community interpreting these are (codes of) ethics, certification and professionalization. Buzzwords in conference interpreting are skills and knowledge, process and performance, accuracy and cognitive load. In community interpreting these are settings, agency, interlocutors, norms and standards, as well as neutrality, (in)visibility and impartiality. This chapter sets out to explore the differences in emphasis placed upon these constructs and how they interrelate. It attempts to bring together the various threads of the debate revolving around the above-mentioned notions and put them into perspective for a better understanding of interpreters’ roles and responsibilities. The differences in orientation reflect the different starting points of the two interpreting subdisciplines. Conference interpreting developed in the aftermath of the two world wars in Europe (long, that is, notation-based, consecutive mode after the First World War and simultaneous mode after the Second World War), facilitating much-needed and generally sought-after multilateral peace negotiations and the foundation of international organizations and multinational companies. Geographic centralization and impetus from diplomacy and business made for a unified movement towards a profession with considerable status and remuneration, a high degree of organization in the form of professional associations as well as university-based formal education and qualifications. As a response, the International Association of Conference

Interpreters (AIIC), founded in 1953, took the lead in defining the guidelines of the profession and laid them down in its professional standards and code of ethics (AIIC 2014a,b). By observing these, conference interpreters were and still are assumed to know what to do and how to act and behave in order to promote successful multilingual communication. Interpreting in the public services domain (e.g. in courts, at hospitals, schools, or police stations, or for the local authorities), by contrast, evolved out of a muchless-valued necessity to cater to migrants’ needs, especially after the waves of migration that characterized the second half of the twentieth century. Different regional approaches developed in countries like Australia and Sweden (which were forerunners in organized forms of community interpreting; Niska 2002: 136), with different approaches depending on national regulations (Pöchhacker 2015: 67), and, in some cases, with makeshift solutions, especially with regard to interpreter training. While national associations of conference interpreters largely follow AIIC’s guidelines, different national and regional standards have emerged in community interpreting. The demarcations in the latter are generally related to setting (e.g. health, legal, school or asylum) whereas few distinctions are made for conference interpreting despite it being done for events as varied as bidirectional business meetings, monologic lecture-like medical conferences, legal depositions, trade union meetings, press conferences, talk shows in the media, technical seminars, diplomatic exchanges, refugee and asylum-related negotiations or human rights debates. Adopting a unified approach, as was done in conference interpreting, would have required not only similar political will, popular acceptance and commercial interest but also a limited number of established (i.e. more commonly spoken) languages. 
The difficulty in predicting incoming languages and the fact that they are often languages of limited diffusion are still major impediments to managing and professionalizing community interpreting and establishing formal higher education, although there are now some university-level (BA, rarely MA) training programmes available. As a consequence, community interpreting has largely been a semi-professionalized activity with varying degrees of training and lower status and remuneration, regulated by a much greater variety of codes specifying standards of good practice and conduct as well as ethical guidelines in accordance with specific regional and institutional requirements (Bancroft 2015). At the same time, while community interpreting is gaining importance, recognition and even political and financial support in light of the influx of refugees and asylum seekers in Europe, conference interpreting is being relegated to a less needed, less sought-after service as a consequence of the global spread of English as a
lingua franca (ELF; see Donovan 2011). Moreover, the international norm on community interpreting (ISO 13611 2014) is contributing to the standardization of definitions, perceptions and requirements. Finally, the push towards remote interpreting in both conference and community interpreting is likely to act as a catalyst for further harmonization.

2.  Situated cognition as a joint paradigm for both conference and community interpreting

Despite such rapprochement between the two subdisciplines, in the research literature, scholars in the field of community interpreting make a point of carving out ‘radical differences between conference and dialogue interpreting’ (Merlini 2015: 28). This can be traced back to Wadensjö’s (1995: 111) proposal of a distinct ‘interactionist, non-normative, dialogical approach to studies of interpreter-mediated talk’ on the basis of Goffman’s (1981) ‘participation framework’. The emphasis is put on the ‘real-life dynamics of interpreter-mediated encounters’ (Pöchhacker 2016: 169), on interpreters’ active involvement in coordinating and managing the interaction and on their mediating in addition to relaying or translating functions (Wadensjö 1995). Moreover, rooted in Wadensjö’s 1998 observation of the inadequacy of the Swedish Guide to Good Practice in regulating actual interpreting practices in real-life encounters, the emphasis is placed on the gap between professional ideology (codes and standards) and professional practice (interpreters’ actual performance in a situational setting).

Subsequent investigations have highlighted the dynamic sociocultural perspective of interpreting by setting a discourse in interaction (DI) paradigm for community interpreting apart from a cognitive processes (CP) paradigm for conference interpreting (Pöchhacker 2015: 69). Consequently, the DI paradigm focuses on communicative interactions, conversation management and role behaviour, while the CP paradigm focuses on mental processing, capacity management and strategic behaviour. Yet, while the sociological (non-cognitive) perspective is used to distinguish community interpreting from conference interpreting, the CP paradigm is also sometimes invoked in support of the interactive view of interpreting.
Thus, the ‘prevailing constructivist view of (all) communication’, according to which ‘meaning is not a stable entity but is co-created between the parties to an interaction’, is assumed to ‘foreground the agency of interpreters and their effect upon communication dynamics’ (Van Dam 2017: 229). In the field of
community interpreting, the notion of agency is generally evoked to contrast with the conference interpreters’ alleged invisibility and non-involvement. Taking the cognitive socio-constructivist framework seriously, however, would perhaps mean not to place the emphasis on agency. Rather it would mean describing agency or ‘spaces of freer ability to determine interactional moves’ (Hlavac 2017: 198) in terms of the dependency of any interpreting act on a great number of cultural and contextual determinants. Based on a substantial body of empirically validated research into situated cognition (Robbins and Aydede 2009), the (socio-)constructivist paradigm is first and foremost a unifying and non-discriminating foundation that integrates the cognitive and the socio-situational dimensions and allows for active as well as passive involvement, a greater and lesser degree of visibility or impartiality, and a leaning towards either speaker in the dialogue. In my view, it functions as an umbrella paradigm for both conference and community interpreting and demonstrates how the various constructs and notions outlined above come together.

Such an understanding or theoretical framework should help not only to put the above notions into perspective but also to clarify previous misinterpretations of conference interpreting that were introduced to distinguish it from community interpreting. According to Angelelli (2000: 581), for example, conference interpreting studies reflect ‘a psycholinguistic and neurolinguistics approach to interpretation’ and are ‘limited to the question of linguistic codes and language or information processing’ at the expense of a consideration of communication in a wider sense. Contrary to this assessment, in the early 1990s, in light of the functional and skopos-oriented perspective in translation studies, Pöchhacker (1994) proposed an analytical framework covering the overarching communicative event of conference interpreting.
This ranged from the contractual conditions for an assignment to specific situational constraints and looked at the features of different (prototypical) interpreter-mediated conference-like events. Even before that, considerations of communicative and macro-process-oriented aspects had been included in (conference) interpreting studies, especially in cognitive models of interpreting. Here, again, misunderstandings prevail. As Gile points out, his effort models have been taken by prominent translation scholars to be ‘purely cognitive’, when, in fact, they ‘are socially situated, explicitly so, in communication models which take on board the sometimes conflicting interests of various actors, in loyalty principles, in social norms and psychological constraints which govern strategic and tactical decisions’ (Gile 2017: 245). Against this backdrop, the
discussion below reconsiders the central notions outlined above and places them in a socio-constructivist framework with a view to redressing imbalances and misrepresentations in the literature.

As Kalina (2015: 67) points out, interpreters’ responsibilities involve ‘putting the act of translating in context, taking intertextual, interpersonal and social factors into account’. Such considerations of the situatedness of the interpreting act demand an awareness of relevant ‘norms, conventions, values and behavioral patterns used by all the partners involved in translation processes in a certain culture’ (Prunč 2012: 2) as stipulated in codes of ethics. Codes of ethics and conduct are meant to specify what a ‘good’ interpreter should do, what interpreters’ responsibilities are and what can be expected of them. They can both become ‘a tool for decision-making’ and ‘provide a yardstick for the profession to measure the ethical quality of professional performance’ (Driesen 2003: 72). However, as interpreters must be aware, codes cannot cover all the moral and ethical challenges they may be faced with (Prunč 2012: 329). It is thus the interpreters’ responsibility to take the appropriate decision and aid communication bearing in mind the norms and conventions that apply in the wider context while operating under the constraints of the circumstances or working conditions of a given assignment (e.g. more or less competent or cooperative behaviour among other participants, emotionally laden environments or other difficult working conditions). Interpreting quality, then, is a function of how well all of this can be integrated in the decision-making process.

From a cognitive–constructivist point of view, interpreters can consider and put into practice in the situation (performance) only what they know (competence).
That is, norms can be brought to bear on the decision-making process only insofar as they are mentally represented by the interpreter (and all other participants for that matter). What is not known cannot be implemented even under the most favourable of conditions, hence the need for training and professionalization. Similarly, since situational conditions include interlocutors’ mental representations and behaviour, good performance can be achieved only as a function of such knowledge on all sides. Some of these interrelations have been described in the literature. For instance, in translation studies, norms have been defined as knowledge of what is regarded as correct and appropriate behaviour (Schäffner 1999: 1). Such ‘competence to make consistent ethical decisions’ (Prunč 2012: 8) involves awareness of ethical codes, without which interpreting quality is ‘beyond […] control’ (Kalina 2015: 73). I would add that this knowledge is learnt, moulded and strengthened
through experience (performance level). The resulting expertise (competence level) feeds back into the actual interpreting activity (performance level) and shapes ‘the performed and assumed roles enacted by interlocutors’ in a highly ‘empirical sense’ (Hlavac 2017: 198). The diagram in Figure 5.1 models these interdependencies for dialogic settings.

Figure 5.1  Dialogue interpreting as a situated cognitive activity.

The model highlights, and therefore magnifies, the interpreter’s position. Set against his or her individual skills and disposition, the interpreter engages in a process of knowledge construction and further skill acquisition through training. The competence built up in the process determines his or her performance and decision-making process. Experience gained while actually interpreting (performance level) feeds back into competence building, which is further enriched by inputs from codes of ethics and standards of good practice. All of this influences (top-down) role behaviour as part of the interpreter’s performance and also the quality of this performance and resulting product or target text.

At the same time, what an interpreter does or does not know of interpreting norms and standards (competence) is not tantamount to how he or she implements this knowledge in the situated interpreting act (performance). Bottom-up factors, such as the working conditions, the participants and other situational factors in the immediate communicative situation, also affect and shape performance. The same applies to the wider institutional or setting-related environment, even extending to the regional and cultural background in which the interpreting task is embedded. Naturally, the performance of the participants/
interlocutors in the situation, which influences the interpreter’s performance, is likewise determined by their competence and background knowledge. This may to a greater or lesser extent include knowledge of interpreter-mediated communication, ethical considerations or an understanding of the needs and requirements of the other participants. For instance, doctors’ lack of insight into interpreters’ needs has been found to negatively affect communication (Sleptsova et al. 2015).

These interdependencies lead to a constant trade-off on the performance level. Factors arising from the situation, from participants, for instance, may become less influential as interpreters’ knowledge and experience increase, because the more interpreters know and are guided top-down by their background knowledge, the less they are affected by bottom-up factors. While it is true that the circumstances of an assignment tend to override coded specifications (e.g. in the event of a moral dilemma, impartiality or confidentiality may be unmaintainable, see Kalina 2015: 78), it is also true that the interpreter can influence the situation through an in-depth understanding of these specifications.

What we see here is a cognitively framed understanding of Goffman’s often-cited 1961 sociological notions of the normative role as covered by official codes and standards, the typical role of an individual in a certain position and the role performance or actual role behaviour, that is, ‘how a person enacts a role that reflects situational dynamics and characteristics such as aptitude or personality’ (Hlavac 2017: 199). While interpreters form an understanding (representation) of their role and responsibilities as part of their competence, actual role behaviour is enacted, variable, dynamic and fluid depending on the multifactorial scenario.
Although some analysts in community interpreting present this as a novelty, role (behaviour) has, in fact, always (or at least since Goffman) been conceived not as a static, reified entity, but as an enactment process. I would add that this enactment process is at the interface of competence and performance because of its dependence on mentally encoded or represented experience and expertise. The model is also reminiscent of the classic distinction between competence and performance (Chomsky 1963) or even langue and parole with respect to social norms and individual manifestations of such norms in the act of speaking (de Saussure 1916). What this cognitive framework adds is the processing dimension, that is, how the various influencing factors (both knowledge and situational determinants) interact in the communication process to shape interlocutors’ role behaviour and decision-making and thus the communication outcome or product and its quality.

The next section outlines the primary research focal points with a view to adjusting some of the imbalances that have come up in the discussion of interpreters’ roles and responsibilities.

3.  Research focal points

In conference interpreting, the question of the interpreter’s role is said to have largely been governed by the metaphorical concept of interpreters as conduits, with the normative expectation that an interpreter’s primary function is ‘to act as a passive and emotionless channel which solely has to convey a sense that is inherent in the message as delivered by the speaker’ (Zwischenberger 2015: 107). According to Zwischenberger, this normative expectation has assumed the proportions of a supernorm, having been propounded by AIIC, the most influential norm-setting authority in the field, as well as by university-level training and senior interpreting colleagues.

In fact, the results of her web-based survey among AIIC members, which yielded 704 responses, indicate that this norm is often the benchmark that practising interpreters strive for (2015: 108). The findings also reveal some significant relationships with sociodemographic background variables, in that women show a higher degree of loyalty to the speaker/original than their male counterparts and that all interpreters tend to act more independently as they gain experience. Zwischenberger cautions that the conduit model of the uninvolved interpreter may be a convenient image for AIIC marketing purposes, but that adherence to such an image may actually ‘relegate the interpreter to a secondary position’ (2015: 108). At the same time, as pointed out by Gile (2017: 241), ‘under many circumstances which arise in conference settings, in press conferences, in technical and commercial seminars, in political speeches […] the neutral conduit model is a useful ideal, still widely accepted within the profession as the default standard’.

During in-depth interviews, experienced conference interpreters have indeed described their ideal as merging into the speaker and going unnoticed or as creating in the audience the feeling of having listened to the original itself rather than to an interpreter (Albl-Mikasa 2012). However, they have also made clear that such ‘invisibility’ stops at the meta level of communication. The professionals who took part in the interviews wanted to be noticed for their contribution to successful communication and were keen to receive feedback on their performance. Moreover, depending on the situational conditions, they seemed to feel quite free about taking all necessary liberties and departing from
such ideal (conduit-like) rendering procedures in the interests of facilitating and enabling the communication flow in the (many) moments when conditions are far from ideal. Similar reports have been found in other investigations, as highlighted by Diriker (2015: 182):

    While most interpreters define their idealized role as a neutral mediator between languages, interpreters’ readers, blogs and memoirs […], which contain anecdotal accounts and real-life stories by interpreters, are full of references to instances in which interpreters shape their delivery not only with regard to the linguistic or semantic aspects of the original speech, but also with regard to situational, psychological and other factors.

In fact, this could be the explanation for Zwischenberger’s (2017) finding that respondents’ main source of satisfaction is fulfilling their own standards and, with respect to dissatisfaction, their failure to do so.

While Van Dam (2017: 237) cautions against a ‘scenario’ where ‘each individual (conference) interpreter sets her own standards for professional behavior’, I would suggest that the adherence of experienced interpreters to their own standards is simply a sign of their professional recognition of the liberties they must take to balance out situational conditions; more than anything else, it is testimony to the authority with which they take their decisions. It is indeed professional awareness of what a situation requires that gives them the flexibility of not having to hold too closely to rules and regulations. I would therefore consider professional interpreters’ sense of their ‘own standards’ to be a hallmark of the ‘unified professional identity that imposes its own role on those who require their services, rather than be[ing] relegated to the role conferred on them by each individual client in each individual encounter’ that Van Dam (2017: 238) calls for.

Professional conference interpreters are tuned into their (AIIC rule-based) role behaviour, which includes professional adjustment to working conditions. Above and beyond such internalized competence, interview respondents in the above-mentioned study (Albl-Mikasa 2012) expressed that they felt they were on an equal footing with their customers, knowing full well, however, that this may not always be a shared feeling. It is perhaps not the interpreters’ vision of in/visibility and non-involvement but that entertained by many customers that ought to be questioned.

It may be because of the combination of the assumed conduit ideal and the concurrent tacit understanding of the professional liberty to depart from it in practice that role labels have played a negligible part in the context of conference interpreting, while in community interpreting, role definitions
and denominations have taken centre stage. One reason is that community interpreting takes place in highly heterogeneous dialogic interactional settings, such as doctor–patient encounters, courtroom hearings, and police, immigration, asylum and welfare service interviews, ‘between professionals and clients who differ in social and cultural background, religious beliefs, status and level of education’ (Niska 2002: 137). ‘Large differences in status and power between the parties and high personal stakes of individual communication events make ethical issues much more salient, with implications for the interpreter’s role’ (Gile 2017: 241). As Cambridge (2002) reminds us, there are therefore multiple pressures on community interpreters as human beings with emotions. Conference interpreting, by contrast, predominantly takes place between equal partners whose cultural differences are often offset by shared know-how.

The principal roles under discussion in the specialist literature on community interpreting are the following: (1) conduit (faithful renderer of the original utterance); (2) clarifier (filter, embellisher and speech assistant); (3) cultural broker (gatekeeper); (4) advocate (for the powerful participant or for the powerless participant) (Niska 2002: 138, with Hale’s 2008 distinctions in brackets). By contrast, Leanza (2005: 186) proposes the following fourfold typology of interpreters’ roles: (1) system agent, (2) integration agent, (3) community agent and (4) linguistic agent, whereby the linguistic and system agents tend to lean towards the healthcare provider side.

What makes role definition even more problematic are conflicting role perceptions and expectations on the part of clients, interpreters and service providers (Pöllabauer 2007) as well as service providers’ lack of experience in how to work with interpreters and their misconceptions about interpreters, which may negatively affect the interpreters’ role and task performance (Määttä 2015).

As a consequence, the debate revolving around the traditional notion of ‘role’ has moved from more static descriptions towards emphasizing the dynamic nature of discourse, situated activity and participant roles. There has been a recognition of role shifts within single assignments, the influences on these shifts (e.g. balancing out personal ethics, empathy and professional ethics, and speaker fidelity), the triggers for them (e.g. untenable cultural assumptions on the speaker’s part) and their consequences (e.g. softening of confrontation) (Leung Sin-Man and Gibbons 2008). Interpreters ‘as cultural and linguistic mediators and as social beings’ have been found to ‘continuously negotiate their identity with their clients while interpreting’ rather than ‘operating within a third “invisible” space between interlocutors’ (Nakane 2009: 1, my emphasis).

From single role definitions and role shifts, the focus has moved to a functionalist perspective of interpreter-mediated interaction as ‘a process of co-construction and teamwork shaped by a given situational and institutional context’ (Pöchhacker and Kolb 2009: 133, my emphasis) and to ‘a revised understanding of interpreting as a comprehensively collaborative activity’ with ‘the interpreter’s role as a weaver-together of narratives and a connector of people’ (Turner 2007: 181, my emphasis).

From such an approach that ‘scrutinizes the co-created dialogue between the interpreter, the consumers who are present, and the context of their collective encounter’ (Dean and Pollard 2011: 155) and stresses interpreters’ active involvement, researchers have proceeded to highlight quality in interpreting as ‘a shared responsibility’ and emphasize the importance of all participants in interpreter-mediated encounters being aware of each other’s roles and respective goals. Recognition of the shared responsibility for the quality of the outcome (Ozolins and Hale 2009) has led to demands for an understanding of all participants’ mutual requirements and expectations. From their large-scale study, Sleptsova et al. (2015: 363) conclude, for instance, that ‘it is important that medical professionals and interpreters discuss their roles and expectations before every clinical consultation’.

This more teleological (or outcome-focused) framework of ethical reasoning for community interpreting is fully in line with the socio-constructivist view of conference interpreting but has not redressed prevailing imbalances. Bahadir (2017: 122) suggests that a new era has been ushered in, since it seems quite normal to consider ‘a self-conscious, self-reflexive, responsible, active and participating interpreter as a third party to a communication situation’. However, this does not make the interpreter, as Bahadir suggests, a highly visible participant under all circumstances.

There is of course more active involvement in a dialogue situation than in monologic booth interpreting, in consecutive than in simultaneous interpreting, and in mental health than in court settings. What matters are working conditions, which also differ within given settings, so that there are even situations in conference interpreting where ‘no complete communication takes place unless the interpreter intervenes and explains what is not clear’, as Eraslan (2008: 26) found in an analysis of interpreted interactions. Cultural differences also come into play, for example, in a Japanese context where impartiality may not be a core feature.

Recent approaches to dynamic role modelling seem to fall more closely in line with the view taken here, namely the role-space model by Llewellyn-Jones and Lee (2014). Based on signed and spoken-language interpreted
interactions, the interpreter’s role and behaviour is seen as emerging from the dynamics of any given situation and the three-dimensional ‘role-space’ the interpreter creates within three axes describing the interpreter’s positioning(s). These three axes are participant/conversational alignment (in the Goffmanian sense), interaction management (including self-authored interjections, Roy 2000) and ‘presentation of self’ (Mead and Morris 1965). Llewellyn-Jones and Lee (2014) take for granted excellent language skills as well as the freedom of the interpreter to make appropriate professional decisions. These facets fall in line with the interdependence between skill acquisition/competence and the decision-making process as outlined in Figure 5.1, and point to the pivotal role of professionalization or knowledge building. Similarly, agency or ‘spaces of freer ability to determine interactional moves’ (Hlavac 2017: 198) necessitate knowledge of how to move.

As is well known from simultaneous conference interpreting, interpreters are first and foremost out to survive (cf. Gile’s tightrope hypothesis, 2009; Monacelli 2009). Coping efforts are also observed in community interpreting, where empirical investigations have unearthed numerous instances of interpreters’ deficiencies (e.g. Sleptsova et al. 2015). Problems may result from a lack of awareness of the institutional macro- and situational micro-level processes. A finding that community interpreters systematically omit hedges and phatic expressions, which have a major function in securing compliance and building trust and rapport, points to the importance of interpreters being familiar with institutional interaction patterns in the respective setting (e.g. medical; see Albl-Mikasa and Hohenstein 2017). Institutional patterns and norms become part of the implicit knowledge that participants working in those institutions develop (Christensen 2011: 9).
When such knowledge is not shared by freelance interpreters, the common ground for interactional moves is missing and agency may become ineffectual or counterproductive. Basic knowledge of ethical standards is also a prerequisite for the reflection that informs behavioural norms and ethical positions (Bahadir 2004). It is paramount for interpreters to be able to ‘make informed choices with regard to codes of ethics e.g., when to adhere to them and when to ignore what they deem to be their irrelevant or impossible demands’ (Inghilleri 2005: 75, my emphasis), especially since community interpreters are ‘privy to complex or highly privileged information – whether related to national security, or personal trauma or difficulty, or sensitive business negotiations’ (Ozolins 2015: 319).

Especially in military interpreting, the setting may override all other considerations. Many interpreters in conflict and war zones, whether professionals or not, find
themselves in situations ‘where “the right thing to do” cannot be calculated or predetermined, but can only ever be decided in the event itself’ (Inghilleri 2008: 212), irrespective of any norms such as impartiality. Any degree of involvement, visibility or neutrality thus depends on the setting, culture, jurisdiction, formality and mode of interpreting (simultaneous vs. consecutive). The more complex and fragile the context, as in the case of military and humanitarian interpreters, the more unpredictable the role behaviour in the actual interpreting situation becomes (Moser-Mercer 2015).

Having said that, a competent professional should know how to assess the needs in any particular assignment environment and have the skills to satisfy them as best as working conditions allow. I would therefore understand interpreters’ responsibility as striving for professionalization, being aware of setting-specific background knowledge and interaction patterns, informing co-participants of the prerequisites for successful mediated communication, developing an understanding of all interlocutors’ expectations, and seeking clarification (and post-assignment improvement measures).

In all of this, it should be remembered that interpreters work to earn a living and that they should show professional dedication rather than devotion and self-sacrifice, which is often still an explicit or tacit expectation in community interpreting, and especially in signed language interpreting (Van Dam 2017: 234). As Sasso (2017) aptly points out, it is not always understood by (community) interpreters that they are business people by default. While an interpreter should not fall into the trap of becoming the contracting party’s accomplice in the hope of receiving future assignments (Kalina 2015: 79), they should never lose sight of the entrepreneurial dimension of their work when reflecting on their role and responsibilities.

4.  Informing research through the industry

In reviewing the literature on role and responsibilities in community interpreting, it becomes apparent that researchers tend to go into a situation, observe interpreters and deduce what they are doing (or, often, what they are not doing). Shlesinger (1989: 113) identifies a methodological problem here: norms are sometimes defined on the basis of individual interpreter behaviour rather than on comprehensive and representative corpora compiled for analysis. What researchers could learn from the conference interpreting profession is to focus more on analysing not the individual interpreter but the situation and needs in particular settings
(which professionals seek to understand) and devise training concepts to enable interpreters to enter the situation and facilitate communication by meeting the situational requirements. To that end, it would also be useful for researchers to be aware of how the professional associations and industry see themselves. The codes of ethics and standards of good practice that feed into interpreters’ knowledge, for instance, are an abstraction of a very complex reality featuring multiple stakeholders with varying and sometimes conflicting interests and expectations among the parties involved (clients, providers and interpreters). They often apply to a specific region or institution, setting standards for an ‘association’s membership’ or with ‘a wider function, detailing ground rules and techniques for practice and serving as educational documents, for users of services as much as for practitioners’ (Ozolins 2014: 369).

In an in-depth analysis of sixteen codes and sets of standards, Van Vaerenbergh (2019) demonstrates the extent to which codes, standards, rules of conduct, guidelines and principles – while sharing core values – vary across countries and continents, especially in the healthcare sector. Van Vaerenbergh’s analysis shows just how important it is for researchers to look closely at the codes issued by professional associations so as to avoid premature conclusions.

For instance, in view of the above-mentioned diversity among codes and standards, it comes as a surprise to see the infiltration of conference interpreting ideals into community interpreting deplored in the research literature.
The latter contains statements to the effect that standards of practice ‘based on conference interpreting […] were transferred in toto to court and medical settings’ (Angelelli 2006: 176) or that ‘the same rules and principles laid down for conference interpreters’ were adopted, ‘guaranteeing confidentiality, maximum objectivity, impartiality, and self-effacement’ (Merlini 2015: 28), even though neither AIIC’s Code of Professional Ethics nor its Professional Standards (2014a, b) makes even a single mention of any such term. In fact, not only are codes of ethics not fully applicable across different settings within the same country or across countries and cultures, but even within one particular type of setting, such as the legal domain, it is difficult to ‘have a universal code of ethics adopted by all court interpreters, similar to that known in other professions […], in particular the medical profession’ (Driesen 2003: 69). This is due not only to regional differences but also to a lack of corporate identity, heterogeneous backgrounds and ethical dilemmas in real interpreting work. Dam (2017: 237), however, counters that the ‘adoption of a uniform set of norms and standards […] that applies across areas of practice is a prerequisite for any

Interpreters’ Roles and Responsibilities


occupational group to claim professional status’. There is, therefore, a push towards establishing standards with global reach, such as the ISO 13611 (2014) standard on community interpreting and the general standard on interpreting, which is underway at the time of writing.1 Such international standards are developed by groups of experts from professional practice and academia.

There are other instances in which some of the research literature reflects misconceptions and sometimes seemingly ideological views regarding not only conference interpreting but also the demarcation lines between conference and community interpreting. The profession’s perspective might help to resolve these. Thus, the conduit model, which admittedly meets with greater acceptance in conference than in community interpreting, does not, in fact, refer to literal interpreting, because ‘some linguistic and information manipulation of the speech [is always] necessary to optimize the transmission of information to serve the speaker’s interest’ (Gile 2017: 241) and ‘interpreting takes place not only between languages (language codes) but between cultures’ (Kalina 2015: 77). Kalina (2015: 79) therefore stresses that active cultural mediation, explaining cultural peculiarities, is part of both community and conference interpreting. There is thus a tension between the claim that ‘the conduit model is dismissed as invalid’ (Dam 2017: 237) and the view that, depending on circumstances, it can be ‘a useful ideal’ (Gile 2017: 241). Resolving the misunderstandings surrounding the conduit model seems important given that some authors locate the root of a fundamental conflict for interpreters in this area. According to Tate and Turner (2002: 375), propagation of the conduit model may force the interpreter ‘into the position of making their discretionary choices and exercising power covertly’. Professional interpreters, however, are unlikely to identify with that view.
As discussed in Section 3, they feel they have a professional right to take necessary liberties.

Yet another misconception revolves around the notion of empathy. Community interpreting scholars claim that, prompted by developments in the field of conference interpreting, the ‘basic equation between professionality and emotional detachment resulted in the stigmatization of any form of interpreters’ empathic involvement’ (Merlini and Gatti 2015: 143–4). While community interpreters may indeed be more likely to interpret under emotionally disturbing circumstances, conference interpreters do not shun empathy. Empathy, unlike sympathy or compassion, is not emotion-based but refers to identifying with someone else’s feelings and understanding what he or she is trying to say. As was made very explicit in in-depth interviews with professional conference interpreters, this is a fundamental ingredient of their skill (Albl-Mikasa 2014).


In fact, nowhere in the field of conference interpreting is empathy repudiated. As an aside, conference interpreters, in the same interviews, spoke in no uncertain terms of high levels of emotionality in the face of European Parliament debates on human rights violations, torture, starvation or expulsion.

Finally, many professional interpreters would probably not see themselves as proactive participants but might view ‘agency’ in a more constructivist sense of the term, namely as space for flexible role behaviour that can be more active or passive depending on the various influencing factors in a specific assignment situation. Rather than evoking a divide between the active and passive, the visible and invisible, the neutral and involved, research efforts should aim to provide empirical substance with which to counter customers’ often unclear understanding of interpreters’ roles (including their assumptions of interpreters’ invisibility).

5.  Informing the industry through research

Two areas where research is called upon to provide insights, not least to inform the industry, are the increasing use of international English or ELF and of new technologies, the two major factors impacting conference interpreting today according to an AIIC account (Jones 2014). In a global survey on interpreters’ self-perception of their professional status, the ‘unholy alliance between ELF and modern internet-based technology’ was felt to threaten the profession and downgrade the once much-admired service to a simple commodity in the eyes of clients (Gentile and Albl-Mikasa 2017: 60). Such commoditization of conference, and also of community, interpreting can only be stopped by not repeating the mistakes of the translation industry, where translation has been turned into a product rather than a (high-level) service.

According to Sasso (2017), the profession, with its ‘orientation toward […] competence, standards and regulations’, has achieved published international standards and networks as well as training and professional development opportunities, but has failed to connect with the industry and its focus on business and growth. To redress the balance, interpreters must have a say in what technology is used and how it should be used, because only they ‘can understand the technology that can enhance their work’. Sasso continues by pointing out that, in their capacity as ‘independent business people’ or ‘solopreneurs’, interpreters must have the agency to advocate for themselves, effect change in the right direction, embrace technology and occupy the marketplace. However, neither associations nor networks have the resources to supply the necessary arguments, so support is needed from the research community. A case in point is Reithofer’s (2013)
empirical finding that conference participants listening to the interpreter achieved significantly higher comprehension scores than those listening directly to the ELF speaker. While a number of relevant studies into ITELF (interpreting, translation and English as a lingua franca; for an overview see Albl-Mikasa 2017) have demonstrated the effects of the growing proportion of source texts produced by non-native English speakers, virtually nothing is known about how interpreters’ role behaviour may change in the process. It is unlikely to remain unchanged in the face of some loss of control over performance, given that ‘the quality of the interpretation is largely a function of the quality of the source text (ST) to be interpreted […] and […] the interpretation cannot really be any better than the respective ST’ (Kalina 2006: 253, my translation), and that ‘the speaker factor, i.e. the way a particular speaker constructs and delivers his/her speech’, is one of the strongest determinants of interpreting difficulty (Gile 2009: 200). At the same time, there is some evidence that interpreters are uniquely placed to compensate for the negative effects of non-native speech for listeners, who might have even more difficulty than interpreters in understanding non-standard input. The ways in which, and the extent to which, problematic source texts can and should be improved, or accommodation measures taken, should be ascertained through translation and interpreting research. Besides improving overall comprehension, this would provide interpreters with additional tangible examples of the added value they can bring to multilingual events. This also applies to the delicate issue of ‘translating English for Specific Purposes (ESP) between non-native English interlocutors’ (Tripepi Winteringham 2012: 142).

In community interpreting settings, the growing use of ELF is an additional complicating factor that is poorly understood by service providers and clients.
Preliminary research findings suggest that misconceptions abound among providers (e.g. that interpreters and migrants are sure to understand each other in lingua franca English, or that utterances get double-checked when English is spoken) and that constant monitoring by the official (who might know some English) may undermine the interpreting process (Määttä 2017).

Research might also help revisit or broaden the interpreter’s role and scope. In times of spreading ELF, in particular, interpreters may partially venture into project management, vendor management or localization (Horváth 2016) or rebrand as international intercultural communication consultants in dealing with the changing landscape of multilingualism (Albl-Mikasa 2017). This involves changes in interpreters’ self-concept (i.e. how they conceive of their role and responsibilities), as suggested by Massey and Wieder (2019) for translators. Based on a comprehensive survey among translators, translation
project managers and corporate communication specialists, they call for a redefinition and broadening of translators’ ‘professional opportunities and range, developing an extended self-concept as intercultural mediators, adaptive transcreators and language consultants’. In view of the ‘objective deterioration of working conditions in the interpreting profession over the past 40 years’ (Gile 2017: 244), new avenues have to be investigated. These may include across-the-board interpreting, that is, an increasing number of interpreters working in both conference and community interpreting, since the market for the former seems to be shrinking while the latter is growing in importance. Another item on the research agenda is a broader and overarching appreciation of ethics and role behaviour in increasingly complex communicative situations.

Finally, research is needed to inform the industry and the interpreting profession in the face of the increasing use of videoconference (VC) systems in courts and other interpreting contexts. According to Devaux’s (2016) interviews with practising legal interpreters in the UK, respondents perceive their role as changing because of the lack of opportunity to introduce themselves and clarify their role at the outset, which may compromise the defendant’s perception of the interpreter’s neutrality. Research should also be done into issues relating to back-channelling, body language and turn management, and their impact on role behaviour. While remote interpreting has the potential to open up new job opportunities for interpreters (referred to as the ‘mobility factor’ by Sasso 2017), it also threatens to push interpreters back into the invisibility trap, given that many clients seem to think that interpreters should remain tucked away, as invisible and as aloof from stakeholders as possible.
Informing the industry of the complexities of situated cognition as outlined above may help avert naïve expectations of an invariably passive, invisible, neutral and uninvolved interpreter, and perhaps even raise the profession’s status in the eyes of end users and contracting customers.

6.  Concluding remarks

The notions of interpreters’ roles and responsibilities are at the centre of the discussion in the literature on community interpreting in particular. While, as outlined above, there may be good reasons for this, it is much less plausible that they should serve to drive a wedge between conference and community interpreting practitioners or scholars. In this chapter, I propose a cognitive–constructivist model of dialogue interpreting. The framework is then used to
tease apart misconceptions and misrepresentations that have arisen in the process of propagating different paradigms for conference and community interpreting (both of which embrace dialogue interpreting as well as more and less interactive interpreting settings). For instance, ethical norms and guidelines of good practice can only serve as idealized or decontextualized prototypical role models for the settings they were drawn up for. Their implementation not only requires prior knowledge of such standards on the part of the interpreter but also necessarily varies according to individual and situational factors in the actual interpreting event. Accordingly, professionalism includes the sovereignty to take liberties, which, according to Prunč (2012: 8), applies in equal measure to community interpreting: ‘Above all, community interpreters need to have the competence to make consistent ethical decisions in the continuum between neutrality and advocacy.’

From such a unifying perspective, conference interpreting and community interpreting are understood to be bilingual situated cognitive activities involving (source speech) comprehension and (target speech) production processes. Situational factors (working conditions, settings, degrees of interaction, etc.) and interpreting modes (predominantly simultaneous in conference and predominantly consecutive in community interpreting) may differ, but both types of interpreting share common threads: meaning construction in bilingual comprehension and production processes, complex multilingual communication management, issues of stress and cognitive load, and expert skills. Training, remuneration and status should therefore be at similar levels. Achieving this can be difficult, however, not only because of political and societal circumstances but also because researchers pursue disjunctive paths.
In addition to the industry and profession informing research and vice versa, should research perhaps seek to inform research to a greater degree?

Note

1. A German national norm for conference interpreting (DIN 2347) was introduced in March 2017. See https://www.din.de/en/wdc-beuth:din21:268573014

References

AIIC (2014a), ‘Code of Professional Ethics’, AIIC Website. Available online: http://aiic.net/p/6724 (accessed 2 October 2017).
AIIC (2014b), ‘Professional Standards’, AIIC Website. Available online: http://aiic.net/p/6746 (accessed 2 October 2017).
Albl-Mikasa, M. (2012), ‘Interpreting Quality in Times of English as a Lingua Franca (ELF): New Variables and Requirements’, in L. N. Zybatow, A. Petrova and M. Ustaszewski (eds), Translation Studies: Old and New Types of Translation in Theory and Practice. Proceedings of the 1st International Conference TRANSLATA. Translation & Interpreting Research: Yesterday? Today? Tomorrow? May 12–14, 2011, Innsbruck, 267–73, Frankfurt am Main: Peter Lang.
Albl-Mikasa, M. (2014), ‘Receptivism. An Intertraditional Approach to Intuition in Interpreter and Translator Competence’, in L. N. Zybatow and M. Ustaszewski (eds), Bausteine Translatorischer Kompetenz oder was macht Übersetzer und Dolmetscher zu Profis. Innsbrucker Ringvorlesungen zur Translationswissenschaft VII, Forum Translationswissenschaft, 51–81, Frankfurt am Main: Peter Lang.
Albl-Mikasa, M. (2017), ‘ELF and Translation/Interpreting’, in J. Jenkins, W. Baker and M. Dewey (eds), The Routledge Handbook of English as a Lingua Franca, 369–83, London/New York: Routledge.
Albl-Mikasa, M. and C. Hohenstein (2017), ‘Cognition in Community Interpreting: The Influence of Interpreter’s Knowledge of Doctor-Patient Interaction’, in D. Perrin and U. Kleinberger (eds), Doing Applied Linguistics – Enabling Transdisciplinary Communication, 130–8, Berlin: de Gruyter.
Angelelli, C. (2000), ‘Interpretation as a Communicative Event: A Look through Hymes’ Lenses’, Meta, 45 (4): 580–92.
Angelelli, C. (2006), ‘Validating Professional Standards and Codes – Challenges and Opportunities’, Interpreting, 8 (2): 175–93.
Bahadir, S. (2004), ‘Moving In-Between: The Interpreter as Ethnographer and the Interpreting-Researcher as Anthropologist’, Meta, 49 (4): 805–21.
Bahadir, S. (2017), ‘The Interpreter as Observer, Participant and Agent of Change – The Irresistible Entanglement between Interpreting Ethics, Politics and Pedagogy’, in M. Biagini, M. S. Boyd and C. Monacelli (eds), The Changing Role of the Interpreter – Contextualising Norms, Ethics and Quality Standards, 122–45, London/New York: Routledge.
Bancroft, M. (2015), ‘Community Interpreting – A Profession Rooted in Social Justice’, in H. Mikkelson and R. Jourdenais (eds), The Routledge Handbook of Interpreting, 217–35, London: Routledge.
Cambridge, J. (2002), ‘Interlocutor Roles and the Pressures on Interpreters’, in C. Valero Garcés and G. Mancho Barés (eds), Community Interpreting and Translating: New Needs for New Realities, 119–24, Alcalá de Henares: Universidad de Alcalá.
Chomsky, N. (1963), Syntactic Structures, 3rd edn, ’s-Gravenhage: Mouton.
Christensen, T. P. (2011), ‘User Expectations and Evaluation: A Case Study of a Court Interpreting Event’, Perspectives, 19 (1): 1–24.
Dam, H. V. (2017), ‘Interpreter Role, Ethics and Norms – Linking to Professionalization’, in M. Biagini, M. S. Boyd and C. Monacelli (eds), The Changing Role of the Interpreter – Contextualising Norms, Ethics and Quality Standards, 224–39, London/New York: Routledge.
De Saussure, F. (1916), Cours de Linguistique Générale, Lausanne: Librairie Payot.
Dean, R. K. and R. Q. Pollard (2011), ‘Context-based Ethical Reasoning in Interpreting: A Demand Control Schema Perspective’, The Interpreter and Translator Trainer (ITT), 5 (1): 155–82.
Devaux, J. (2016), ‘When the Role of the Court Interpreter Intersects and Interacts with New Technologies’, CTIS Occasional Papers, 7: 4–21.
Diriker, E. (2015), ‘Conference Interpreting’, in H. Mikkelson and R. Jourdenais (eds), The Routledge Handbook of Interpreting, 171–85, London/New York: Routledge.
Donovan, C. (2011), ‘Ethics in the Teaching of Conference Interpreting’, The Interpreter and Translator Trainer (ITT), 5 (1): 109–28.
Driesen, C. (2003), ‘Professional Ethics’, in E. Hertog (ed.), Aequitas: Access to Justice across Language and Culture in the EU. Grotius project 2001/GRP/015, 69–73, Antwerp: Departement Vertaler-Tolk, Lessius Hogeschool.
Eraslan Gercek, S. (2008), ‘Cultural Mediator or Scrupulous Translator? Revisiting Role, Context and Culture in Consecutive Conference Interpreting’, in P. Boulogne (ed.), Translation and Its Others. Selected Papers of the CETRA Research Seminar in Translation Studies 2007, 1–33. Available online: https://www.arts.kuleuven.be/cetra/papers/files/eraslan-gercek.pdf (accessed 7 April 2018).
Gentile, P. and M. Albl-Mikasa (2017), ‘Everybody Speaks English Nowadays. Conference Interpreters’ Perception of the Impact of English as a Lingua Franca on a Changing Profession’, Cultus, 10: 53–66.
Gile, D. (2009), Basic Concepts and Models for Interpreter and Translator Training, 2nd edn, Amsterdam/Philadelphia: John Benjamins.
Gile, D. (2017), ‘Norms, Ethics and Quality – The Challenges of Research’, in M. Biagini, M. S. Boyd and C. Monacelli (eds), The Changing Role of the Interpreter – Contextualising Norms, Ethics and Quality Standards, 240–50, London/New York: Routledge.
Goffman, E. (1961), Encounters: Two Studies in the Sociology of Interaction, Indianapolis: Bobbs-Merrill.
Goffman, E. (1981), Forms of Talk, Philadelphia: University of Pennsylvania Press.
Hale, S. B. (2008), ‘Controversies over the Role of the Court Interpreter’, in C. Valero Garcés and A. Martin (eds), Crossing Borders in Community Interpreting. Definitions and Dilemmas, 99–121, Amsterdam/Philadelphia: John Benjamins.
Hlavac, J. (2017), ‘Brokers, Dual-role Mediators and Professional Interpreters: A Discourse-based Examination of Mediated Speech and the Roles that Linguistic Mediators Enact’, The Translator, 23 (2): 197–216.
Horváth, I., ed. (2016), The Modern Translator and Interpreter, Budapest: Eötvös University Press.
Inghilleri, M. (2005), ‘Mediating Zones of Uncertainty: Interpreter Agency, the Interpreting Habitus and Political Asylum Adjudication’, The Translator, 11 (1): 69–85.
Inghilleri, M. (2008), ‘The Ethical Task of the Translator in the Geo-Political Arena: From Iraq to Guantánamo Bay’, Translation Studies, 1 (2): 212–23.
ISO 13611 (2014), Interpreting – Guidelines for Community Interpreting. International Standard, 1st edn, Geneva: ISO copyright office.
Jones, R. (2014), ‘Interpreting: A Communication Profession in a World of Non-Communication’, The AIIC Webzine, 65. Available online: http://aiic.net/p/6990 (accessed 2 October 2017).
Kalina, S. (2006), ‘Zur Dokumentation von Massnahmen der Qualitätssicherung beim Konferenzdolmetschen’, in C. Heine, K. Schubert and H. Gerzymisch-Arbogast (eds), Translation Theory and Methodology, Jahrbuch Übersetzen und Dolmetschen, 253–68, Tübingen: Narr.
Kalina, S. (2015), ‘Ethical Challenges in Different Interpreting Settings’, MonTI, special issue 2: 63–86.
Leanza, Y. (2005), ‘Roles of Community Interpreters in Pediatrics as Seen by Interpreters, Physicians and Researchers’, Interpreting, 7 (2): 167–92.
Leung Sin-Man, E. and J. Gibbons (2008), ‘Who is Responsible? Participant Roles in Legal Interpreting Cases’, Multilingua, 27: 177–91.
Llewellyn-Jones, P. and R. G. Lee (2014), Redefining the Role of the Community Interpreter: The Concept of Role-Space, Lincoln: SLI Press.
Massey, G. and R. Wieder (2019), ‘Quality Assurance in Translation and Corporate Communications: Exploring an Interdisciplinary Interface’, in E. Huertas Barros, S. Vandepitte and E. Iglesias Fernández (eds), Quality Assurance and Assessment Practices in Translation and Interpreting. Advances in Linguistics and Communication Studies Series, 57–87, Hershey: IGI Global.
Määttä, S. K. (2015), ‘Interpreting the Discourse of Reporting: The Case of Screening Interviews with Asylum Seekers and Police Interviews in Finland’, Translation & Interpreting, 7 (3): 21–35.
Määttä, S. K. (2017), ‘English as a Lingua Franca in Telephone Interpreting: Representations and Linguistic Justice’, The Interpreters’ Newsletter, 22: 39–56.
Mead, G. H. and C. W. Morris (1965), Mind, Self, and Society from the Standpoint of a Social Behaviorist, Chicago/London: University of Chicago Press.
Merlini, R. (2015), ‘Empathy: A “Zone of Uncertainty” in Mediated Healthcare Practice’, Cultus, 8: 27–49.
Merlini, R. and M. Gatti (2015), ‘Empathy in Healthcare Interpreting: Going beyond the Notion of Role’, The Interpreters’ Newsletter, 20: 139–60.
Monacelli, C. (2009), Self-Preservation in Simultaneous Interpreting. Surviving the Role, Amsterdam: John Benjamins.
Moser-Mercer, B. (2015), ‘Interpreting in Conflict Zones’, in H. Mikkelson and R. Jourdenais (eds), The Routledge Handbook of Interpreting, 302–16, London: Routledge.
Nakane, I. (2009), ‘The Myth of an Invisible Mediator: An Australian Case Study of English-Japanese Police Interpreting’, Portal, 6 (1): 1–16.
Niska, H. (2002), ‘Community Interpreter Training. Past, Present, Future’, in G. Garzone and M. Viezzi (eds), Interpreting in the 21st Century: Challenges and Opportunities, 133–44, Amsterdam/Philadelphia: John Benjamins.
Ozolins, U. (2014), ‘Rewriting the AUSIT Code of Ethics – Principles, Practice, Dispute’, Babel, 60 (3): 347–70.
Ozolins, U. (2015), ‘Ethics and the Role of the Interpreter’, in H. Mikkelson and R. Jourdenais (eds), The Routledge Handbook of Interpreting, 319–36, London/New York: Routledge.
Ozolins, U. and S. B. Hale (2009), ‘Introduction. Quality in Interpreting: A Shared Responsibility’, in S. B. Hale, U. Ozolins and L. Stern (eds), The Critical Link 5. Quality in Interpreting – A Shared Responsibility, 1–10, Amsterdam: John Benjamins.
Pöchhacker, F. (1994), Simultandolmetschen als komplexes Handeln, Tübingen: Gunter Narr.
Pöchhacker, F. (2015), ‘Evolution of Interpreting Research’, in H. Mikkelson and R. Jourdenais (eds), The Routledge Handbook of Interpreting, 62–76, London/New York: Routledge.
Pöchhacker, F. (2016), Introducing Interpreting Studies, 2nd edn, Amsterdam/New York: John Benjamins.
Pöchhacker, F. and W. Kolb (2009), ‘Interpreting for the Record: A Case Study of Asylum Review Hearings’, in S. B. Hale, U. Ozolins and L. Stern (eds), The Critical Link 5. Quality in Interpreting – A Shared Responsibility, 119–34, Amsterdam: John Benjamins.
Pöllabauer, S. (2007), ‘Interpreting in Asylum Hearings: Issues of Saving Face’, in C. Wadensjö, B. Englund Dimitrova and A.-L. Nilsson (eds), The Critical Link 4. Professionalisation of Interpreting in the Community. Selected Papers from the 4th International Conference on Interpreting in Legal, Health and Social Service Settings, Stockholm, Sweden, 20–23 May 2004, 39–52, Amsterdam: John Benjamins.
Prunč, E. (2012), Entwicklungslinien der Translationswissenschaft, Berlin: Frank & Timme.
Reithofer, K. (2013), ‘Comparing Modes of Communication. The Effect of English as a Lingua Franca vs. Interpreting’, Interpreting, 15 (1): 48–73.
Robbins, P. and M. Aydede, eds (2009), The Cambridge Handbook of Situated Cognition, Cambridge: Cambridge University Press.
Roy, C. B. (2000), Interpreting as a Discourse Process, New York/Oxford: Oxford University Press.
Sasso, A. (2017), ‘A New Paradigm for Language Services: Occupy the Marketplace’, Plenary Presentation, 6th International Conference on PSIT/Community Interpreting and Translation, Universidad de Alcalá, Alcalá de Henares, Spain. Available online: https://criticallink.org/2017-8-9-a-new-paradigm-for-languageservices-occupy-the-marketplace/ (accessed 7 April 2018).
Schäffner, C., ed. (1999), Translation and Norms, Clevedon: Multilingual Matters.
Shlesinger, M. (1989), ‘Extending the Theory of Translation to Interpretation: Norms as a Case in Point’, Target, 1 (1): 111–15.
Sleptsova, M., G. Hofer, M. Eggler, P. Grossman, N. Morina, M. Schick, M.-L. Daly, I. Weber, O. Kocagöncü and W. A. Langewitz (2015), ‘Wie verstehen Dolmetscher ihre Rolle in medizinischen Konsultationen und wie verhalten sie sich konkret in der Praxis?’, Psychotherapie, Psychosomatik und Medizinische Psychologie, 65: 363–9.
Tate, G. and G. H. Turner (2002), ‘The Code and the Culture: Sign Language Interpreting – In Search of the New Breed’s Ethics’, in F. Pöchhacker and M. Shlesinger (eds), The Interpreting Studies Reader, 372–83, London/New York: Routledge.
Tripepi Winteringham, S. (2012), ‘English for Special Purposes Used by and for Non-native English-Speaking Interlocutors: The Interpreter’s Role and Responsibility’, in C. J. Kellett Bidoli (ed.), Interpreting across Genres: Multiple Research Perspectives, 141–51, Trieste: Edizioni Università di Trieste.
Turner, G. (2007), ‘Professionalisation of Interpreting with the Community: Refining the Model’, in C. Wadensjö, B. Englund Dimitrova and A.-L. Nilsson (eds), The Critical Link 4. Professionalisation of Interpreting in the Community. Selected Papers from the 4th International Conference on Interpreting in Legal, Health and Social Service Settings, Stockholm, Sweden, 20–23 May 2004, 181–92, Amsterdam: John Benjamins.
Van Vaerenbergh, L. (2019), ‘Ethics and Good Practice in Interpreting (Mental) Health Care’, in C. Hohenstein and M. Lévy-Tödter (eds), Multilingual Healthcare: A Global View on Communicative Challenges, Heidelberg/Berlin: Springer (forthcoming).
Wadensjö, C. (1995), ‘Dialogue Interpreting and the Distribution of Responsibility’, Hermes: Journal of Language and Communication Studies, 14: 111–29.
Wadensjö, C. (1998), Interpreting as Interaction, London/New York: Longman.
Zwischenberger, C. (2015), ‘Simultaneous Conference Interpreting and a Supernorm That Governs It All’, Meta, 60 (1): 90–111.
Zwischenberger, C. (2017), ‘Professional Self-Perception of the Social Role of Conference Interpreters’, in M. Biagini, M. S. Boyd and C. Monacelli (eds), The Changing Role of the Interpreter – Contextualising Norms, Ethics and Quality Standards, 52–73, London/New York: Routledge.

6

Non-professional interpreting and translation (NPIT)

Claudia Angelelli

1. Introduction

The term ‘non-professional interpreter/translator’ (NPIT) refers to an occupational group whose members are perceived as not possessing the qualifications or skills required for the job generally performed by professional interpreters/translators. ‘Professional interpreters/translators’ can be defined as individuals who (1) hold a degree or certification, (2) earn a living performing translation/interpreting (T&I), or both. In reality, however, defining professionalism in translation or interpreting is much more complex (Angelelli 2005). We return to this point in Section 3.

In addition to ‘non-professional’, several other terms (modifiers) are used to refer to this occupational group, including ad hoc, bilingual, volunteer, untrained, non-certified, native, naïve or natural translators/interpreters.1 Terms used to refer to NPITs carry different connotations. For example, qualifiers such as ‘naïve’, ‘untrained’ or ‘non-certified’ suggest that these individuals have had no formal education in T&I and/or have not obtained certification. The qualifier ‘volunteer’ implies that the person is not receiving payment in return for the services rendered (Evrin and Meyer 2016). The use of ‘ad hoc’ suggests a person who may not be qualified and may not engage in T&I frequently. In contrast, in healthcare settings, these NPITs can be called ‘dual-role translators/interpreters’. ‘Dual role’ in this context means that staff members working in a healthcare organization (e.g. as a lab technician or a nurse), and who are bilingual, are asked to perform T&I tasks for patients with whom they share a language (Angelelli 2004b). This is in addition to performing the work
for which they are hired (Hlavac 2017). Another qualifier used in healthcare is ‘lay’ (Hlavac 2011). While the term ‘native’, coined by Toury (1984), refers to individuals who, having had no formal education in T&I, managed to pick up the skills to translate like professionals, calling someone a ‘natural translator/interpreter’ (Harris and Sherwood 1978; Harris 2009) alludes to the perceived natural ability a bilingual person may have to translate or interpret (see further discussion in Section 2). This perception is at the root of the long-standing debate guided by questions such as ‘Are translators/interpreters born or made?’ or ‘Is it necessary to teach translation/interpreting to bilinguals?’ These questions are closely related to the topic of NPITs, and thus the term ‘bilingual’ is also used to refer to them.

Research, however, has demonstrated unequivocally that translation and interpreting are not by-products of bilingualism, and that not all bilinguals can successfully translate or interpret at a non-professional or professional level. Bilinguals are not identical and cannot be subsumed under a single standard (Valdés and Figueroa 1994: 7). Bilingualism is not a monolithic construct, and the challenges and opportunities that result from overlooking this fact are especially salient in the making or hiring of translators, interpreters or any type of language broker/mediator (Angelelli 2004b, 2010). The generalization that bilingual ability equates to the ability to translate or interpret is one specific area in which further collaboration between industry and research could benefit all. We return to this point in Sections 2 and 3.

Today, as in the past, geographic displacement of large or small groups of people (or even individuals) has linguistic consequences.
Whether because of political upheaval, natural disaster, migration, relocation, trade, commerce, tourism or the need for education or healthcare, language contact and the human need to communicate have always called for language support to enable communication among people who do not share a language. Today, unlike in the past, technological advancement and globalization have increased the ability of diverse language users to communicate remotely. This increase in communication has yielded a vast amount of content to be translated/interpreted, far surpassing the bandwidth of professional translators/interpreters. As a result, the need for reliable human or machine language support in the form of translation, interpreting and mediation in general has grown. Currently, linguistically diverse communicative needs are met with varying degrees of success by research and industry working together and/or separately.

Non-professional Interpreting and Translation (NPIT)

117

Some examples of successful undertakings between researchers and industry can be grouped around the following issues:

1. testing and assessment: identification and testing of skills or aptitudes; design of measurement instruments (e.g. tests to certify or to hire translators or interpreters; validation studies of existing tests); design and/or implementation of more robust and reliable ways of testing abilities (e.g. Angelelli and Jacobson 2009; Stansfield and Hewitt 2005; Sawyer et al. 2002)
2. teaching and learning material design: enhancing the authenticity of teaching materials by using naturalistic data; targeting industry needs with tailor-made materials (e.g. the use of authentic recorded interactions, or of machine output, to teach/test interpreting) (Winston and Molinowski 2013; Colina and Angelelli 2015; Angelelli 2006; SHIFT project2)
3. development of technologies that enhance the translator/interpreter working environment (e.g. commercial and non-commercial translation memory software)
4. development of technologies that overlap with human translator/interpreter output to varying degrees of accuracy (e.g. machine-translation software and voice-recognition telephone applications)
5. development of international standards (e.g. ISO 13611: 2014 Guidelines for community/public-service interpreting; ISO 20109: 2016 Simultaneous Interpreting Equipment: Requirements; ISO 18587: 2017 Translation Services: Requirements. Post-editing of machine-translation output; and ISO 11669: 2012 Translation Projects: General Guidance) and of policy (e.g. the British Sign Language (Scotland) Bill)

The success of these projects lies partly in the sharing of expertise and data between academic researchers and industry. It would have been more difficult for either to have succeeded without the other.
However, to date, industry and research have not had the opportunity to discuss and work together directly on NPITs, their expertise, their needs for professional development, the quality of their work and its impact on the communicative needs of a linguistically diverse population. The truth is that we do not have substantial data about NPITs’ participation in the marketplace. We also do not know whether, in situations where NPITs perform as translators or interpreters (whether in a translation agency, an NGO, a telephone company, a school, a store or a hospital), they work as freelancers or staff members, or whether they work part-time or full-time. In addition, in most cases, we do not know what is required of NPITs to perform as translators or interpreters. What skills and knowledge do they have to exhibit in order to receive an assignment or get a job? Are their skills and knowledge measured or taken at face value? Nor do we know who – industry or academic researchers – is better prepared/positioned to gather and analyse these data. What we do know is that by exchanging information, sharing data and expertise, and creating partnerships between industry and academic researchers, the probability of finding more and better answers to these and other questions will increase.

Today, as in the past, societies are not able to meet the increasing communicative needs of linguistically diverse people. Today, unlike in the past, we know more about quality, ethics and responsibility in language service provision, be it in the form of translation, interpreting or other forms of cultural/language mediation/brokering. More specifically, we know that there are different expectations/benchmarks of quality, which would seem to open the door for NPITs, not unlike the way the door has been opened for machine translation. In addition, today we have more technological means to meet human communicative needs across languages and cultures. Furthermore, the area of inquiry of NPIT is no longer perceived as the poor relative of translation and interpreting studies (Antonini et al. 2017: 2). As stated by Evrin and Meyer (2016: 1) in their opening editorial of the special issue on Non-professional Interpreting and Translation of the European Journal of Applied Linguistics: ‘Over the last ten years, non-professional interpreting and translation – perceived as the study of unpaid translation practices – has become a field of research in its own right.’ All of the above means that today, more than ever, it is imperative to enhance existing collaboration between academic research and industry.
In this way, we can build on our strengths as we complement each other. The presence of NPITs in the translation/interpreting sector is an important issue that cannot be taken lightly. It impacts industry and clients/users at all levels, and it also affects the professional translator and interpreter workforce. We should not turn a blind eye to the presence of NPITs in the marketplace. Instead, we should enhance our collaboration to make sure we have a shared understanding of, and a commitment to guaranteeing, appropriate levels of expertise, feasible allocation of tasks across those levels of expertise, quality of outputs and services, levels of compensation and, above all, ethics and responsibilities towards end users, workers and clients. By increasing collaboration in understanding and addressing these issues, we can find better solutions to communicative problems. These solutions will benefit all stakeholders.

2.  Research focal points

In this section, we situate NPIT research in the academic landscape by presenting a brief overview of relevant research findings in specific areas, and we speculate about pertinent directions for further research in NPIT. The study of NPIT, having achieved the status of a field of inquiry in its own right (Evrin and Meyer 2016; Antonini et al. 2017), has faced struggles similar to those of other emerging fields, such as community interpreting (see, for example, Roberts 1994). In addition to being incipient, the field of inquiry on NPIT is interdisciplinary. Research findings from bilingualism (Valdés and Figueroa 1994) and bilinguality (Hamers and Blanc 2000), cognitive psychology (Malakoff and Hakuta 1991; Bialystok and Hakuta 1999), cultural/area studies (Orellana 2003; Buriel, Love and De Ment 2006), education (Valdés, Chávez and Angelelli 2000), sociolinguistics (Zentella 1997; Valdés and Angelelli 2003), and translation and interpreting studies (Angelelli 2010; Colina 2008; Napier 2017), to name just a few, have been central to our understanding of the knowledge, skills and limitations found in NPITs. They have contributed immensely to the construction of NPIT as a field of inquiry in its own right. When scholars from the various disciplines mentioned above conducted their studies, their goal was not necessarily to study NPITs as much as to study another phenomenon within which facets of what is now defined as non-professional interpreting and translation occurred. In so doing, they described communicative situations facilitated by NPITs without making NPITs the focus of their research (Evrin and Meyer 2016: 1).

2.1.  NPITs and bilingualism

Being a translator/interpreter and being bilingual are not one and the same. While all translators/interpreters are bilinguals, not all bilinguals are translators or interpreters. Translation and interpreting call for acquired knowledge, abilities and skills. The interaction of acquired knowledge, abilities and skills is clearly described in existing competence models, such as the PACTE (2008) or EMT models (Chodkiewicz 2012). Being bilingual does not necessarily encompass the ability to process and analyse information, or to render it appropriately while monitoring one’s own production. While research has unequivocally shown that translation and interpreting are not by-products of bilingualism, and that translators and interpreters are made rather than born (Valdés, Chávez and Angelelli 2000), the folk belief to the contrary seems to persist. This is a problem that needs to be addressed, especially as we are mindful of the risks of taking abilities at face value.

When studying bilingualism and T&I in T&I classes and in the workplace, Angelelli (2010) discusses two types of bilinguals, ‘elective’ and ‘circumstantial’, based on Valdés and Figueroa’s typology (1994: 12). The differences between these types of bilinguals are important. The elective bilingual generally becomes one through formal education (e.g. by taking language courses and studying language and culture), thus becoming literate in the language of choice. This does not rule out that elective bilingualism may also be the consequence of elective exposure (e.g. travelling or spending time in the country where the other (elective) language is spoken) and/or, in fewer cases, of being brought up in a household with two languages and schooled in both. A circumstantial bilingual, by contrast, does not choose to learn a language. Rather, as a result of life circumstances such as migration or displacement due to natural disasters or political upheaval, circumstantial bilinguals learn a new language in order to survive and fully participate in the new society, as they find themselves in a place where their first language is no longer enough to meet their communicative needs (Hua and Costigan 2012; Orellana, Dorner and Pulido 2003; Valdés, Chávez and Angelelli 2000). Circumstantial bilinguals differ in their abilities in the two languages. Some have been schooled in both languages, having joined the host society at an age when they were already literate in their first language, and they continue to develop, in the language of the host country, the language repertoires required in work-related situations. Others may not pursue further studies in their heritage language once they are in the host country, and therefore their heritage language repertoire may be more limited (Valdés and Geoffrion-Vinci 1998).
Beyond the conditions in which languages are acquired, important differences between elective and circumstantial bilinguals are evidenced by the relationship between the bilinguals and the rest of society. An elective bilingual becomes bilingual as an individual. Circumstantial bilinguals are usually members of a group of people who must become bilingual in order to be part of, and take part in, the society around them. Thus, it is not by chance that circumstantial bilinguals who are schooled in the language of the host society and succeed in retaining their heritage language grow up perceiving their bilingualism and bilinguality (Hamers and Blanc 2000) as a vehicle to help other members of their community to communicate. They see themselves as being in a position to provide linguistic access to, and advocate for, those members who do not have linguistic access and, thus, cannot advocate for themselves due to a language barrier.

In addition, these language brokers, as they mediate interaction between members of communities that have come into cultural contact (Alvarez 2014; Angelelli 2016, 2010; Auer and Lei 2007; Borrero 2006; Tse 1996), are engaged in a practice that is very different from that carried out by ‘ordinary’ bilinguals3 (Valdés and Angelelli 2003). Ordinary bilinguals choose one or the other of their two languages to communicate, depending on a complex set of factors including setting, interlocutors, role relationships, topics and situations; bilinguals engaged in language brokering (T&I or mediation) do not choose. In other words, when bilinguals are talking to someone (rather than interpreting to/for someone), they address that person in the way they choose (e.g. using formal or casual language, converging or diverging). They are aware of their role vis-à-vis the other person and of any power differentials between them (they may use more or less deferential language with a senior member of the family or a line manager at work than with a peer, for example). They decide in which of the two languages they will start a conversation; it is not decided for them. They decide what they want to say, and whether or not to answer. When bilinguals are interpreting for another person, by contrast, they can do none of this. They have to follow the choices made by the monolingual interlocutors. They have to accommodate their language, linguistic repertoire, role and so on to what the communicative situation requires of them. Since not all bilinguals are alike, and since different types of bilinguals populate translation and interpreting workplaces as well as classrooms (Angelelli 2010; Hlavac 2017), it is essential to bear these differences in mind when identifying, training/educating or assessing NPITs.
Their different backgrounds, life and linguistic experiences, as well as their expectations and perceptions of their roles and of their own abilities may impact their performance at the workplace (Angelelli 2004a, 2010).

2.2.  NPITs and sociological issues

Until the sociological turn in translation and interpreting studies (Wolf and Fukari 2007; Angelelli 2010), there had been little discussion of translators and interpreters as an occupational group (Sela-Sheffy and Shlesinger 2009, 2010). In addition, as with the discussion of bilingualism and NPITs above, discussions of status, identity, professionalism and role, while not studying NPITs specifically, have nonetheless revealed important information about this group.

Certification, status, professionalism and professional identity have been studied by various researchers from different viewpoints, with different methodologies and varying degrees of scientific rigour. To investigate the point of view of recruiters, Chan (2009) conducted an experiment involving fictitious résumés, followed by interviews with eight translation recruiters and critical discourse analysis of CVs. The results show that recruiters base their decisions on the possession of a university degree (i.e. formal educational qualifications) and of relevant work experience. Furthermore, they view translation certification as an ‘add-on’ and prefer an academic degree to translator certification (2009: 154). This perception aligns with statements such as ‘only a few individuals can perform interpreting tasks without education in the field’ (Gile 1995). While there is no intention here to challenge the importance of education, the results of empirical studies paint a more diverse picture.

In the context of research to design and administer the IPRI (Interpreter’s Interpersonal Role Inventory), a valid and reliable psychometric instrument measuring interpreters’ perceptions of their role and beliefs about their own practice, Angelelli (2004b: 67) found the following: ‘Of the 293 (100%) participating interpreters working actively in Canada, Mexico and the United States, 14% attended certification/course/programs and 13% attended graduate/course programs in translation/interpreting. The remaining 73% reported having participated in less-formal types of educational opportunities such as workshops or on-the-job training.’ These results, obtained from random and stratified sampling, indicate that the majority of the participants interpreting in hospitals, courts of law and business-related conferences in these three countries vary widely in their level of T&I-specific educational background. Based on the classifications described in Section 1, these respondents are NPITs.
These findings highlight the blurry lines separating the non-professional from the professional, which may, at least in part, explain the neglect of research on NPITs per se. A replication of this study conducted in hospitals in Switzerland shows a similar trend (Albl-Mikasa et al. 2015). The same holds true for rural areas of Australia where indigenous languages prevail (Angelelli and Slatyer 2012). This raises questions not only about quality and qualifications but also about the presence or absence of educational opportunities for NPITs who engage in T&I as their main occupation. We return to this point in Section 3.

In addition to studies on the role of interpreters (professional and non-professional) in North America (Angelelli 2004b), role and professionalism have been studied in Japan by Torikai (2010). Utilizing historiography, her study focused on diplomatic and conference interpreters in the period following the Second World War. She found that, in the narratives of life-story interviews, interpreters saw themselves as essential participants in intercultural communication. In their research on sign language, Swabey and Gajewski Mickelson (2008) provide a perspective on forty years of the profession in the United States. Their analysis includes an overview of the various ways interpreters have been viewed in the field, including helper, conduit, communication facilitator, bilingual-bicultural specialist and co-participant. Also focusing on sign language interpreting, Grbic (2010) uses boundary work to discuss the construction of professionals and non-professionals and the differences between these two groups, while Bahadir (2010: 122) discusses the role of interpreters in settings like detention camps, refugee camps and prisons. She reminds us of the close connection between professionalism, power and control by stating that ‘interpreters play a participant role in the interplay of power as an active performer. Their gaze disrupts and their voice intervenes.’

Studying translators’ and interpreters’ perceptions of their working world, their mindset and the impact of university education (translation studies) on their world, Katan (2009) administered an online survey to translators/interpreters. No information on the validity or reliability of the instrument is available and, while the author makes no scientific claims based on the sampling (a convenience sample mostly from Europe), the patterns of responses and comments suggest a shared view of a translator’s world in which university training has had little impact. This group of respondents showed relatively little interest in the university itself in comparison with lifelong learning, with most emphasis placed on practice and self-development.
Members of the group feel themselves to be ‘professional’ due to their specialized knowledge and abilities. However, their professionalism is mainly limited to their responsibility to the text itself, and there is relatively little interest in the wider context. (Katan 2009: 187)

Chan’s results on the perceived benefits of certification to the profession (regarded as an add-on), discussed above, are similar to those obtained by Setton and Liangliang (2009) in their work on perceptions of role, status and professional identity among translators and interpreters with Chinese in their language combination in Shanghai and Taipei.

2.3.  NPITs and quality

The issue of quality and NPITs (again, not referred to by this name), along with the issue of access to quality language provision for speakers of non-societal languages in a multilingual society, has been at the centre of many debates. Evidence of this includes (1) the existence of legislation banning the use of children as family ‘translators’ in healthcare institutions in the United States (Yee, Diaz and Spitzer 2003) and (2) the increase in professional and academic publications reporting on the use of NPITs (although not under this label) in hospitals (Cambridge 1999; Flores et al. 2003; Marcus 2003), as well as in healthcare (Meyer 2012; Baraldi and Gavioli 2012), educational (Orellana and Reynolds 2008; Valdés 2003), police (Angelelli 2015; Berk-Seligson 2011) and legal settings.

A professional concern is to deliver quality services, and quality in translation and interpreting products has long been a concern in translation and interpreting studies. As stated in previous sub-sections, studies on quality have not focused directly on NPITs either; we learn about NPIT by taking a closer look at the participants and their qualifications in specific studies and settings. Among studies on quality that have focused on the perspective of clients in medical settings, we identify the participation of NPITs in García, Roy and Okada (2004), Lee et al. (2002, 2006) and Morales et al. (1999), to name just a few. Other studies on increasing productivity, or on the advantages and disadvantages of technological changes in translator/interpreter work habits, have also reported on the presence of lay (Hlavac 2011) or voluntary translators and interpreters.

As is evident from this brief review, contributions from research to the industry are many and varied in their focus. At the macro level, studies on beliefs and perceptions have shed light on translators’ and interpreters’ agency and role. At the micro level, studies of register, speech markers, reported speech and pauses have examined how features observed in monolingual interactions are managed in remote bilingual interpreter-mediated interactions (Braun 2012).
In addition to revealing differences between monolingual and interlingual remote discourse, research has unequivocally shown that interactions are influenced by the setting (e.g. a school, a court of law, a hospital or a business meeting), as interpreting is a situated practice (Angelelli 2008), and that the participants’ and the interpreters’ conversational behaviour changes in remote discourse (Braun 2012; Angelelli 2004a).

3.  Informing research through the industry

As previously discussed, the need for and presence of NPITs in translation and interpreting agencies/companies, in healthcare organizations, schools and social-services offices, and in companies offering remote interpreting highlights the urgency of redesigning translation and interpreting curricula to accommodate the educational needs of both professional and non-professional translators/interpreters. In many areas of the world where the population is extremely diverse, where diversity has grown more rapidly than the development of educational programmes preparing for T&I in certain sectors (e.g. community, educational and healthcare), or where educational programmes are not available for certain language combinations, organizations as well as individuals who want to put their talents to the service of linguistic minorities have grappled with important questions: How does one become a professional translator/interpreter if one’s language combination is not taught/certified? Where do individuals who want to help linguistic minorities communicate with speakers of the societal language get their education? What makes a translator a professional if education is not available? Or, what if the only education available is for conference interpreting? Is it simply a matter of experience in the field? What is the difference between a gifted bilingual and a professional interpreter? Is it education in the field, or is it just membership in a professional organization? Can passing a test guarantee professionalism? Or is a professional simply an individual with a degree who can demand higher fees? (Angelelli 2005)

Curriculum review is a healthy exercise performed frequently at universities in order to address changes (e.g. driven by technological advances or by population or market needs), as well as supply and demand in workplaces. This is an area where industry can inform not only academic research but also academic education and training.
If the curriculum review process could be industry informed,4 then industry and academia could together decide on the feasibility of research methods such as site observations, interviews, case studies and even ethnographies (see, for example, Asare 2015 for translation and Angelelli 2004a for medical interpreting). In the context of the increase of NPITs in the workplace, a curriculum review means not only changing programmes to prepare students to work with different linguistic groups or different technologies, but, specifically, also accounting for a student population that is not the mainstream one, that is, not the elective bilingual with no previous experience now pursuing a career in translation, interpreting or language mediation. Catering to NPITs’ needs requires developing specific executive-style alternatives that educate for a specific purpose. NPITs constitute a specific case of bilinguals performing a task for which they may not have the knowledge, or on which they may not have had the opportunity to reflect. An educational programme tailored to NPITs therefore means educating them not only in skills and tools but, most importantly, in how to identify their own strengths and limitations, learning strategies and techniques, rather than ‘training’ them as if one size fits all. This is another area in which industry could inform academia as to the specific competencies on which academia should focus to cater to NPITs’ educational/training needs. By conducting focused observations of tasks and documenting the reality of NPITs at work, industry could develop, or help to develop, an empirically based competence model for NPITs.

Various existing models of knowledge transfer (e.g. a service-learning component added to existing courses, or internships) have been productive for T&I students and for informing researchers about new trends to be studied. Many of these have had NPITs at their core. For example, having university students participate in real-life situations as they translate or interpret under supervised conditions in NGOs, or observing interactions in legal or healthcare organizations, has helped them understand professionalism and call into question non-professional behaviours. Industry could play an even more central role here by, for example, documenting tasks that require different degrees of expertise. At the same time, these models of service-learning or internships have helped students identify the behaviours of managers/supervisors in the organizations as they deal with languages of limited diffusion, for which there is generally a gap in educational possibilities. In addition, when managers coach NPITs to perform a task with tight deadlines, or when a team is assembled but little time/attention goes into consistency across languages, conceptualizations of quality need to be revisited. In contexts of NPITs’ on-the-job training, it would be important for industry to provide researchers with realistic gauges of quality (and productivity) if the translators/interpreters performing are, in fact, non-professionals.
When students have internships in organizations like banks, government institutions and healthcare facilities, they learn many of the tasks related to translation and interpreting which, given curriculum constraints, cannot always be covered in the classroom. From organizing events at which interpreting will be necessary, drafting speeches and anticipating questions, to editing social-media messages, students learn an array of T&I-related tasks, as well as the value of time management and teamwork and of working against real deadlines in an authentic setting. Students report observations and discuss concerns with advisers and supervisors. Among other things, students analyse the differences they see among their peers who are schooled in T&I and those who have language proficiency in a given combination but a different field of study (e.g. organizational behaviour or human-resource management) and who are asked to perform T&I tasks. If industry could gather and share these data with academia, then we could empirically operationalize the fields in which NPITs are involved and analyse the skills required. Discussions between on-site supervisors and student placement advisers generate ideas that result in projects that may bring industry and research together. Sometimes these projects lead to impact studies/cases. Together, industry and research can work to find solutions to problems such as how to monitor quality as productivity is enhanced, and vice versa. This is something that academic researchers are equipped to do in collaboration with industry, based on existing approaches and models.

While the model of internships or service-learning components has proven very helpful in bringing reality into the classroom for T&I students, as well as in providing translation/interpreting to sectors of the population that might not otherwise have been able to afford it, these opportunities have helped some, but not all. In other words, industry may have gained from getting a service, the end user may have gained from having his/her need addressed and the T&I students may have gained from exposure to an authentic task in the workplace. If, however, the quality of the service is questionable, or if there is no opportunity to reflect on what could be improved, how have the NPITs gained from this working experience? Unlike the T&I student, who has plenty of opportunities to learn and reflect, the NPIT does not have an educational component allowing for activities such as critical reflection, self-monitoring of performance, error analysis and so forth to support his/her practice. This is one among the many questions that need to be asked by both industry and academic researchers, as the education of NPITs, whether online or in executive mode, should not be neglected.

As a result of collaborative projects between research and industry, we have come to understand a little better the contexts in which NPITs work.
Sometimes they fill a gap, other times they are in competition with professional translators/ interpreters. When this occurs, we still do not have information to understand if competition is due to lower compensation received by NPITs (e.g. in professional meetings as well as in meetings specifically focusing on NPITs such as NPIT 2014 and 2016, it is not unusual to hear angry T&I professionals disqualifying an occupational group perceived as cutting down market conditions), or to the fact that NPITs may accept to perform in a way that, even when it may increase productivity, no information is sought as to how an increase in productivity relates to quality increase or decrease. It has been suggested that NPITs may work longer hours with inappropriate breaks. As a result of having less downtime, attention decreases. As attention decreases, performance decreases to the point at which it may jeopardize quality unintentionally. It is therefore of


The Bloomsbury Companion to Language Industry Studies

the utmost importance to establish empirically (beyond perceptions) whether these issues attributed to NPITs are perceived or real, and to act upon the findings. Researchers also need information about the types of issues clients and end users face when quality is jeopardized or services cannot be delivered. So far, we only have access to information about NPITs’ performance when something goes wrong (e.g. Flores et al. 2003; García, Roy and Okada 2004; Smith 2013). Representatives of industry (agencies hiring NPITs, and NPITs themselves, whether they work as freelancers or staff members) generally dominate discussions on standards of practice or codes of ethics, as they have the means and time to participate in them. Therefore, important discussions tend at times to be informed mostly by individual experiences of current issues in specific settings or matters (e.g. the availability or non-availability of professionals for specific language combinations; the non-existence of educational opportunities; or expertise in post-editing and the use of technology in courthouses). A statement from a recent ISO standard on the requirements for working as a community/public-service interpreter illustrates this issue. As in many codes of ethics, statements made from theoretical assumptions may at times be in sharp contrast with reality. In this particular example, the interpreter service provider (ISP) needs to verify evidence of at least one of the following criteria before hiring an interpreter:

a) a recognized degree (e.g. BA or BSc) in interpreting from an institution of higher education, or a recognized educational certificate in community interpreting;
b) a recognized degree in any other field from an institution of higher education, plus two years of continuous experience in community interpreting or a relevant certificate from a recognized institution;
c) a certificate of competence in interpreting awarded by an appropriate government body or government-accredited body for this field, and proof of further qualifications or experience in community interpreting; or
d) five years of continuous experience in community interpreting in cases where (a) to (c) cannot reasonably be met (ISO 13611:2014).

As the reader can see, however, item (d) relies entirely on the assumption that an NPIT’s perceived language ability and experience are enough to perform interpreting assignments. This means that interpreting ability is taken at face value, as a by-product of bilingualism, and that a person’s proficiency in both languages is likewise taken at face value.

Non-professional Interpreting and Translation (NPIT)


To sum up, to date we do not have reliable information on NPITs working in the language industry. We do not have solid data on their educational or linguistic backgrounds, their responsibilities or their remuneration; mostly, we have speculation and personal anecdotes. Curriculum development for specific purposes requires a thorough analysis of learner needs as well as a market analysis. Given that NPITs all differ in their previous experience and in their abilities, the more information we can gather from them, the better courses or programmes for in-service professional development can be designed to meet their needs. This would benefit all stakeholders. Evidently, a serious discussion on translation and interpreting education between industry and research is long overdue. Confusing education with training will not take us far.

4. Informing the industry through research

As discussed in Section 2, research has shed light on several challenges and opportunities involved in accounting for, or ignoring, the presence of NPITs in the workplace. Findings from studies in both translation and interpreting show evidence of the mix of elective and circumstantial bilinguals populating work environments (e.g. in translation, Pérez-González and Susam-Sarajeva 2012; in legal interpreting, Angelelli 2015; Berk-Seligson 2010; in community/family interpreting, Angelelli 2016; Antonini et al. 2017; Flynn and van Doorslaer 2016; in education, Cirillo 2017; Valdés, Chávez and Angelelli 2000; in medicine, Angelelli 2004; Baraldi and Gavioli 2017; Davidson 2000; Martínez-Gómez 2016; Meyer 2012; Morales et al. 1999). These differences are important, and employers and agencies need to be mindful of them. Translation and interpreting are not by-products of bilingualism. Bilinguals hired to perform T&I tasks may therefore require educational support beyond on-the-job training in how to use equipment, how to use their voice in telephone interpreting or how to manage their gaze in video remote interpreting. Different types of bilinguals have had different experiences, especially in their understanding of professionalism and role. Academic research has a unique role to play in informing industry, raising awareness of the different types and categories of bilinguals and of how these may affect the work they do. NPITs bring with them (as we all do) different life experiences related both to language use (formal or informal register, language varieties) and to language brokering



for their family members or friends. Their upbringing greatly shapes how they understand different situations and what they expect as they provide (or receive) a service. Different conceptualizations, for example, of what service is (whether it is helping others, brokering, exercising or not exercising agency), or of what an interpreter does or does not do (e.g. setting or not setting boundaries between the professional and the personal roles), are key to understanding the beliefs and behaviours of professional and non-professional translators and interpreters (Angelelli 2004). These issues, once again, raise the question of who is best placed to explain the consequences of overlooking the beliefs and behaviours of professional translators/interpreters and NPITs. This is an area in which academic research could inform the industry with a concrete set of guidelines derived from empirical studies. Working in tandem, industry could provide further data on NPITs’ behaviours and beliefs identified while they perform specific tasks. Specifically, in the area of circumstantial bilinguals performing T&I tasks, research findings have contributed to identifying, assessing and teaching to the talents of gifted circumstantial bilinguals who, with the appropriate educational support, could successfully pursue a career in language brokering, translation or interpreting (e.g. Valdés and Angelelli 2003; Angelelli, Enright and Valdés 2003). Several joint projects between industry and research have succeeded in implementing on-site educational opportunities for a bilingual task force engaging in language services. Based on their collaboration with Network Omni Multilingual Communications in the United States, Sawyer et al. (2002) report on the importance of valid and reliable testing and the development of teaching materials for telephone interpreting.
Focusing on both remote and face-to-face healthcare interpreting for Spanish, Hmong, Cantonese and English, Angelelli (2007) also reports on the design and piloting of an empirically driven audio- and video-based test to measure language proficiency and interpreter readiness to work in healthcare settings. This project was followed by a video medical interpreting event over 3G cellular networks in Massachusetts, which also brought together researchers and industry representatives. In Europe, collaborative projects such as AVIDICUS and SHIFT constitute examples of joint efforts between academia and industry. AVIDICUS examines the increasing use of video-mediated interpreting to reveal the strengths and weaknesses of existing arrangements. The use of technology in criminal justice contexts can only be of real value if all the elements involved can be consistently relied upon: the legal service interlocutors, the interpreters or



translators and the technology. AVIDICUS clearly shows that many of the issues surrounding legal videoconferencing are exacerbated when cultural and linguistic barriers (and, thereby, an interpreter) are added to the technological mediation of communication via a videoconference link. SHIFT, which aims to enhance the education of interpreters working remotely, brings together researchers and industry to analyse verbal and non-verbal features of remote communication. Building on the body of knowledge already produced on remote and on-site interpreting, and using authentic interactions to study discourse features, SHIFT uncovers the challenges and opportunities of remote interpreting along the lines of AVIDICUS. The analysis of naturally occurring data feeds into the design and content of teaching materials. It is important to point out that it is often non-professionals who take on these tasks. These materials provide more focused educational opportunities for interpreters working over the telephone or via teleconference. With a more qualified workforce, interpreting service providers will benefit, as will the end users of interpreting services, who will receive a better quality of service.

5. Concluding remarks

In this chapter, we have addressed the interaction between research and industry in relation to non-professional translators and interpreters. We have reviewed the terms used to refer to NPITs in the workforce, discussed the tension between bilingualism and T&I ability, and looked at issues of identity, status, professionalism and certification. Ultimately, it is the joint and collaborative efforts between industry and research described here that can advance our understanding of NPITs and their role in language service provision.

Notes

1. For a historical review of research in the NPIT field, see Antonini et al. 2017: 4–6.
2. https://www.shiftinorality.eu
3. Valdés and Angelelli (2003) use the term ‘ordinary’ bilinguals for individuals who acquire their two languages in bilingual communities and who alternately use their two languages to interact with members of that community. The literature on bilingualism has referred to such bilinguals as folk bilinguals, natural bilinguals and, more recently, circumstantial bilinguals.
4. Examples of industry and academia coming together include service-learning components of translation/interpreting courses, internships and apprenticeship projects. See, for example, Angelelli 1998.

References

Albl-Mikasa, M., E. Glatz, G. Hofer and M. Sleptsova (2015), ‘Caution and Compliance in Medical Encounters: Non-interpretation of Hedges and Phatic Tokens’, Translation and Interpreting, 7 (3): 76–89.
Alvarez, S. (2014), ‘Translanguaging Tareas: Emergent Bilingual Youth as Language Brokers for Homework in Immigrant Families’, Language Arts, 91 (5): 326–39.
Angelelli, C. V. (2016), ‘Looking Back: A Study of (Ad Hoc) Family Interpreters’, European Journal of Applied Linguistics, 4 (1): 5–32.
Angelelli, C. V. (2015), ‘Justice for All? Issues Faced by Linguistic Minorities and Border Patrol Agents during Interpreted Arraignment Interviews’, in M. del Pozo Treviño and M. J. Blasco Mayor (guest eds), MonTI: Monografías de Traducción e Interpretación, Special Issue on Legal Interpreting, 7: 181–205.
Angelelli, C. V. (2010), ‘A Professional Ideology in the Making: Bilingual Youngsters Interpreting for Their Communities and the Notion of (No) Choice’, Translation and Interpreting Studies, 5 (1): 94–108.
Angelelli, C. V. (2008), ‘The Role of the Interpreter in the Healthcare Setting: A Plea for a Dialogue between Research and Practice’, in C. Valero Garcés and A. Martin (eds), Building Bridges: The Controversial Role of the Community Interpreter, 139–52, Amsterdam/Philadelphia: John Benjamins.
Angelelli, C. V. (2007), ‘Accommodating the Need for Medical Interpreters: The California Endowment Interpreter Testing Project’, The Translator, 13 (1): 63–82.
Angelelli, C. V. (2006), ‘Designing Curriculum for Healthcare Interpreter Education: A Principles Approach’, in C. Roy (ed.), New Approaches to Interpreter Education, 23–46, Washington, DC: Gallaudet University Press.
Angelelli, C. V. (2005), ‘Healthcare Interpreting Education: Are We Putting the Cart Before the Horse?’, The ATA Chronicle, 34 (11): 33–8, 55.
Angelelli, C. V. (2004a), Medical Interpreting and Cross-cultural Communication, Cambridge: Cambridge University Press.
Angelelli, C. V. (2004b), Revisiting the Interpreter’s Role: A Study of Conference, Court, and Medical Interpreters in Canada, Mexico and the United States, Amsterdam/Philadelphia: John Benjamins.
Angelelli, C. V. (1998), ‘A Service-Learning Component in a Translation Course: A Report from Stanford University’, in M. J. O’Keeffe (ed.), Proceedings of the 38th Annual Conference of the American Translators Association, 375–80, Alexandria, VA: American Translators Association.
Angelelli, C. V., K. Enright and G. Valdés (2003), Developing the Talents and Abilities of Linguistically Gifted Bilingual Students: Guidelines for Developing Curriculum at the High School Level, The National Center on the Gifted and Talented, University of Connecticut, University of Virginia, Yale University.
Angelelli, C. V. and H. Jacobson, eds (2009), Testing and Assessment in Translation and Interpreting Studies, Amsterdam/Philadelphia: John Benjamins.
Angelelli, C. V. and H. Slatyer (2012), ‘Exploring Australian Healthcare Interpreters’ Perceptions and Beliefs about Their Role’, paper presented at the ATISA Sixth Biennial Conference, South Padre Island, Texas, USA.
Antonini, R., L. Cirillo, L. Rossato and I. Torresi, eds (2017), Non-professional Interpreting and Translation: State of the Art and Future of an Emerging Field of Research, Amsterdam/Philadelphia: John Benjamins.
Asare, E. (2015), ‘Ethnography of Communication’, in C. V. Angelelli and B. Baer (eds), Researching Translation and Interpreting, 212–20, London/New York: Routledge.
Auer, P. and L. Wei, eds (2007), Handbook of Multilingualism and Multilingual Communication, Berlin/New York: Mouton de Gruyter.
Badahir, S. (2010), ‘The Task of the Interpreter in the Struggle of the Other for Empowerment: Mythical Utopia or Sine Qua Non of Professionalism’, Translation and Interpreting Studies, 5 (1): 124–40.
Baraldi, C. and L. Gavioli, eds (2017), Coordinating Participation in Dialogue Interpreting, Amsterdam: John Benjamins.
Berk-Seligson, S. (2010), The Bilingual Courtroom: Court Interpreters in the Judicial Process, 2nd edn, Chicago: Chicago University Press.
Berk-Seligson, S. (2011), ‘Negotiation and Communicative Accommodation in Bilingual Police Interrogations: A Critical Interactional Sociolinguistic Perspective’, International Journal of the Sociology of Language, 2011 (207): 29–58.
Bialystok, E. and K. Hakuta (1999), ‘Confounded Age: Linguistic and Cognitive Factors in Age Differences for Second Language Acquisition’, in D. Birdsong (ed.), Second Language Acquisition and the Critical Period Hypothesis, 161–82, London/New Jersey: Lawrence Erlbaum Associates.
Borrero, N. (2006), Bilingual Adolescents as Young Interpreters in Middle School: Impact on Ethnic Identity and Academic Achievement, PhD diss., Stanford University, Stanford, CA.
Braun, S. (2012), ‘Recommendations for the Use of Video-Mediated Interpreting in Criminal Proceedings’, in S. Braun and J. Taylor (eds), Videoconference and Remote Interpreting in Criminal Proceedings, 301–28, Cambridge/Antwerp: Intersentia.
Buriel, R., J. Love and T. De Ment (2006), ‘The Relationship of Language Brokering to Depression and Parent–Child Bonding among Latino Adolescents’, in M. Bornstein and L. Cote (eds), Acculturation, Parent–Child Relationships, and Child Development: Measurement and Development, 249–70, Mahwah, NJ: Lawrence Erlbaum Associates.
Cambridge, J. (1999), ‘Information Loss in Bilingual Medical Interviews through an Untrained Interpreter’, The Translator, 5 (2): 201–19.
Chan, A. (2009), ‘Effectiveness of Translator Certification as a Signaling Device: Views from the Translator Recruiters’, Translation and Interpreting Studies, 4 (2): 155–72.
Chodkiewicz, M. (2012), ‘The EMT Framework of Reference for Competences Applied to Translation: Perceptions by Professional and Student Translators’, Journal of Specialised Translation, 17: 37–54.
Cirillo, L. (2017), ‘Child Language Brokering in Private and Public Settings: Perspectives from Young Brokers and Their Teachers’, in R. Antonini, L. Cirillo, L. Rossato and I. Torresi (eds), Non-professional Interpreting and Translation: State of the Art and Future of an Emerging Field of Research, 295–314, Amsterdam/Philadelphia: John Benjamins.
Colina, S. (2008), ‘Translation Quality Evaluation: Empirical Evidence for a Functionalist Approach’, The Translator, 14 (1): 97–134.
Colina, S. and C. V. Angelelli (2015), ‘Translation and Interpreting Pedagogy’, in C. V. Angelelli and B. Baer (eds), Researching Translation and Interpreting, 108–17, London/New York: Routledge.
Davidson, B. (2000), ‘The Interpreter as Institutional Gatekeeper: The Social-Linguistic Role of Interpreters in Spanish–English Medical Discourse’, Journal of Sociolinguistics, 4: 305–79.
Evrin, F. and B. Meyer, eds (2016), ‘Non-professional Interpreting and Translation: Transnational Cultures in Focus’, European Journal of Applied Linguistics, 4 (1): 5–32.
Flores, G., M. Laws, S. J. Mayo, B. Zucherman, M. Abreu, L. Medina and E. J. Hardt (2003), ‘Errors in Medical Interpretation and Their Potential Clinical Consequences in Pediatric Encounters’, Pediatrics, 111 (1): 6–14.
Flynn, P. and L. van Doorslaer (2016), ‘City and Migration: A Crossroads for Non-institutionalized Translation’, European Journal of Applied Linguistics, 4 (1): 73–92.
García, E. A., L. C. Roy and P. J. Okada (2004), ‘A Comparison of the Influence of Hospital-trained, Ad Hoc, and Telephone Interpreters on Perceived Satisfaction of Limited English-proficient Parents Presenting to a Pediatric Emergency Department’, Pediatric Emergency Care, 20: 373–8.
Gile, D. (1995), Basic Concepts and Models for Interpreter and Translator Training, Amsterdam: John Benjamins.
Grbic, N. (2010), ‘“Boundary Work” as a Concept for Studying Professionalization Processes in the Interpreting Field’, Translation and Interpreting Studies, 5 (1): 109–23.
Hamers, J. and M. Blanc (2000), Bilingualism and Bilinguality, 2nd edn, Cambridge: Cambridge University Press.
Harris, B. (2009), ‘Unprofessional Translator’, Blogspot UK. Available online: www.unprofessionaltranslation.blogspot.com (accessed 31 December 2018).
Harris, B. and B. Sherwood (1978), ‘Translating as an Innate Skill’, in D. Gerver and H. W. Sinaiko (eds), Language Interpreting and Communication, 155–70, New York: Plenum Press.
Hlavac, J. (2011), ‘Sociolinguistic Profiles of Users and Providers of Lay and Professional Interpreting Services: The Experiences of a Recently Arrived Iraqi Language Community in Melbourne’, The International Journal for Translation and Interpreting Research, 3 (2): 1–32.
Hlavac, J. (2017), ‘Brokers, Dual-role Mediators and Professional Interpreters: A Discourse-based Examination of Mediated Speech and the Roles that Linguistic Mediators Enact’, The Translator, 23 (2): 197–216.
Hua, J. M. and C. L. Costigan (2012), ‘The Familial Context of Adolescent Language Brokering within Immigrant Chinese Families in Canada’, Journal of Youth and Adolescence, 41 (7): 894–906.
ISO 13611:2014, Guidelines for Community/Public-service Interpreting. Available online: https://www.iso.org/standard/54082.html (accessed 31 December 2018).
ISO 18587:2017, Translation Services – Post-editing of Machine Translation Output. Available online: https://www.iso.org/standard/62970.html?browse=tc (accessed 31 December 2018).
ISO 20109:2016, Simultaneous Interpreting – Equipment – Requirements, Geneva: ISO.
ISO/TS 11669:2012, Translation Projects – General Guidance. Available online: https://www.iso.org/standard/50687.html?browse=tc (accessed 31 December 2018).
Katan, D. (2009), ‘Occupation or Profession: A Survey of the Translator’s World’, Translation and Interpreting Studies, 4 (2): 187–209.
Lee, K. C., J. P. Winickoff, M. K. Kim, E. G. Campbell, J. R. Betancourt, E. R. Park, A. W. Maina and J. S. Weissman (2006), ‘Resident Physicians’ Use of Professional and Nonprofessional Interpreters: A National Survey’, JAMA, 296 (9): 1050–3.
Lee, L. J., H. A. Batal, J. H. Maselli and J. S. Kutner (2002), ‘Effect of Spanish Interpretation Method on Patient Satisfaction in Urban Walk-in Clinic’, Journal of General Internal Medicine, 17 (8): 641–6.
Malakoff, M. and K. Hakuta (1991), ‘Translation Skills and Metalinguistic Awareness in Bilinguals’, in E. Bialystok (ed.), Language Processing in Bilingual Children, 141–66, London: Cambridge University Press.
Marcus, E. (2003), ‘When a Patient Is Lost in the Translation’, New York Times, 8 April: F7. Available online: https://www.nytimes.com/2003/04/08/health/cases-when-a-patient-is-lost-in-the-translation.html (accessed 31 December 2018).
Martínez-Gómez, A. (2016), ‘Facing Face: Nonprofessional Interpreting in Prison Mental Health Interviews’, European Journal of Applied Linguistics, 4 (1): 93–115.
Meyer, B. (2012), ‘Ad Hoc Interpreting for Partially Language-Proficient Patients: Participation in Multilingual Constellations’, in C. Baraldi and L. Gavioli (eds), Coordinating Participation in Dialogue Interpreting, 99–114, Amsterdam/Philadelphia: John Benjamins.
Morales, L. S., W. E. Cunningham, J. A. Brown, H. Liu and R. D. Hays (1999), ‘Are Latinos Less Satisfied with Communication by Health Care Providers?’, Journal of General Internal Medicine, 14 (7): 409–17.
Napier, J. (2017), ‘Not Just Child’s Play: Exploring Bilingualism and Language Brokering as a Precursor to the Development of Expertise as a Professional Sign Language Interpreter’, in R. Antonini, L. Cirillo, L. Rossato and I. Torresi (eds), Non-professional Interpreting and Translation: State of the Art and Future of an Emerging Field of Research, 360–81, Amsterdam/Philadelphia: John Benjamins.
Orellana, M. F. (2003), ‘Responsibilities of Children in Latino Immigrant Homes’, New Directions for Youth Development, 100: 25–39.
Orellana, M. F., L. Dorner and L. Pulido (2003), ‘Accessing Assets: Immigrant Youth’s Work as Family Translators or “Paraphrasers”’, Social Problems, 50 (4): 505–24.
Orellana, M. F. and J. Reynolds (2008), ‘Cultural Modeling: Leveraging Bilingual Skills for School Paraphrasing Tasks’, Reading Research Quarterly, 43 (1): 48–65.
PACTE (2008), ‘First Results of a Translation Competence Experiment: Knowledge of Translation and Efficacy of the Translation Process’, in J. Kearns (ed.), Translator and Interpreter Training: Issues, Methods and Debates, 104–26, London: Continuum.
Pérez-González, L. and S. Susam-Sarajeva (2012), ‘Non-professionals Translating and Interpreting: Participatory and Engaged Perspectives’, The Translator, 18 (2): 149–65.
Roberts, R. (1994), ‘Community Interpreting Today and Tomorrow’, in P. Krawutschke (ed.), Proceedings of the 35th Annual Conference of the American Translators Association, 127–38, Medford, NJ: Learned Information.
Sawyer, D., F. Butler, J. Turner and I. Stone (2002), ‘Empirically-based Test Design and Development for Telephone Interpreting’, Language Testing Update, 31: 18–19.
Sela-Sheffy, R. and M. Shlesinger, eds (2009/2010), ‘Profession, Identity and Status’, special issues of Translation and Interpreting Studies, 4 (2) and 5 (1). Available online: https://benjamins.com/catalog/tis
Setton, R. and A. Lianglang (2009), ‘Attitudes to Role, Status and Professional Identity in Interpreters and Translators with Chinese in Shanghai and Taipei’, in R. Sela-Sheffy and M. Shlesinger (eds), Identity and Status in the Translatorial Professions, 89–118, Amsterdam/Philadelphia: John Benjamins.
Smith, D. (2013), ‘Mandela Memorial Interpreter Says He Has Schizophrenia’, The Guardian, 12 December. Available online: https://www.theguardian.com/world/2013/dec/12/mandela-memorial-interpreter-schizophrenia-sign-language (accessed 31 December 2019).
Stanfield, C. and W. Hewitt (2005), ‘Examining the Predictive Validity of a Screening Test for Court Interpreters’, Language Testing, 22 (4): 438–62.
Swabey, L. and P. Gajewski Mickelson (2008), ‘Role Definition: A Perspective on Forty Years of Professionalism in Sign Language Interpreting’, in C. Valero-Garcés and A. Martin (eds), Crossing Borders in Community Interpreting: Definitions and Dilemmas, 51–80, Amsterdam/Philadelphia: John Benjamins.
Torikai, K. (2010), ‘Conference Interpreters and Their Perceptions of Culture: From the Narrative of Japanese Pioneers’, Translation and Interpreting Studies, 5 (1): 75–93.
Toury, G. (1984), ‘The Notion of Native Translator and Translation Teaching’, in W. Wilss and G. Thome (eds), Translation Theory and Its Implementation in the Teaching of Translating and Interpreting, 186–95, Tübingen: Gunter Narr.
Tse, L. (1996), ‘Language Brokering in Linguistic Minority Communities: The Case of Chinese- and Vietnamese-American Students’, Bilingual Research Journal, 20 (3–4): 485–98.
Valdés, G. (2003), Expanding Definitions of Giftedness: The Case of Young Interpreters from Immigrant Communities, Mahwah, NJ: Lawrence Erlbaum Associates.
Valdés, G. and C. V. Angelelli (2003), ‘Interpreters, Interpreting and the Study of Bilingualism’, The Annual Review of Applied Linguistics, 23: 58–78.
Valdés, G., C. Chávez and C. V. Angelelli (2000), ‘Bilingualism from Another Perspective: The Case of Young Interpreters from Immigrant Communities’, in A. Roca (ed.), Research on Spanish in the United States: Linguistic Issues and Challenges, 42–81, Somerville, MA: Cascadilla Press.
Valdés, G. and R. Figueroa (1994), Bilingualism and Testing: A Special Case of Bias, Norwood, NJ: Ablex Publishing Corporation.
Valdés, G. and M. Geoffrion-Vinci (1998), ‘Chicano Spanish: The Problem of the Underdeveloped “Code” in Bilingual Repertoires’, Modern Language Journal, 82 (4): 473–501.
Winston, E. and C. Monikowski, eds (2013), Evolving Paradigms in Interpreter Education, Washington, DC: Gallaudet University Press.
Wolf, M. and A. Fukari, eds (2007), Constructing a Sociology of Translation, Amsterdam/Philadelphia: John Benjamins.
Yee, L., M. Diaz and T. Spitzer (2003), ‘California Assembly Bill 292’. Available online: http://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=200320040AB292 (accessed 31 December 2018).
Zentella, A. C. (1997), Growing Up Bilingual: Puerto Rican Children in New York, Oxford: Blackwell Publishers.


7

Tailoring translation services for clients and users

Kaisa Koskinen

1. Introduction

We commonly talk about translation services and thus define translation as a service, offered by language service providers (LSPs). The aim of this chapter is, first, to discuss what a service actually entails and, second, to argue for the benefits of specialized service design methods in tailoring translation services. One such methodology has been designed specifically for translation: user-centred translation (UCT) offers an array of methods to help ensure that translations match the needs and expectations of their future users. The focus of UCT is on end users, but LSPs can also use similar methods to design services for their clients. I argue that maintaining this distinction between the end users of translations and the clients or buyers interacting with the LSP is crucially important, as is attending to the needs of both. As the translation market is once again being reshaped by technological change, the only way for contemporary translation agencies to survive into the next cycle of development may well be through increased transparency, both about the kinds of tailored services clients can expect to gain from partnering with a particular translation agency (as opposed to using raw MT or buying post-edited MT on an online platform) and about how that LSP ensures that the resulting end products truly match the needs and expectations of the end users of the translations in various locales. In contemporary discussions of professional translation, technology is a dominant feature: the promise and delivery of machine translation is either hyped or mocked, and technology-driven solutions to the eternal questions of quality and efficiency are being sought. It is clear by now that technological advances have significantly changed translation processes, making turnaround



times radically shorter. Digitalization, increasing computing power and cloud-based approaches have revolutionized, and continue to revolutionize, industry processes. Technological advances are resulting in translation being perceived as a form of utility: an instantaneous and ubiquitous service expected to be available in any language pair, in any (digitally connected) locale (Choudhury and McConnell 2013). The segment-for-segment approach of translation memory and machine-translation-based computerized translation reduces translation to a linear process of matching chunks of text with equivalent chunks modelled on, or repeating, previous similar chunks. As such, it is a radical deviation from hermeneutical translation theories emphasizing interpretation and creative rewriting (Stolze 2003), from target-oriented approaches attentive to norms and cultural expectations (Toury 1995), and from functional approaches that focus on the purpose of the translation in determining the desired target-text features (Reiss and Vermeer 1984). This is not a new revelation. Anthony Pym, for example, has been pointing to the return of this simplistic, linear view of translation for quite some time (e.g. 2007: 102). My intention here is not to lament the state of the translation industry and the increasing computerization of translation. On the contrary, technology is responding to a growing need to match multilingual content and, in doing so, is contributing greatly to the democratization of language practices on a global scale. My point is more focused. I argue that, in spite of the development of MT-centred approaches, the kind of translation that twentieth-century translation theories were trying to capture has not disappeared.
Instead, the field has become increasingly polarized: in addition to heavily automated translation provision, there is also a need to continuously develop more individualized services for clients who need more tailored messages for their audiences and who need to make sure the intended emotional and cognitive effects are achieved (e.g. in translating marketing texts, political and ideological documents, instructive texts or texts embedded in arts and entertainment). Polarization of needs was also the core takeaway of a recent survey of quality assessment tools carried out by the Translation Automation User Society (TAUS) and reported by Sharon O’Brien (2012: 74). Static error models based on segment-for-segment correspondence of the end product with its source text cannot cater for all kinds of translation needs, whether ‘emerging’ or traditional:

The TAUS QE benchmarking exercise demonstrated that the preferred method for evaluating translation as a product in the translation industry is the error typology, with associated penalties and severity levels. This model, while appropriate in some contexts, cannot cater well for emerging content types, various communication channels and new needs. A more dynamic approach to QE seems to be needed by at least some members of the translation production sector.

It may well be that, in the future, some tailor-made multilingual services will not be labelled translation, as the term ‘translation’ may eventually come to denote machine translation only. Some agencies are already marketing more creative forms of translation under the label of transcreation, and journalistic combinations of translating and rewriting are called journalation and transediting, among other names. Regardless of which labels we choose to use, LSPs will continue to find lucrative business models beyond the translation-as-utility paradigm, in genres where finding just the right tone for a particular group of addressees, or creating a recognizable voice for a strong business brand, is essential. These services require customer insight and knowledge of the end users’ needs and preferences. Technology-driven innovations and the need-for-speed attitude have pushed many translation agencies into competitive bidding in which successful business models are difficult to maintain. It therefore also makes good business sense to steer away from harsh competition on price alone, and to sail to the bluer seas of innovation (Kim and Mauborgne 2005) through a diversified service offering (Adams 2013) and by designing language products and services with added value beyond automated linguistic operations.

The notion of translation as a service is widespread among LSPs. One can, however, argue that the focus of attention has recently been more on the aspect of (fast, accurate and economical) provision than on what a service might actually entail. A service can, of course, also be delivered in the form of a standardized utility for the masses (e.g. electricity, or machine translation as a utility), but in categorizations of different kinds of services, translation is typically defined as a knowledge-intensive professional service (Aarikka-Stenroos 2010: 9).
In economic service studies, service tends to be seen in active terms, as processes where clients and providers ‘work together to transform some state’, applying the competences, capabilities, knowledge, skills and resources of the provider to help clients transform their businesses (Spohrer and Maglio 2008: 238). The authors particularly emphasize the element of co-production, leading to co-creation of value (240). A translation service, it follows, consists of both the professional competences offered by the translation

142

The Bloomsbury Companion to Language Industry Studies

provider and an element of cooperation with the client in creating optimized service processes. Professional services are defined by characteristics such as a high degree of customization, information asymmetry between the client and the service provider and a related risk factor to the client (Aarikka-Stenroos 2010: 10). In short, clients do not necessarily understand what is essential to the service, and they often cannot judge the outcome.

To be meaningful for customers, services have to be designed to meet their needs and expectations, and in dialogic interaction with them. The aim is to design services that are user-friendly, competitive and relevant to the customers. To achieve this, the design process needs to be user-centred, holistic and co-creative, and based on sequencing and evidencing the service process (Stickdorn and Schneider 2010). Simply put, service design means making a service meet the user’s and customer’s needs (Interaction Design Foundation 2017; emphasis added).

This differentiation of users and customers, as well as the notion of numerous stakeholders both internal and external to the process, is useful for translation studies as well. The end users of translation services typically consist of the clients of the clients of translation service providers, and catering to the needs of the end users by designing texts that they find acceptable, usable and aesthetically pleasing is therefore in the best interest of the clients as well. At the same time, the clients purchasing translation services are encountering various processes (e.g. project management and invoicing), and translation service providers can also enhance the service experience of their clients and enlarge the service palette available to them through specialized and interactive service design processes.
Many LSPs are undoubtedly already using methods related to service design, with or without using that label and with more or less complete engagement with the ideology of service design thinking. This chapter discusses recent ideas developed in translation studies that translators and LSPs can apply to make their user-centred or client-oriented aspirations more focused, more consistent, more robust and easier to document.

2.  Research focal points

The relevance of future readers for shaping a translation has always been recognized by translators and translation scholars alike. Household names in translation history from Nida (functional equivalence) to Vermeer (skopos) have emphasized the need to target the translation to the needs of its users. Reception
of translation is now a long-standing research interest in translation studies, and the fields of literary and audiovisual translation in particular have been studied extensively (for an overview, see Suojanen, Koskinen and Tuominen 2015a). These genres are not the most relevant ones for most LSPs, but the accumulating findings are providing useful takeaways for the translation industry as well, for example, in the areas of cross-cultural differences in preferences and tastes (e.g. in translating humour; Chiaro 2010), and the role of group pressure and unconscious viewing practices in reported experience (Tuominen 2012). In recent years, increasing research activity has focused on raw and post-edited machine translation, raising new questions about the usability of less-than-perfect translation products. These studies are beginning to reveal real users’ experiences, such as their tolerance of less-than-perfect translations, that may be counterintuitive to many language professionals (see, for example, Doherty and O’Brien 2012; Bowker 2015).

Reception studies of all kinds have typically focused on published translations. As they have been conducted post facto and outside the production cycle, there has rarely been a direct feedback loop to translation practice. Still, the results of reception studies, particularly those with real end users, produce direct takeaways for the industry. Once such research starts to accumulate, and as long as there are feedback loops from research to the industry, the potential for direct impact on practices will increase.

Readers are close to translators’ hearts. Most translators would agree with the idea that they have their readers’ best interest in mind when they translate, but more often than not, what is ‘best’ for the reader is decided on by the translator, not defined in and through active interaction with the readers.
Translators are not typically engaged in the early phases of localization processes, and translation is often outsourced. The model of user-centred translation (UCT), recently developed by the present authors at Tampere University, Finland, aims to shift translation to a more central position in the production cycle, and to shift the focus away from equivalence-based error typologies towards assessing the success of a translation against the real needs of its users. UCT is defined as follows: ‘In user-centered translation, information about users is gathered iteratively throughout the process and through different methods, and this information is used to create a usable translation’ (Suojanen, Koskinen and Tuominen 2015a: 4). Building on long-standing research efforts in usability research, UCT sets out to provide a set of practical tools and methodologies for user-centred translation, ranging from mental models, such as personas, to usability heuristics, translation and usability tests and eye-tracking experiments in laboratory environments. The core publication (Suojanen, Koskinen and Tuominen 2015a) explains a number of methods, but it is clear
these are only a handful of examples of what the field of usability research can offer, and the authors encourage readers to explore the area further. The UCT model was first introduced in Finland in 2012, and it has aroused considerable interest in the local translation community, as well as internationally. Translators experience affinity with the reader-oriented ideology of the model, and translation trainers have found the exercises easy to apply in their teaching (the book is written in a textbook format).

Many translators who have given us their feedback on the UCT ideas, having first emphasized how they themselves find the ideas and methods very useful, have expressed doubts that the agencies would ever go for these. The agencies, in turn, have been enthusiastic, but lamented that they find it difficult to sell these ideas to their translators. Some dialogue is now underway, but it remains to be seen whether UCT can really deliver on its promise. What both translators and agency representatives may be overlooking in these responses, however, is the crucial importance of selling UCT (or any other model) not to one another but to their clients, and of creating new business models around UCT competences, capabilities, knowledge, skills and resources to help their clients transform their businesses.

In other words, expertise in UCT methods opens up possibilities for service design projects with clients who wish to rethink their localization policies. UCT works particularly well for customers who also apply a user-centred business model in their own activities, and user-friendliness is an ideology many businesses subscribe to today. It makes good business sense for a client who has invested in the usability of their product, and potentially also in the source-language documentation, to also invest in making sure that other language versions will be equally usable for their target audiences.
This chapter is based on the division between end users (i.e. the readers the translated document is intended for) and clients (i.e. the customers ordering and paying for the translation) (cf. somewhat different definitions in Englund-Dimitrova and Ehrensberger-Dow 2016: 5). Whereas readers have long been the focus of attention in translation studies, clients and commissioners have seldom been studied (cf., however, Havumetsä 2012 and Risku, Pein-Weber and Milošević 2016). Even in today’s translation studies, the now classic treatise by Justa Holz-Mänttäri (1984) is still unusually detailed in its listing of actors involved in translatorial action. She (1984: 109) identifies six key roles:

● the initiator, who needs a translation for a purpose;
● the client, who commissions translatorial action;
● the source text producer;
● the translator;
● the user of translation; and
● the addressee, who is the end user of the translation.

As one can see, in Holz-Mänttäri’s actor-based model, the protagonists of this chapter, the client and the end user, are both just nodes in a larger network of roles. While in practice these roles may sometimes be conflated into one person, analytically it is meaningful to keep them separate, as different stakeholders involved in translatorial action. The client, as the one who acts as the buyer, may not necessarily have full information on the initiator’s purpose, and this may lead to the kinds of information gaps translators often find frustrating (Risku, Pein-Weber and Milošević 2016). Similarly, the client is not typically the user of the translation and is also not necessarily fully aware of the intended uses or the end users targeted. Another plausible explanation for confusion in the network lies in translation being a (product-oriented) service, and services are known to be more difficult to market than products. It has also been argued that translation is a particularly difficult service to sell or buy (Aarikka-Stenroos 2010: 5, 14). If translation is, indeed, a difficult service to sell, there is all the more reason to turn to service design to gain a deeper understanding of the current and future needs of the clients and to innovate new services to match these needs.

Risku, Pein-Weber and Milošević (2016) discuss customers’ growing expectations that translation agencies will offer services beyond translation, and the increasingly complex contexts the clients need to juggle. Existing translation studies literature indicates that there is room for improvement in managing people across the spectrum of roles (Combe 2011). Project managers, a group not yet found in Holz-Mänttäri’s 1984 list, have been identified as the middlemen responsible for managing people in a translation process (Risku 2002).
If this management angle fails and translators feel left out, they will not produce their best work and may leave the profession entirely (Abdallah 2010). It has also been found that clients and translators need to be supported in defining their roles and responsibilities (Risku, Pein-Weber and Milošević 2016). Finally, catering for end users’ user experience has been identified as a new criterion for a successful end product (Bowker 2015; Suojanen, Koskinen and Tuominen 2015a). Indeed, an emotion-based sense of experience can be seen as a key element of contemporary culture and is therefore of growing (economic) importance for any business (Sundbo and Sørensen 2013: 8), language services included. Recent advances in language technology have shifted research and innovation activities in the direction of quantifiable equivalences and productivity gains.
The next wave of success may well come from the opposite direction: from learning to respond to the affective needs and desires of those involved. Service design and UCT are both based on a human-centric approach in the sense of stepping into the other person’s shoes and empathetically understanding their experience (Cook et al. 2002). They are interactive and iterative approaches based on direct communication and on active listening and observation of actual practices. The methods include interviewing, shadowing and observation, modelling, testing and revising. A number of playful methods can also be used: for example, personas (Suojanen, Koskinen and Tuominen 2015b) or love letters (Koskinen and Ruokonen 2017) are often not only illuminating but also fun to use.

The division of roles, and the growing complexity of the client/buyer roles, can become obstacles to bringing in service design or UCT activities, with their expectations of intensive collaboration and knowledge sharing. Another obstacle may come from the research side. Whereas the UCT model for enhancing the usability of translations for end users can be built on a long tradition of reader-oriented approaches, translation studies literature offers much less support for understanding translation from a service viewpoint. The language industry studies approach that this volume advocates is only emerging, and business and people skills are only beginning to be included in translator training programmes and research agendas, although their relevance has been increasingly recognized (Aarikka-Stenroos 2010: 6). This is a gap the discipline needs to fill in the future. Interesting new avenues for research could perhaps be found in the direction of service studies (or serviceology or service science), which looks at service management and service innovation across fields.
Within this framework, some researchers are also looking at multilingual communication from a service point of view and basing language technology solutions on iterative and user-centred design processes (Lin and Ishida 2014).

3.  Informing research through the industry

Both UCT and service design are rooted in translation practice rather than translation theory, and many of their central elements may be found in current language industry methods and practices, albeit not necessarily under these labels and not always applied in a methodical manner. It is therefore pointless to develop them as a research exercise only, without direct input from the industry. As the section above indicates, there are gaps in translation studies research in the areas of service design, customization and diversification of
business models. This is where dialogue with industry partners can significantly enrich research agendas and support realistic modelling of practices with high levels of applicability in the industry. The translation industry is also a massive repository of empirical data waiting to be uncovered. In addition to the translation corpora generated by the use of CAT tools, LSPs routinely collect digitalized project management data that could be used to elicit research data and to co-design research questions that would both generate new research knowledge and provide new viewpoints for operationalizing business models. The most interesting, and most under-studied, element might, however, be found in the mundane, everyday interactions between buyers and sellers, between clients and project managers and between project managers and translators, as well as in following the route of the translations down to the actual end users.

One obvious avenue that unites UCT and service design, and which is also linked to a current paradigm in translation research, is ethnography. Observational fieldwork studies in translation agencies, in client organizations and among end users can be used in either research-oriented or industry-oriented ways to gain insight into the life-worlds of the clients and end users (Drugan 2013). Such research would allow new knowledge to emerge, as the existing ethnographic research in translation studies tends to be fairly ‘translator-centred’ (Ehrensberger-Dow 2017). Researching translation as a service, in particular, would definitely need to be organized around gathering information on current practices, either via ethnographic observation or via user interviews among different client segments and in different agencies. It is reasonable to assume that LSPs have approached the issue of service design and customization in multiple ways, ranging from extremely active and interactive approaches to entirely passive ones.
Detailed ethnographic studies would allow researchers to tap into this rich pool of practical innovations and to gain a better understanding of the current realities of the language industry.

4.  Informing the industry through research

The UCT model is built on the belief that the translation industry can significantly improve its business practices and its language service provision by taking end users seriously and by developing processes in which translation products and services can be evaluated and tested by actual users in real or realistic situations. Prototyping and usability testing can reveal preferences significantly different from what the team might have anticipated. For example, in an
experiment conducted by translation students testing the usefulness of usability testing in a real translation assignment, the participants showed an unexpected dislike of informal and colloquial style in teaching material, and the translation style needed to be adapted to suit not the taste of the translators but the taste of the participants, who shared the demographic traits of the intended users (Suokas et al. 2015). The design of this test was low-tech and easy to organize once suitable participants were available.

A more technologically demanding but visually compelling method for researching stylistic preferences, readability issues and the understandability of web content can be found in eye-tracking studies: heat maps can be used to visualize reading patterns, gaze fixations and speed of task completion in comparative studies of multilingual content. These results can then be used to diagnose potential usability issues and to compare service provision for users in different languages (Doherty and O’Brien 2012). Eye-tracking and usability tests are equally useful for testing material translated for the clients (for UCT purposes) and for testing the functionalities of the services of the translation agency itself (as data for service design activities).

Both in targeting the end users of translations and in tailoring services to clients’ needs, numerous opportunities open up for research that can support service innovation. Although a number of usability methods can be employed without empirical research on actual users, and their effectiveness can be tested in cooperation, the most promising avenue for innovation lies in researching the real users. Since service design and UCT focus on real users, and aim to capture the specific rather than the average, the research impetus focuses on a detailed understanding of individuals.
Both UCT and service design emphasize the role and allure of ‘little data’, that is, a detailed understanding of individual clients and end users. Together they provide a plethora of methods for individualized service provision to cover both the end of the market that favours fully automated 24/7 translations with a fast turnaround and the end that yearns for tailor-made ‘brand-perfect’ translations (Whiteman 2016). Cooperation with researchers trained in usability methods and ethnographic research will allow LSPs to gain new insights into their own practices and into the needs and desires of their clients.

One issue that repeatedly comes up in discussions with industry stakeholders is the return on investment in UCT, and the same question can be asked of service design. As these approaches are designed to capture and match the expectations of a particular group of users, how do the time and resources put into these methods pay off? Indeed, services are defined as intangible and
perishable (Aarikka-Stenroos 2010: 8), whereas the notion of translation as a utility is built on the ideas of recycling and automation. It may well be that those LSPs heavily invested in translation technology will find UCT and service design initially unappealing because of this conflict of ideologies. However, they might do so at their own peril: the utility approach will only become more fully automated in the future, and the more MT is brought into the picture, the more important it will be to know exactly when it can be used successfully and how the use of MT affects user experiences in different user groups.

5.  Concluding remarks

Both UCT and service design aim to capture the unique expectations and needs of particular customers. The generalizability of research results in UCT and service design efforts, and the scalability of practices from one context to numerous others, is, however, a crucial issue in terms of making the efforts worthwhile. While the whole idea is to customize services and end products, this customization also needs to lead to models and processes that can be repeated. By combining a more fine-grained understanding of the needs and preferences of particular types of clients and segments of users with empirical evidence of user satisfaction with different translation methods and strategies, LSPs will be able to feed the findings into regular workflows. Cooperation with researchers allows them to design evidence-based practices with proven results for particular purposes and audiences.

UCT methods have been taught in Finnish universities for several years now (Suojanen, Koskinen and Tuominen 2015b; Suokas et al. 2015), and they are gaining ground internationally as well. Since UCT and service design are closely linked, in terms of both aims and methods, these UCT skills can also be used as a stepping stone towards service design projects targeting the clients. The universities are, in other words, training people who already possess skills relevant to service design tasks in industry employment. To date, these new competences have gone largely unnoticed by the LSPs, who have been more focused on getting academic support for their technological innovation in general and MT endeavours in particular. The human element, however, is also relevant for all communication, and there is room for new business models beyond MT as well. New recruits with academic translator training can bring in new ideas and new practices. For more intensive cooperation, joint doctorates in university–industry partnerships could open up innovative new avenues. These are still rare.
Even more rare is the hiring of postdocs for research and development posts in LSPs. Translation studies PhDs often have a background in practical translator or interpreter training, and they frequently pursue pragmatic research questions in their theses. Still, it is not yet common practice to employ them in the translation industry for research and innovation tasks. This is, in a way, good news: there is a significant but largely untapped potential for business innovation in bringing researchers into a close dialogue with their industry partners.

References

Aarikka-Stenroos, L. (2010), ‘Translating Is a Service and Service Business, Too – Building Up “Business Know How” in Translating Studies’, in M. Garant (ed.), Current Trends in Translation Teaching and Learning Volume III, 3–34, Helsinki: University of Helsinki. Available online: http://www.cttl.org/uploads/5/2/4/3/5243866/currenttrendstranslation2010.pdf (accessed 11 August 2017).
Abdallah, K. (2010), ‘Translators’ Agency in Production Networks’, in T. Kinnunen and K. Koskinen (eds), Translators’ Agency. Tampere Studies in Language, Translation and Culture B4, 11–46, Tampere: Tampere University Press. Available online: http://urn.fi/urn:isbn:978-951-44-8082-9 (accessed 11 August 2017).
Adams, N. (2013), Diversification in the Language Industry: Success beyond Translation, Bellbird Park: NYA Communications. Available online: https://trove.nla.gov.au/work/185011397?q&versionId=201465665
Bowker, L. (2015), ‘Translatability and User eXperience: Compatible or in Conflict?’, Localisation Focus. The International Journal of Localisation, 14 (2): 15–27.
Chiaro, D., ed. (2010), Translation, Humour and Literature: Translation and Humour, Vol 1, London: Bloomsbury.
Choudhury, R. and B. McConnell (2013), TAUS 2013 Translation Technology Landscape Report. Available online: https://www.taus.net/think-tank/reports/translate-reports/taus-translation-technology-landscape-report (accessed 14 August 2017).
Combe, K. R. (2011), ‘Relationship Management: A Strategy for Fostering Localization Success’, in K. J. Dunne and E. S. Dunne (eds), Translation and Localization Project Management: The Art of the Possible, 319–45, Amsterdam: John Benjamins.
Cook, L. S., D. E. Bowen, R. B. Chase, S. Dasu, D. M. Stewart and D. A. Tansik (2002), ‘Human Issues in Service Design’, Journal of Operations Management, 20: 159–74.
Doherty, S. and S. O’Brien (2012), ‘A User-Based Usability Assessment of Raw Machine Translated Technical Instructions’, in Proceedings of the 10th Conference of the Association for Machine Translation in the Americas (AMTA), 28–31, Stroudsburg, PA: AMTA. Available online: http://www.mt-archive.info/AMTA-2012-Doherty-2.pdf (accessed 13 August 2017).
Drugan, J. (2013), Quality in Professional Translation: Assessment and Improvement, London: Bloomsbury.
Ehrensberger-Dow, M. (2017), ‘An Ergonomic Perspective on Translation’, in J. W. Schwieter and A. Ferreira (eds), The Handbook of Translation and Cognition, 332–49, Hoboken, NJ: John Wiley & Sons.
Englund-Dimitrova, B. and M. Ehrensberger-Dow (2016), ‘Cognitive Space: Exploring the Situational Interface’, Translation Spaces, 5 (1): 1–19.
Havumetsä, N. (2012), The Client Factor: A Study of Clients’ Expectations Concerning Non-literary Translators and the Quality of Non-literary Translations, PhD diss., University of Helsinki, Helsinki.
Holz-Mänttäri, J. (1984), Translatorisches Handeln. Theorie und Methode. Annales Academiae Scientiarum Fennicae B 226, Helsinki: Helsingin tiedeakatemia.
Interaction Design Foundation (2017), ‘The Principles of Service Design Thinking – Building Better Services’. Available online: https://www.interaction-design.org/literature/article/the-principles-of-service-design-thinking-building-better-services (accessed 7 August 2017).
Kim, W. C. and R. Mauborgne (2005), Blue Ocean Strategy: How to Create Uncontested Market Space and Make the Competition Irrelevant, Boston: Harvard Business School Press.
Koskinen, K. and M. Ruokonen (2017), ‘Love Letters or Hate Mail? Translators’ Technology Acceptance in the Light of Their Emotional Narratives’, in D. Kenny (ed.), Human Issues in Translation Technology, 8–24, London: Routledge.
Lin, D. and T. Ishida (2014), ‘User-Centered Service Design for Multi-language Knowledge Communication’, in M. Mochimaru, K. Ueda and T. Takenaka (eds), Serviceology for Services: Selected Papers of the 1st International Conference of Serviceology, 309–17. Available online: DOI 10.1007/978-4-431-54816-4_32 (accessed 7 August 2017).
Pym, A. (2007), ‘Natural and Directional Equivalence in Theories of Translation’, Target, 19 (2): 271–94.
Reiss, K. and H. J. Vermeer (1984), Grundlegung einer allgemeinen Translationstheorie, Tübingen: Niemeyer.
Risku, H. (2002), Translationsmanagement. Interkulturelle Fachkommunikation im Informationszeitalter. Translationswissenschaft 1, Tübingen: Narr.
Risku, H., C. Pein-Weber and J. Milošević (2016), ‘“The Task of the Translator”: Comparing the Views of the Client and the Translator’, International Journal of Communication, 10: 989–1008.
Spohrer, J. and P. P. Maglio (2008), ‘The Emergence of Service Science: Toward Systematic Service Innovations to Accelerate Co-Creation of Value’, Production and Operations Management, 17 (3): 238–46.
Stickdorn, M. and J. Schneider (2010), This Is Service Design Thinking. Basics, Tools, Cases, Amsterdam: BIS Publishers.
Stolze, R. (2003), Hermeneutik und Translation, Tübingen: Gunter Narr.
Sundbo, J. and F. Sørensen (2013), ‘Introduction to the Experience Economy’, in J. Sundbo and F. Sørensen (eds), Handbook on the Experience Economy, 1–17, Cheltenham: Edward Elgar.
Suojanen, T., K. Koskinen and T. Tuominen (2015a), User-Centered Translation. Translation Practices Explained, London: Routledge.
Suojanen, T., K. Koskinen and T. Tuominen (2015b), ‘Usability as a Focus of Multiprofessional Collaboration: A Teaching Case Study on User-centered Translation’, Connexions – International Professional Communication Journal, 3 (2): 147–66.
Suokas, J., K. Pukarinen, S. von Wolff and K. Koskinen (2015), ‘Testing Testing: Putting Translation Usability to the Test’, Trans-kom, 8 (2): 499–519. Available online: http://www.trans-kom.eu/bd08nr02/trans-kom_08_02_09_Suokas_ua_Testing.20151211.pdf (accessed 17 August 2017).
Toury, G. (1995), Descriptive Translation Studies and Beyond, Amsterdam: John Benjamins.
Tuominen, T. (2012), The Art of Accidental Reading and Incidental Listening: An Empirical Study on the Viewing of Subtitled Films, PhD diss., Tampere University Press, Tampere. Available online: http://urn.fi/URN:ISBN:978-951-44-9008-8 (accessed 17 August 2017).
Whiteman, C. (2016), ‘Brand-perfect Translations Drive Website Success in Emerging Markets’, Talking New Media, blog post. Available online: http://www.talkingnewmedia.com/2016/10/14/brand-perfect-translations-drive-website-success-in-emerging-markets/ (accessed 17 August 2017).

8

Professional translator development from an expertise perspective Gregory M. Shreve

1. Introduction

The language industry, like any business sector that employs professionals, must be concerned with ‘how well’ those professionals perform the tasks for which they have been hired. Professionals are individuals who have accumulated knowledge in a specific area of activity and are compensated, monetarily and with other benefits, for practising that activity – hence the notion of ‘professional practice’. The activities for which professionals are compensated are detailed in organizational job descriptions or, more abstractly, in a professional practice model that is standardized and disseminated in a professional community, such as those developed in domains like nursing and teaching – but not, as of yet, in the language industry.

Professional practice models, whether global or local, are generally integrated with implicit or explicit performance models. These models detail mechanisms for assessing the quality of practice and determining the trajectory of improvement in performance over time – both issues that have to be dealt with organizationally in areas such as compensation and promotion. If quality of performance progresses over the period of practice, under certain conditions practitioners may come to be regarded as ‘experts’: individuals who are demonstrably very good at performing the tasks associated with their jobs and who, by objective measures such as those embodied in organizational performance assessment instruments, are much better than average practitioners.

Organizational models of practice and performance, whether implicit or explicit, might lend themselves to exploration through the lens of ‘expertise studies’, a branch of applied psychology. For instance, the task domain specificity

associated with the expertise construct is compatible with the task specifications associated with a job description or professional practice model. The emphasis on progression implied by periodic performance assessment is mirrored in the notion of the ‘acquisition of expertise’ and Ericsson and Charness’s idea of achieving ‘consistently superior performance’ (1997: 3).

Nevertheless, there are practical problems to be considered if we are to look at, for instance, the professional practice of translation (including localization, post-editing, terminology work, project management or other professional practice roles) in the language industry from the expertise perspective. One problem is the difficulty of defining any given set of translation or localization tasks as a ‘coherent’ task domain. A coherent domain includes a set of discrete, definable work activities and problem sets that persists sufficiently through time to allow skills and knowledge to be acquired and then applied with growing efficacy by an individual. In the context of today’s language industry, work activities are tending to broaden in scope, rather than becoming more definable and coherent. Another problem is disentangling what part of what one ‘learns’ about a task domain is relevant only to that particular organizational role or job description and what part is transferable to other similar, but not identical, roles. So, for instance, if I develop a high level of translation skill as a translator, how much of that accumulated skill would improve my performance as a post-editor or localizer? Another problematic issue, the one we address in greater detail here, is how one accommodates the central expertise studies idea of ‘deliberate practice’ in a work context that quite possibly militates against the very properties of deliberate practice that make it an effective engine of performance enhancement.

2.  Research focal points

Professional practice and its development raise several important issues in organizations, issues that have to be dealt with practically by human resources personnel and department or project managers. In the language industry, when we hire translators, localizers and other practitioners, whether in-house or as independent (freelance) contractors, we must also deal with these concerns. Any approach to professional practice must address the twin issues of the nature and quality of performance. These issues have to be dealt with at hiring, but also again at various stages of the professional’s tenure with the organization.

For instance, at hiring, how does the organization determine whether or not an individual is capable of practising a particular activity – a coherent task domain – at the desired level? This concern intersects, in the area of translation studies, with questions of ‘translation competence’ and ‘competences’, and with how these notions bear on practical considerations of preparedness and ability to practise. Although the nature of a professional’s initial qualifications generally lies outside the scope of this chapter, one aspect that is germane is that existing knowledge of the tasks required in a job can determine the starting point for accumulating greater expertise. Generally, a written job description serves as a specification of a coherent task domain and will also lay out, in greater or lesser detail, the initial competences required.

Defining the nature of the job, the domain in which practice or performance occurs, is just an initial step. As individuals practise in the domain, the organization is necessarily concerned with determining in some objective way the quality of performance at any given point in time and, over time, with documenting whether or not the performance of a person in a particular position improves. Are there means to determine whether the incumbent in a position is becoming more efficient, more productive or more accurate? These interests generally fall within the arena of ‘performance assessment’, an area of personnel management concerned with the mechanisms and instruments used to describe and capture performance both synchronically and diachronically.

In the language industry, specific concerns would be determining what elements of practice to evaluate and how. Is our assessment focus to be primarily on the discrete translation result (product oriented), or should it also include some recognition of process-oriented issues? Within the context of process and product, what are the specific assessment criteria to be?
Do we have valid rubrics for assessing target text quality? Do we understand enough about the cognitive and social processes of translation to assess ‘process quality’? Given that the environment of corporate translation work is often team oriented, social factors may also become targets of assessment. For instance, how well does a translator communicate and work with others? There is a tendency for assessment to focus on product and process, but, as several translation scholars have recently argued, cognition in translation is also inherently social (Risku and Windhager 2013; Ehrensberger-Dow and Heeb 2016). Whether the target of assessment is product, process or social function, the feasibility, utility and validity of assessment metrics and instruments also become a significant concern.

If we can define the task domain, and adequately address the issue of assessment of practice, then we must fully examine the issue of change, particularly progression but perhaps also retrogression, in the quality of practice. Most organizations expect that levels of professional practice will not only be maintained but also be improved upon. These expectations are loosely associated with the notion of ‘professional development’, a framework of ideas that coheres around the central notions of maintaining professional status and improving one’s abilities, that is, ‘getting better’ at one’s job. The issue of improvement dogs the employer’s steps from initial hiring through to advancement and promotion and, perhaps, termination. This is precisely why larger organizations generally develop formal mechanisms to grapple with it.

At the intersection of expertise studies and professional practice, we have to ask ourselves how theoretical notions of translation expertise (and translation experts) intersect practically with organizational ideas of the task domain, of the quality of practice and of progression in skill level – as well as with the formal mechanisms employed (or not employed) to measure and promote progression. If organizational performance assessments show scant evidence of improved performance, of particular interest here is whether or not expertise studies can help us determine what aspects of the context of work or the delineation of the task domain prevented positive change. Conversely, in organizations where professional development seems extraordinarily successful, what expertise-related factors seem to promote it? We might look to such organizations for discrete examples of how to support professional development and, perhaps, foster expertise.
For instance, the investment company Vanguard was identified as a ‘Top Learning and Development Organization’ by Chief Learning Officer magazine (2016) because of its dedicated professional development arm, Vanguard University.

2.1.  Deliberate practice, expertise and the expert

‘Professional’ is not a synonym for ‘expert’. An expert is, according to expertise scholars, someone who exhibits ‘consistently superior performance on a specified set of representative tasks for the domain that can be administered to any subject’ (Ericsson and Charness 1997: 3). The discipline of expertise studies proposes that engagement in so-called ‘deliberate practice’ is the primary driver in the acquisition of expertise. Deliberate practice can be defined as ‘regular engagement in specific activities directed at performance enhancement in a particular domain, where domain is some sort of skilled activity’ (Shreve 2006: 29). The classic deliberate practice model sets out several conditions that the context of practice should meet (Ericsson et al. 1993):

1. Motivation and support: The individual must be motivated to perform the task and to voluntarily expend effort to improve performance.

2. Well-defined tasks: The model presumes that expertise is developed in a coherent task domain. That is to say, we become good at a certain finite set of tasks that are circumscribed and specifiable to a reasonable degree.

3. Appropriate difficulty: At the beginning of learning a new task, the task should be one that is not too difficult and that builds on what one already knows. As learning progresses, tasks should become more challenging, pushing at the boundaries of skill and existing knowledge. Absent a progressive gradation of difficulty, there is no ascending trajectory of expertise – the person performs essentially the same task over and over and progress stalls. The idea of ‘difficulty’ can include any number of factors that pose new cognitive or physical challenges relative to the previously practised skills. In the context of translation, for instance, new terminology, new text types and new translation technology are possibilities.

4. Informative feedback: The existence of feedback is a crucial factor in the success of learning any task. The deliberate practice model calls for both ‘adequate’ and ‘relevant’ feedback, meaning that feedback should be sufficient for the learner to understand areas of performance deficiency and the nature of errors. Further, the feedback should be directed specifically to any task elements that need improvement. Such feedback could include the day-to-day comments of a mentor, the results of periodic, accurate performance assessments or even the detailed comments of a more senior translation editor in a translation workflow.

5. Opportunities for repetition and for the correction of errors: On the basis of feedback, the practitioner needs to be given the opportunity to repeat the task, addressing any errors or deficiencies until the task is sufficiently mastered. The deliberate practice model specifically distinguishes between the simple repetition of tasks and the repetition with feedback and error correction that occurs in the context of deliberate practice. This latter aspect of deliberate practice is effortful and time-consuming – a potential problem in project-driven organizations like language service organizations, where time and schedule present stringent and often intractable project constraints.

2.2.  Ill- and well-defined problems

Early studies of expertise focused on ‘well-defined’ problems and task domains (for instance, the game of chess). In well-defined problems or tasks, the elements are clear (there are only so many pieces on a chess board and so many squares to which one may move them); the goal of or solution to a problem is unambiguous (achieving checkmate); there are clear constraints on solutions (one cannot violate the clearly stated rules of the game); and evaluating the success of a solution is simple and unambiguous (can a king escape capture: yes or no?). Such problems can often be formally or algorithmically represented.

On the other hand, as we move out of the arena of activities like chess or mathematics, we encounter more ‘ill-defined’ problems (van den Broek 2007: 4) where, for instance, (a) the nature and extent of the problem elements are unclear; (b) the nature of the desired outcome is not clearly definable; (c) there are multiple possible solutions and ways to achieve them (or, in the worst case, no solutions at all); (d) it is difficult or unclear how to evaluate how well the problem has been solved – criteria for success are vague or indeterminate; and (e) there is uncertainty about how to go about solving the problem (strategies) and about what constraints there are on applicable strategies (rules, principles and client instructions). Ill-defined problems may require correspondingly more ‘support’ from the workplace: more guidance, more feedback, more practice and more clarification of the principles and constraints that could possibly apply to solutions – but, as we shall see, the indeterminacy inherent in ill-defined tasks can also play a beneficial role if leveraged properly.

2.3.  Routine versus adaptive expertise

‘Routine expertise’ was the object of the earliest expertise studies, and many of the earliest theoretical constructs of expertise arose to explain routine experts. Routine experts get better and better at their tasks through deliberate practice, developing high levels of skill characterized by efficiency (as measured by speed and effort expended) and performance level (measured by quality of outcome). Performance improvements come primarily from cognitive changes, including the development of greater problem understanding, creation of task-related schemata in long-term memory, stronger problem-solving strategies, automation of lower-level operations, proceduralization and a greater ability to understand and think about the task (metacognition, planning and task awareness).

Routine experts can become very, very good at solving problems they are familiar with – indeed, this is easy to understand; it is the progressive development of that familiarity over time that has engendered their expert status. However, where they face difficulty is in dealing with novel situations: situations where elements of the task or problem being solved become unfamiliar, where the constraints or parameters have shifted or where familiar outcomes no longer suffice as templates for solutions. Here we can see a connection to well- and ill-defined problems; routine experts function quite well when the tasks they have mastered remain stable and well defined, but as the task conditions and parameters shift, the activity becomes more ill-defined relative to the existing skill set. Expert performance can falter.

Hatano and Inagaki (1986) were the first to distinguish routine expertise from ‘adaptive expertise’. Adaptive experts retain the efficiency and performance improvements of routine experts, but are able to deal more effectively with novelty and ambiguity. Adaptive experts apparently have a deeper understanding of the nature of their task domains and of the problems they encounter within them. This implies they possess, for instance, richer and more abstract schemata, more flexible problem-solving heuristics and greater metacognitive abilities. Problem recognition, task planning, solution generation, solution evaluation and strategy selection and adjustment are improved to the point where efficiency in the task and the ability to innovate achieve an optimal balance (Paletz et al. 2013; Schwartz, Bransford and Sears 2005). Clearly, the more ill-defined or variable the task domain and its problems become, the more advantageous the ability to innovate solutions.

What promotes adaptive expertise? According to Hatano and Inagaki (1986), three preconditions have to exist for it to arise. First, the task repertoire being practised cannot be static or stagnant; there has to be appropriate variation in the skill set being practised; this variation promotes cognitive flexibility rather than cognitive fixedness.
Second, the context of work (organizational culture) has to be one that values and promotes ‘understanding the system’ – that is, developing a deeper conceptual understanding of the work and its principles. Third, the work should be performed increasingly for intrinsic rather than extrinsic rewards – in other words, the work increasingly becomes its own reward. For some exploration of these issues ‘situated’ in the context of translation, see Muñoz Martín (2014).

2.4.  Individual and collaborative or team expertise

Before we turn our attention to some important systemic constraints on the development of expertise, it is important to recognize that while much of the early work in expertise studies applied to individual expertise, it can also be successfully applied to the collaborative work of groups or teams (see Risku, Rogl and Milošević, this volume). Teams can show progression in the ability to perform work efficiently and accurately, to develop routine expertise in members and even to exhibit team adaptive expertise (Paletz et al. 2013). Team expertise implies the development and increased efficacy of a variety of interpersonal skills. For instance, team members need to be able to monitor one another’s progress and understand one another’s roles and skill sets. They need to be able to coordinate tasks effectively and adjust schedules collaboratively, and, of course, they need to communicate effectively (Kozlowski 1998). Of particular importance, supporting the ability to work together effectively as a team, is a robust shared mental model of the nature of the work and its context (Entin and Serfaty 1999).

Team adaptive expertise is promoted in the same way individual adaptive expertise is, with the additional constraint that team members need to have the opportunity and means to share their skills and knowledge. This is a strand of research that sees both individual and team expertise as socially embedded and as an emergent product of knowledge distributed in social systems. Expertise arises from appropriate participation in ‘networks of expertise’ and communities of practice. Notions of ‘distributed and embedded expertise’ that are central to this research area could be particularly pertinent to project-based environments where work is usually accomplished in teams, as, for example, translation and localization teams (Fenton-O’Creevey and Hutchinson 2010).

2.5.  Constraints on the development of expertise

Deliberate practice is the primary driver of expertise. It can lead to routine expertise and, given certain additional elements, adaptive expertise. However, the requisite practice can be difficult to accumulate. Ericsson and his colleagues (1993: 369) outlined three kinds of constraints on deliberate practice: (1) resource constraints; (2) motivational constraints; and (3) effort constraints. Resource constraints refer to parameters of task performance such as sufficient time, the availability of relevant equipment or facilities, mentoring and learning opportunities and so on. Motivational constraints address limits on both intrinsic and extrinsic motivation. Deliberate practice is hard and, as many authors have pointed out, most individuals (and teams) do not participate in it ‘spontaneously’ because it is not usually ‘inherently motivating’ (Ericsson, Krampe and Tesch-Roemer 1993: 368–9). Finally, effort constraints refer to the amount of time it takes to practise and to the upper limits on how much one can practise on a daily or weekly basis and still sustain the focus and intensity required to benefit from the practice. The so-called ‘10,000-hour rule of expertise’ – an overgeneral characterization of the fact that it takes a long time to develop expertise – reflects the influence of effort constraints.

In any study of expertise development associated with a particular work environment, we should be able to look not only at what promotes expertise (e.g. support for deliberate practice, providing conditions that encourage innovation and creativity) but also at what inhibits it; the three categories of constraints can provide a starting point for identifying organizational inhibitors. In the next section, we take each of the ‘focal points’ of expertise research and examine them in light of what we can learn using professional work in the language industry as an object of study.

3.  Informing research through the language industry

The expertise perspective lays out several useful constructs that one might use to train an expertise lens on the language industry and thereby gain some useful research insights. The expertise construct puts forward the principle that translation expertise develops in a particular task domain over an extended period of time and under specific task conditions. That task domain must be stable and defined enough to allow expertise to develop, but not so static or stagnant that novelty and innovation are stifled. This implies, perhaps, a need to foster both routine and adaptive expertise in organizations – although it raises a question: is this a balance to be fostered only in individuals, or is it possible to balance routine and adaptive expertise in teams?

Central to the notion of expertise is the idea of progression: we get better at translation, or post-editing, or terminology work in ways that are measurable and consistent. Expertise is developmental and acquired over time. It progresses from some individually variable initial state of skill performance (e.g. initial translation competence, should one choose to call it that) to successor states that are characterized by increased efficacy. That is, performance becomes faster (e.g. more words translated per hour or day), more accurate (fewer mistranslations), requires less intervention and repair in the downstream workflow (fewer editorial interventions) and is, generally, judged as higher quality on valid translation-specific assessment metrics.
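The diachronic indicators mentioned above (throughput, accuracy, downstream interventions) can be made concrete with a small illustrative sketch. All data, field names and the definition of ‘progression’ below are hypothetical, invented purely to show how an organization might operationalize such indicators across assessment periods; the chapter itself prescribes no particular metric.

```python
from dataclasses import dataclass

@dataclass
class AssessmentPeriod:
    """One performance-review snapshot for a translator (values hypothetical)."""
    words_translated: int
    hours_worked: float
    mistranslations: int       # errors found in review
    editor_interventions: int  # downstream repairs in the workflow

def throughput(p: AssessmentPeriod) -> float:
    """Words translated per hour in this period."""
    return p.words_translated / p.hours_worked

def error_rate(p: AssessmentPeriod) -> float:
    """Mistranslations per 1,000 words."""
    return 1000 * p.mistranslations / p.words_translated

def shows_progression(history: list[AssessmentPeriod]) -> bool:
    """A crude operationalization of 'progression': the most recent
    period is both faster and more accurate than the first one."""
    first, last = history[0], history[-1]
    return (throughput(last) > throughput(first)
            and error_rate(last) < error_rate(first))

periods = [
    AssessmentPeriod(40_000, 160, 48, 30),  # early tenure
    AssessmentPeriod(52_000, 160, 35, 18),  # one year later
]
print(round(throughput(periods[-1]), 1))  # 325.0 words per hour
print(shows_progression(periods))         # True
```

Even a toy model like this makes the chapter’s validity point tangible: if the task set recorded in `AssessmentPeriod` diverges from the tasks actually assigned, the computed ‘progression’ no longer measures what it purports to measure.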

Assessment metrics in organizational settings are often inextricably entangled with personnel actions. Compensation increases and other advancements, such as promotion, may be tied to positive movement in these assessments. This can be a problem in contexts like the language industry, where compensation (especially for freelancers) has been stagnant and increasing the amount of money one makes has come to depend solely on increasing one’s productivity rather than on improving on assessments. Because the ‘reward’ for positive assessment is tied directly to extrinsic motivation (and motivation is a key factor in expertise development), it is very important that assessment instruments are valid, reliable and calibrated to the actual task set being measured – the task set the organization actually assigns to the individual being assessed. We will address the issue of ‘actual’ versus ‘ideal’ task sets more fully later in this chapter.

If we accept the findings of expertise studies (and a myriad of empirical studies support the broad applicability of its results), and if a language service provider is interested in the development of expertise in its workforce, then it follows that the performance level of the translation staff could be raised by changing working conditions to promote the progression of skill. An organization should look for opportunities to ‘promote’ expertise by altering the conditions of work to address (a) motivation and support for improving translation performance (e.g. compensation and reward schemes, mentoring and continuing education); (b) task definition; (c) appropriate difficulty; (d) informative feedback; and (e) opportunities for repetition and correction of errors, as specified above.

3.1.  Ill- and well-defined translation tasks

Translation tasks are probably classic exemplars of ill-defined problems. While the gross parameters of translation assignments might remain stable over time – for instance, demands for translations to be accurate in meaning, adequate to purpose and acceptable to target audiences – the specific skopos of any given translation commission necessarily introduces significant variation. There is inherent or potential variation in subject area, text type, client instructions, function and purpose, nature of the target audience and the like. The more these areas of variation are held constant, the more likely it is that routine expertise can develop. This is at least partially why translators specialize and why language service providers hire subject specialists – and, further, why some companies like to retain translators who ‘know’ their customers, local terminology and specific document style-sheets.

A constantly shifting task set means that the nature of the problem to be solved constantly shifts and, as a result, the ‘task definition’ becomes amorphous and ill-defined. What set of skills is actually being practised? It may all be ‘translation’ of some kind, yes, but, as has been argued in the literature (Shreve 2002, 2006), there may be no ‘generalized’ translation expertise, due to the inability to develop the requisite skills to deal with all the possible influencing variables: text type, subject domain, translation task, context and mode of work and translation technology, to name a few. However, even if there is no generalized translation expertise, it could be possible to configure task sets in a deliberate and dynamic way to foster adaptive expertise.

Job descriptions (or similar documents) are a common means of specifying the task sets associated with an organizational role – such descriptions detail the elements or components of practice (e.g. ‘what’ needs to be done). To a great extent, a job description is a specification of diagnostic tasks and sequences that define a particular position, differentiating it from others. Such descriptions are the basis for hiring (they determine the requisite skills the successful recruit should have) and they provide the basis for performance review of incumbents. Tyler (2013) calls the job description ‘the mother of all HR processes’, arguing that ‘everything from recruitment and training to performance evaluations and compensation all stems from that document’. Muskovitz (2011) details the essential elements of a description: summary, requirements, functions and position information. He argues that the ‘listing of job functions’ element is the most critical: ‘this section provides the basis for most of the employment decisions that are made concerning the employees in this position’.
The requirements section is also important, integrating with the listing of duties, since it specifies required credentialing, previous experience and the like – an attempt to match pre-existing candidate competences with the demands of a particular set of tasks. Thus, from an expertise point of view, a job description provides some important information by detailing exactly what an individual is being asked to do – namely, the nature and extent of the task domain. In the language industry context, studying job descriptions would certainly be a starting point for understanding the task domain.

Now, one of the problems with studying the language industry in order to improve our understanding of the way expertise develops in situ, so to speak, is that job descriptions, ubiquitous as they are, may not provide an accurate depiction of the tasks actually performed. They may represent ‘idealized’ characterizations of the tasks or functions associated with a position. This is important, because if expertise develops over the course of ‘actual practice’ as opposed to ‘ideal practice’, then the organization’s understanding of the task set increasingly diverges from reality. Correspondingly, the validity of the instruments used to measure progression and performance quality will become compromised: they will not be measuring what they purport to measure. This makes it imperative for those who study work in industry settings to be clear about how they have determined the specific elements of the task domain under study. Job descriptions alone can only be a starting point.

A related issue is that actual practice in the language industry is by definition highly situational. Job descriptions usually reflect a synchronic ‘snapshot’ of a task set at a given point in time (generally at hire, but perhaps also at other points in time). Generally, the description doesn’t change unless the position changes – but is this an accurate reflection of how task sets actually evolve? If a person is in a position over a long period of time, many elements of the job are likely to change – for instance, in the language industry, the translation technology and tools available to perform tasks. Because the technological context of work in the language industry has been quite dynamic, it seems important to describe the responsibilities and functions of a job in a more dynamic, diachronic way. This makes it all the more imperative not only to perform detailed, ethnographic descriptive studies of actual organizational practice but also to take care to look at how a job’s task set changes over time. See Risku (2017) for a useful discussion of an ethnographic approach to translation and cognition that could be applied to the study of translation expertise and to disentangling the facts of actual versus ideal practice.

3.2.  Routine versus adaptive translators Given that translation tasks are intrinsically highly variable, it is quite easy for the tasks assigned to translators to become ill-defined. The repeated allocation of highly variable task assignments could (unwittingly) work against the development of routine expertise. If expertise development is an overt organizational goal, a more fruitful approach would be to explicitly and systematically control a longer-term sequence of work assignments in a way that builds both routine and adaptive expertise. As mentioned earlier, the goal could be balancing both kinds of expertise in individuals or, alternatively, distributing routine and adaptive expertise differentially across teams or other organizational units. The organization must initially assign tasks that are sufficiently circumscribed (e.g. well defined), so that routine expertise could develop (assuming other conditions of practice are met). For instance,

Professional Translator Development from an Expertise Perspective


rather than assigning a more or less random selection of general assignments over multiple domains, text types or translation modes, a project manager or department head would deliberately select assignments targeting specific areas of skill development. This might allow improvement in skill along a number of more ‘specialized’ dimensions. If skill in this work context progresses to the desired performance level, then organizations could introduce variability and increasing difficulty in the task assignments, being careful to leverage existing knowledge and skills, but introducing enough variation to promote innovation – at least for certain individuals. Thus, once routine expertise is built, adaptive expertise becomes the objective. It is also possible, and this is sheer speculation, that an optimal team needs both routine and adaptive experts, so not everyone might be targeted (or suited) for the same kind of development. Now, realistically, this level of control over work assignments may not be feasible – and may be at odds with the practical necessities of the schedule- and budget-driven work contexts characteristic of the highly projectized language industry. And, of course, if the organization, apropos of the points made earlier, has not really attempted to reconcile ideal and actual practice, capture the dynamism inherent in actual work or appreciate the utility of controlling work assignments over the longer term, then it is unlikely that it could effectively ‘manage’ work assignments so as to cultivate routine, much less adaptive, expertise in individuals or teams.

3.3.  Translation team expertise

Much translation work in the language industry is performed as part of a larger team – sometimes physical, but very often virtual. Thus, language service providers should be invested not only in improving individual performance but also in team performance. If this is a goal, then the implementation of project management becomes a critical element, especially with regard to communication plans, scope documents and other instruments that promote feedback, interpersonal interaction and the sharing of requisite skills. The development of a shared ‘mental model’ of translation work, and of the way a translation project is structured and operates, is a critical element of team expertise. Project managers have to make team expertise an explicit goal and build project infrastructures that support it by providing opportunities for dialogue and collaboration. Research into how successful language industry projects are at building expertise might prove very illuminating and provide useful insight into how project management and expertise development


The Bloomsbury Companion to Language Industry Studies

intersect. That research has to proceed from a thorough understanding of how translation teams, including virtual ones, operate (see Rodríguez-Castro 2013).

3.4.  Language industry: Constraints on expertise

There are a number of constraints on expertise development, falling generally, as indicated earlier, under the categories of motivation, resources and effort. While these constraints were originally described with reference to individuals, in the organizational context there is a constant interplay between what the organization provides to the individual to mitigate these constraints and the organization’s attitude towards the development of individual or team expertise.

3.4.1. Motivation

An organization’s compensation and benefit policies are probably the major motivating factors in many organizational work contexts – they are ‘extrinsic motivators’. Extrinsic motivation is based on the desire to obtain external ‘outcomes’ (pay, promotion, fringe benefits) associated with the work. Intrinsic motivation, on the other hand, has its basis in the value of the work itself; the individual values the work, and this appreciation provides an impetus for performance improvement. Intrinsic and extrinsic motivators can interact in a synergistic way, but the organization has to plan well to make this happen (Amabile 1993). In language industry contexts, whether work is in-house or freelance probably plays a significant role in the nature of motivation and in its ability to act as a spur for performance improvement. We also know from research into the differences between routine and adaptive expertise that intrinsic motivation assumes a greater role as the ability to innovate and be creative emerges. Kenneth Thomas (2009) cites feelings of meaningfulness, choice, competence and progress as the most important influences on intrinsic motivation in the workplace. This implies that organizations have to be aware of any factors that could stifle the individual’s appreciation and enjoyment of the work. Studies of extrinsic and intrinsic motivation in the language industry could be very revealing in this regard; we cite, for instance, the work of Rodríguez-Castro (2016) on translator job satisfaction and those elements of the work context that promote or hinder positive attitudes towards translation jobs. For instance, Rodríguez-Castro points specifically to task autonomy and high levels of feedback as predictors of job satisfaction.


3.4.2. Resources

The nature of the work context also most likely affects the nature of the resources available to support performance, with freelancers likely having access to fewer physical organizational resources, but more virtual ones. In the language industry context, resources might include opportunities for mentorship and institutional learning, access to software and reference materials, access to equipment and facilities and so on. In the context of adaptive expertise, access to opportunities to interact in a network of practice with other practitioners is limited for those who do not work on-site – this could also be seen as a resource constraint. Understanding how best to leverage the technology of virtual teams might ease this particular resource constraint to a great degree. However, the single most significant resource constraint in the context of the translation workflow, and, indeed, in all project-driven work contexts, is probably ‘time’. Projects are explicitly schedule driven. ‘Time is money’ in project contexts, and time/schedule management is historically oriented towards reducing the time expended on tasks and on the project overall. This management objective militates against providing the time required for expertise development.

3.4.3. Effort

While time resources are probably always under significant pressure in language industry contexts, the same pressures that push against having ‘enough time’ to perform tasks adequately (much less well) also operate against expending time in practice. Deliberate practice includes requirements for feedback, error correction and repetition of tasks. Organizationally, supporting a ‘practice-enriched’ work environment would also require allowing the time needed for the extra effort that would be involved managerially in scheduling work, assessing task performance in a meaningful way, providing relevant feedback and so on. From an individual perspective, deliberate practice, if it is to be effective, is strenuous. It is ‘effortful’ in a cognitive way, requiring attention and focus for sustained periods (Omahen 2009: 1272): ‘From virtuoso musicians to elite athletes, top performers spend 4 to 6 hours daily in intensely focused, deliberate practice. Above this level, concentration and performance levels drop off and diminishing returns are received from time invested.’ Of all the constraints, this one would seem the hardest to meet in schedule-driven work contexts. Performance-enhancement objectives will likely be in competition with productivity-enhancement and profit objectives – since some of the effort of the individual engaged in deliberate practice (and the
organization supporting it) could be considered an investment in the future and not immediately profitable or productive.

3.5.  Organizations and the problem of assessment

The basis for providing ‘motivators’ – especially extrinsic ones – is generally associated with the organization’s appraisal of an individual translator’s level of skill/quality of performance and how that plays out in connection with perceptions of value in relation to general organizational goals or specific project goals. The same can be said about the level of resources made available to develop individual or team performance, or the amount of time and effort the organization is willing to invest in practice that is not ‘immediately productive’. There are certain expectations of performance at hire and at various milestones during an individual’s tenure in an organization. Generally, an organization will have some sort of performance standard, an institutional tool that integrates some specification of the critical elements associated with a task with a set of performance thresholds, requirements or expectations that have to be met at any given performance review. A good example of a performance standard in the language industry is contained in the National Standard Guide for Community Interpreting Services (HIN 2007). The document lays out in some detail the main responsibilities of community interpreters, but, crucially, specifies ‘forty-seven standards of practice interconnected with eight ethical principles’. These are explained in enough detail that they could be used as a basis for assessment and measurement of practice. In an ideal organizational world, performance on each critical task would be ‘objective, measurable, realistic, and stated clearly in writing’, with each element’s standard ‘written in terms of specific measures that will be used to appraise performance’ (OPM 2017). In the context of translator performance, the specific arena of practice we are concerned with here, there is no broadly accepted performance standard.
Outside of the health interpreting community, there seem to have been precious few attempts to create a comprehensive performance standard – and perhaps it is not feasible to do so. An interesting recent discussion by Koby and his colleagues (2014) illuminates the scope of the problem in our profession. In ruminating over the results of a wide-ranging discussion of translation quality (occurring over several articles), the authors conclude: ‘Given our own lack of consensus on defining translation in the first article and defining quality in the second
article, readers should not be surprised that we have not reached consensus on a definition of translation quality’ (Koby et al. 2014: 416). Further complicating the issue is this: a job description details the duties associated with an organizational role; it details tasks that must be accomplished. However, task outcomes in translation are target texts – ‘products’. In language industry contexts, it may be that too often we assess the success of performance by assessing products. Thus, there arises a process-product conundrum. We specify what the translator ‘should do’, but we cannot agree on whether they have ‘done it’, as evidenced by the target text. This has a lot to do with our inability to assess translation quality much beyond the recognition and detailing of errors and textual shortcomings. Assessment of product is not unimportant, and indeed the language service provider cannot ignore it, but other assessments that get to the heart of the efficiency and efficacy of the translation or localization process are also needed (see Läubli 2014 and Martinez-Gomez et al. 2014 for relevant research on how these sorts of assessments might be accomplished). As a translator develops expertise (assuming they have the support to do so), basic errors and textual flaws begin to disappear at a certain level of skill. If we want to track the trajectory of expertise, we have to know not just what is ‘wrong’ with a text, but also, after a certain point, what is ‘right’ with it. Some aspects of performance improvement do not leave a trace in the product – for instance, time on task or effort expenditure – and will not emerge in performance assessments unless the organization looks for them. Fortunately, it is possible with many of today’s translation technology tools to capture some of this data. From a team perspective, process improvements could also include greater dissemination of a shared model of work, improved communication and so on.
Performance assessment instruments need to address both product and process improvement for individuals and teams. Comprehensive quality approaches, harking back to Koby’s discussion, may be impossible or simply too difficult to promulgate – perhaps because the nature of the task is impossibly heterogeneous. If so, then it becomes necessary to develop standards that are purely local in nature. In other words, just as a job description is specific to a position, the standards and assessments are necessarily local to organizations. But there is also the question of whether the difficulty of this task, even at the local organizational level, is too great. Can we produce performance standards with relevant measures and assessments that reflect the realities of expertise development? Are they within the scope of most language service providers’
capabilities – or within the bounds of their organizational interest? This leaves us with the possible conclusion that, in the absence of adoptable practice-wide standards, there may be no real incentive for producing the documents and measures needed locally. Professional development is tied up with the notion of ‘getting better’ at one’s job. This should entail getting better at the discrete duties the job is composed of – duties that a practice standard and assessment system should document and evaluate. The results of assessment should track a trajectory. We know, given the so-called effort constraint, that the road to expertise takes time and sustained effort. In stable, supportive organizational environments, the long time it takes to develop expertise used to build up a kind of ‘reservoir’ of knowledge and skills in long-term employees: so-called ‘institutional wisdom’. This may be difficult to foster in professional contexts where both the objects (domains) of the task and major components of the task specification (for instance, the technological parts) change rapidly. Finally, of course, there is the problem that in such contexts it may not be ‘worth the organization’s while’ to concern itself unduly with long-term professional development. The effort to understand and support practice, to motivate it and nurture it, might not be justified from a return on investment (ROI) perspective.

3.6.  What can we learn – the bottom line?

In the introduction, we assumed that the language industry has developed at least implicit professional practice and performance models. Research into these models should shed some light on how the industry currently systematizes professional practice (if it does so at all) and how industry conceptions of tasks, as elements of job descriptions and performance expectations, feed into performance assessment models. We may find robust practice and performance models whose practical application can unveil how important elements of expertise play out in the workplace. On the other hand, we may find that expertise models presume certain working conditions (e.g. for deliberate practice) that are vanishingly rare in practical work situations. For instance, research into performance models used in the industry should focus, at a minimum, on whether or not the underlying conditions that theoretical models presume for expertise progression are present in the workplace or are accounted for in workplace practice models. Their presence or absence may provide important information about the intersection of theoretical understandings of expertise and its development with the practical applications of performance
assessment. So, while we may want experts in our employ, it may, quite practically, be difficult to provide them with the environment in which they might naturally arise – however benign our personnel management objectives are.

4.  Informing the industry through research

Studying the language industry from an expertise perspective could produce case studies where we find that, indeed, individual and team expertise can be motivated and supported. Studying the specific conditions of practice of those exemplars, including the extant set of constraints on motivation, resources and effort, could yield a specification of the kinds of workplace policies that might promote (or, alternatively, inhibit) the progressive development of translation knowledge and skills. One way the expertise model could contribute to expertise development in the language industry is by emphasizing organizational understanding of the ‘actual nature’ of any individual’s or team’s work and ensuring that this understanding is updated as the nature of the work changes over time. Documentation of tasks and task sets has to be accurate and dynamic if it is to provide the basis for understanding quality of performance and its progression. An expertise perspective could help organizations distinguish between idealized conceptions of work (as, for instance, in job descriptions and personnel documents) and those based on direct observation and documentation of the work as it actually occurs in context. The lens of expertise should also provide greater insight into performance assessment. First, it is clear that valid assessments of the quality of performance – and the instruments used to provide those assessments – are highly dependent on the accuracy of the organization’s understanding of the actual nature of work. Performance standards and assessment instruments succeed or fail to a great degree depending on whether they address the ‘critical elements’ of the tasks people perform and whether they can accurately detect progression. The deliberate practice model underscores the key role that the context of work or practice plays.
The expertise perspective provides a means of evaluating workplaces as ‘expertise-friendly’ or not. Does the context motivate or support the kind of work engagement that promotes performance improvement? Is the task set well defined (well understood and stable) and does its level of difficulty and scope change appropriately over time (e.g. to promote both routine and
adaptive expertise)? Does the communication structure of the organization and/or project promote relevant feedback – specific enough and timely enough to produce improvement? The right kind of practice over the requisite amount of time can produce beneficial cognitive shifts in individuals – improving the quality and consistency of their work. Improvements are not only detectable in the quality of the product – target texts, for instance – but also arise in the mental and social processes of individuals and teams. A better understanding of deliberate practice and the changes it precipitates should inform performance standards and instruments – detailing the nature of the improvements that should be objects of organizational interest. Practitioners can become progressively more adept not just at well-defined, but also at ill-defined problems. Expertise studies give some insight not only into how organizations can structure the responsibilities associated with work assignments but also into how those assignments can be varied systematically over time, both to introduce greater difficulty and to promote innovation and cognitive flexibility. In this way we can promote not just routine, but adaptive expertise. There are ways to promote both individual and team expertise by altering the conditions of individual and team work. In team contexts, appropriate recognition of performance development goals should be reflected in project planning, in work assignments and in other relevant aspects of project management. That said, even if the context of work promotes expertise, there are constraints on its development in the form of motivation, resources and effort. Language service providers, if they buy into the value of expertise development (and this is by no means a given), can use the ‘constraint model’ to understand how they can manipulate motivators, resources and time (for engaging in deliberate practice) to promote expertise. This may be quite difficult.
In the language industry, one of the most prevalent work models involves freelancers working either independently or in virtual teams. Freelancers will be especially impacted by expertise constraints, and employers wanting to develop this part of their workforce will have to pay particular attention to their lack of control over the freelancer’s work context.

5.  Concluding remarks

The expertise model, more than anything else, presents us with a framework for looking at how consistent, sustainable improvement in the quality of work
emerges. Growing expertise is accompanied by improved work outcomes or products and more efficient work processes. Developing expertise seems like a ‘no-brainer’ as an organizational objective. However, just because promoting expertise could measurably improve translations and the efficiency of individuals and teams does not mean that it makes organizational sense to do so. As mentioned earlier, expertise has to be seen as a kind of organizational investment and, as such, a cost-benefit analysis has to apply. Does the cost of studying task requirements, addressing workplace conditions, providing additional feedback mechanisms, gradating and controlling task progression and all of the other aspects of creating an expertise-friendly work environment justify the expected beneficial return? There might be some important evidence to be garnered from looking at turnover rates in the language industry among translators and especially project managers. Short (from an expertise point of view) professional life cycles likely point to underlying economic and work-environment factors that operate to obstruct expertise. Some people will drop out of the language workforce entirely, while others will just switch employers. While it is possible to develop expertise over the course of tenure with multiple employers, the instability and variation in the nature of the work certainly make it more difficult. Further, there may be, at least in the language industry as it is now (and this may have as much to do with clients as with service providers), a fundamental contradiction between the ideal of becoming ‘consistently superior in the task domain’ and the goals of profit-driven organizations. One of the most important questions is, in fact, whether or not there is any practical utility to applying notions of expertise and deliberate practice to organizational contexts in the language industry. The expertise notion of ‘expert’ may or may not be useful.
The notion presumes an endpoint of performance whose achievement may yield little practical benefit to the organization in terms of profit, productivity or quality. Engendering a few experts in an organization may be of less value than supporting a virtual workplace full of productive professionals generating texts that, while not ‘consistently superior’, are good enough. Are the requisites for effective deliberate practice achievable in the language industry? And, even if they are achievable, would the projected benefits of such practice warrant the likely costs? Also, the time frames required for achieving the ‘consistently superior performance’ associated with expertise may be too extended. In many organizational contexts, 10,000 hours is a fairly long tenure: it amounts to roughly five and a half years of sustained, effortful practice for five hours every day. Even that may not seem long – but if one can schedule ‘sustained effortful practice’ for only a few hours a week, then it takes much, much longer to develop expertise. Even if one has that longevity in a position, has the task domain remained constant during that time? In an industry dominated by change and founded on the extensive use of contracted freelancers – whose working conditions are notoriously variable – when would an effective expertise acquisition scenario be likely to play out? It may be that the ultimate contribution of expertise theory to practical organizational professional development schemes is a tangential one. There may be elements that it would be useful to borrow – for example, parts of the deliberate practice model to spur performance progression. But it may also be true that there is no strong affinity between the expertise framework and the practical constraints of work in the language industry, and, consequently, there may be no short-term organizational impetus to adopt it. W. Edwards Deming (1986: 269) once wrote that ‘pursuit of the quarterly dividend and short-term profit defeat constancy of purpose’. Deming’s point is that organizations should also look to the longer term, focusing on business longevity and constant improvement, rather than myopically on just next quarter’s bottom line. Short-term thinking in corporations has some significant drawbacks (Sampson 2016):

When firms focus on the short term, those firms steer profits to shareholders immediately instead of spending money to improve productivity, the greatest driver of economic growth for both companies and our economy. They spend less on research and development for the next great products and services, less on capital spending to improve manufacturing efficiency, less on employee training, and less on environmental and community stewardship.
It’s fair to say that a short-term perspective has the potential to undermine the traditional growth engines of the American economy and bankrupt our future.

If information-based organizations in the knowledge economy, such as language service providers in today’s language industry, were to plan over the longer term, it would become obvious that their primary asset is the combined expertise of the knowledge workers they employ. Nurturing and developing that expertise should be considered an investment in the future, even if it is not immediately profitable. Investment in expertise is the knowledge economy’s equivalent of capital spending – it is the only path to improved productivity, quality and innovation in language services.


References

Amabile, T. (1993), ‘Motivational Synergy: Toward New Conceptualizations of Intrinsic and Extrinsic Motivation in the Workplace’, Human Resource Management Review, 3 (3): 185–201.
Chief Learning Officer (2016), ‘Vanguard Devotes Year to Learning Innovation’. Available online: https://www.clomedia.com/2016/05/16/vanguard-devotes-year-to-learning-innovation/ (accessed 10 August 2017).
Deming, W. E. (1986), Out of the Crisis, Cambridge: Massachusetts Institute of Technology.
Ehrensberger-Dow, M. and A. Hunziker Heeb (2016), ‘Investigating the Ergonomics of a Technologized Translation Workplace’, in R. Muñoz Martín (ed.), Reembedding Translation Process Research, 69–88, Amsterdam: John Benjamins.
Entin, E. E. and D. Serfaty (1999), ‘Adaptive Team Coordination’, Human Factors, 41: 312–25.
Ericsson, K. A. and N. Charness (1997), ‘Cognitive and Developmental Factors in Expert Performance’, in P. Feltovich, K. M. Ford and R. R. Hoffman (eds), Expertise in Context: Human and Machine, 3–41, Cambridge, MA: MIT Press.
Ericsson, K. A., R. Krampe and C. Tesch-Roemer (1993), ‘The Role of Deliberate Practice in the Acquisition of Expert Performance’, Psychological Review, 100: 363–406.
Fenton-O’Creevy, M. and S. Hutchinson (2010), ‘Building the Foundations of Professional Expertise: Creating a Dialectic Between Work and Formal Learning’, Learning and Teaching in Higher Education (LATHE), 4 (1): 69–90.
Hatano, G. and K. Inagaki (1986), ‘Two Courses of Expertise’, in H. Stevenson, H. Azuma and K. Hakuta (eds), Child Development and Education in Japan, 262–72, New York: Freeman.
HIN (2007), National Standard Guide for Community Interpreting Services, Health Interpretation Network. Available online: http://www.multi-languages.com/materials/National_Standard_Guide_for_Community_Interpreting_Services.pdf (accessed 3 August 2017).
Koby, G., P. Fields, D. Hague, A. Lommel and A. Melby (2014), ‘Defining Translation Quality’, Revista Tradumàtica: Tecnologies de la Traducció, 12: 413–20.
Kozlowski, S. W. J. (1998), ‘Training and Developing Adaptive Teams: Theory, Principles, and Research’, in J. A. Cannon-Bowers and E. Salas (eds), Decision Making under Stress: Implications for Training and Simulation, 115–53, Washington, DC: APA Books.
Läubli, S. (2014), Statistical Modeling of Human Translation Processes, MA diss., University of Edinburgh, Edinburgh.
Martinez-Gomez, P., A. Minocha, J. Huang, M. Carl, S. Bangalore and A. Aizawa (2014), ‘Recognition of Translator Expertise using Sequences of Fixations and Keystrokes’, Symposium on Eye-Tracking Research and Applications. Available online: https://www.researchgate.net/publication/259450660_Recognition_of_Translator_Expertise_using_Sequences_of_Fixations_and_Keystrokes (accessed 11 July 2018).
Muñoz Martín, R. (2014), ‘Situating Translation Expertise: A Review with a Sketch of a Construct’, in J. Schwieter and A. Ferreira (eds), The Development of Translation Competence, 2–56, Newcastle: Cambridge Scholars.
Muskovitz, M. J. (2011), ‘The Importance of Job Descriptions’, The National Law Review. Available online: https://www.natlawreview.com/article/importance-job-descriptions (accessed 15 July 2017).
Omahen, D. A. (2009), ‘The 10,000-hour Rule and Residency Training’, CMAJ, 180 (12): 1272.
OPM (2017), A Handbook for Measuring Employee Performance, United States Office of Personnel Management, March 2017. Available online: https://www.opm.gov/policy-data-oversight/performance-management/measuring/employee_performance_handbook.pdf.
Paletz, S., K. Kim, C. Schunn, I. Tollinger and A. Vera (2013), ‘Reuse and Recycle: The Development of Adaptive Expertise, Routine Expertise, and Novelty in a Large Research Team’, Applied Cognitive Psychology, 27 (4): 415–28.
Risku, H. and F. Windhager (2013), ‘Extended Translation: A Socio-cognitive Research Agenda’, Target, 25 (1): 33–45.
Risku, H. (2017), ‘Ethnographies of Translation and Situated Cognition’, in J. Schwieter and A. Ferreira (eds), The Handbook of Translation and Cognition, 290–310, Hoboken: Wiley-Blackwell.
Rodríguez-Castro, M. (2013), ‘The Project Manager and Virtual Translation Teams: Critical Factors’, Translation Spaces, 2 (1): 37–62.
Rodríguez-Castro, M. (2016), ‘Intrinsic and Extrinsic Sources of Translator Satisfaction: An Empirical Study’, Entreculturas, 7–8: 195–229.
Sampson, R. C. (2016), ‘Short-term Thinking in Corporate America is Strangling the Economy’, Vox. Available online: https://www.vox.com/the-big-idea/2016/10/3/13141852/short-term-capitalism-clinton-economics (accessed 10 July 2018).
Schwartz, D. L., J. D. Bransford and D. Sears (2005), ‘Efficiency and Innovation in Transfer’, in J. Mestre (ed.), Transfer of Learning from a Modern Multidisciplinary Perspective, 1–51, Charlotte, NC: Information Age Publishing.
Shreve, G. M. (2002), ‘Knowing Translation: Cognitive and Experiential Aspects of Translation Expertise from the Perspective of Expertise Studies’, in A. Riccardi (ed.), Translation Studies: Perspectives on an Emerging Discipline, 150–71, Cambridge: Cambridge University Press.
Shreve, G. M. (2006), ‘The Deliberate Practice: Translation and Expertise’, Journal of Translation Studies, 9 (1): 27–42.
Thomas, K. (2009), ‘Technical Brief for the Work Engagement Profile’, Psychometrics. Available online: https://www.psychometrics.com/wp-content/uploads/2015/08/wep_tech_brief.pdf (accessed 10 July 2018).
Tyler, K. (2013), ‘Job Worth Doing: Update Descriptions’, HR Magazine. Available online: https://www.shrm.org/hr-today/news/hr-magazine/pages/0113-job-descriptions.aspx (accessed 20 June 2017).
van den Broek, T. (2007), ‘How Experts Reason During Modeling An Ill-defined Task: An Exploratory Study’, MA diss., University of Twente, Enschede.

178

9

Training and pedagogical implications

Catherine Way

1. Introduction

Training language services professionals, and how they are trained (pedagogy), are key to the industry's future. In this chapter we present a wide variety of basic concepts and factors affecting training1 and pedagogy for language industry studies, with particular reference to translation studies (TS). Central to this topic are the following key concepts in the field.

1. Translation pedagogy: What are the main challenges for translation pedagogy?
2. Translator training or translator education: Faced with the vocational versus academic dichotomy, what should higher education (HE) institutions include in their translation courses? What does the industry expect them to teach?
3. Curriculum design, industry requirements and external constraints: How much freedom are HE institutions given to design optimal courses, and with how much flexibility? Should graduates have a basic grounding or be prepared for specific profiles? Should they be prepared for the local, national or international market? What can the industry contribute to training and continuous professional development (CPD)?
4. Translation competence or translator competence: What is translation or translator competence?
5. Assessment/evaluation and quality assessment (QA): Can HE institutions, while complying with academic assessment criteria, also provide training in evaluating translations and QA?
6. Translation, translators and self-concept: What does the future hold for translation? How does translator self-concept shape the profession?


The Bloomsbury Companion to Language Industry Studies

1.1. Translation pedagogy

Pedagogy is the discipline which encompasses the theory and practice of teaching and education. Over the last twenty years, translation pedagogy has drawn upon theories of education and of TS, embracing many other fields (psychology, sociology, technology, etc.), to emerge as a dynamic field. Piotrowska and Tyupa (2014) offer an overview of translation pedagogy, reflecting a wide range of pedagogical aspects; some of the key issues and challenges for the field are addressed below: the vocational/academic dichotomy, curriculum design, translator competence (TC), assessment and translator self-concept. Obviously, how trainers teach, including student–teacher interaction and the classroom/instructional environment, and how they evaluate are fundamental elements of pedagogy. More recently, translation pedagogy has addressed how course content and objectives can be grounded in significant contexts by promoting learning strategies that encourage a cognitive learning process. This reflects the dynamism of the field, which has evolved from Kiraly's socio-constructivist approach (2000) to a postpositivist approach where emergence is key (Kiraly 2015). The questions raised in this chapter all impact on, and have implications for, translation pedagogy, as we will see below and in the research described in Section 2.

1.2. Translator training or translator education: The vocational/academic dichotomy

Education, particularly as the mission of HE, aims to prepare critical, responsible, creative citizens capable of solving new problems and of constructing new knowledge throughout their lives, as they have learnt to learn (often referred to as whole-person education), in line with the World Declaration on Higher Education for the Twenty-First Century: Vision and Action and Framework for Priority Action for Change and Development in Higher Education (UNESCO 1998). While the preparation of translators has progressed through different stages at different rates throughout the world, it would be true to say that a major change can be attributed to the shift, particularly at universities, from more academic studies to more vocational studies (as found in other HE institutions, such as the Fachhochschulen or Universities of Applied Sciences in the German-speaking world and the polytechnics found elsewhere) from the mid-twentieth century onwards.2 This shift is obviously connected to increasing student demands
for courses leading to employability and to the transformation of education into a commodity. Several authors have discussed this question (Kiraly 2000; González-Davies 2004; Bernardini 2004: 19–20; Kearns 2006, 2008; Calvo Encinas 2011) and have examined the distinction between the two, concluding that training is the process of accumulating chunks of knowledge in a specific field (such as language learning), whereas education has a much wider scope. Both terms are used loosely, however, with a recent tendency to revert to using 'translator education' in HE institutions, in line with the World Economic Forum's emphasis on the need to teach a broad range of skillsets (World Economic Forum 2017). Despite being aware of the distinction between training and education, HE has recently been operating under greater economic restrictions while struggling to comply with greater productivity requirements and the constraints imposed by impact factors and rankings, which limit its ability to implement changes in courses. HE institutions therefore continue to face the dilemma between preparing whole, critical-thinking citizens and providing professional training to meet industry demands. To date, the tendency seems to have been to provide a general grounding at undergraduate level and specialized training at postgraduate (Master) level. As Mossop (2003: 20) has indicated:

In my view, the function of a translation school is not to train students for specific existing slots in the language industry, but to give them certain general abilities that they will then be able to apply to whatever slots may exist 5, 10, 15 or 25 years from now. In other words, I think university-based translation schools must uphold the traditional distinction between education and training. They must resist the insistent demands of industry for graduates ready to produce top-notch translations in this or that specialized field at high speed using the latest computer tools.
The place for training is the practicum and the professional development workshop.

If undergraduates are uncertain of the specific career path they wish to pursue in translation, it is extremely unfair, and economically unfeasible, to expect HE institutions to provide training in all the possible profiles of the language industry. Pym (2009) discusses this point and rejects the widely held myths that university courses focus on theory rather than practice and professional skills, or that translator trainers have no professional experience and are out of touch with market requirements (see Section 2).


1.3. Curriculum design, industry requirements and external constraints

It is probably true to say that most formal translator training is largely undertaken by HE institutions throughout the world. The Translator Training Observatory List3 of institutions offering translation courses, provided by the Intercultural Studies Group of the Universitat Rovira i Virgili in conjunction with the European Society for Translation Studies (EST) and the International Federation of Translators (FIT), lists 67 countries and some 433 institutions. The constant creation of new courses means that the list can never be exhaustive (Yemen4, for example, does not appear, but already has some translation programmes and is currently designing the curriculum for its first public translation degree). Undergraduate and postgraduate courses are created with a curriculum which, ideally, should be the result of a dynamic process of identifying needs, developing curricular goals and objectives, planning and organizing, designing, implementation, review, evaluation and reorganization by all the stakeholders involved. In reality, political, social, economic and cultural constraints, combined with lesser or greater flexibility to introduce changes to meet social demands, determine the curriculum (Kelly 2005: 61–3, 2008, 2017; Calvo Encinas 2010, 2011; Chouc and Calvo Encinas 2011). Academics, however, have become quite adept at designing generic course outlines that comply with administrative requirements while allowing them to modify the content as theory, practice and technology advance.
Political decisions have far-reaching effects, as witnessed by the creation of the European Higher Education Area (EHEA) and the Bologna Process5 in Europe, which modified course length and structures and encouraged competence-based, student-centred learning.6 Although not binding, the Dublin Descriptors, developed by the Joint Quality Initiative, are suggested cycle descriptors for the framework for qualifications of the EHEA. They indicate generic outlines of typical expectations of achievements and abilities at the end of each Bologna cycle.7 Constant reforms of education systems also often influence whether practicums and professional workshops are found at undergraduate or postgraduate level. Likewise, many countries have developed plans, such as the EU Horizon 20208 or the Outline of National Plan for Medium and Long-term Education Reform and Development (2010–2020)9 in China, which affect not only courses but also the scope of research. Inevitably, educational reforms and developments in theory and the industry take time to be implemented and rely heavily on funding being available to provide them. In a related context, Pym has expressed concerns about economic factors affecting translator education,10
voicing preoccupation about a growing tendency to include more 'language-neutral credits' in postgraduate courses, which are economically more profitable for institutions but which offer trainees less language-specific training than courses where acquiring language-pair-specific translation skills implies greater cost (Torres Simón and Pym 2017). Many HE institutions would agree that their primary objective is to teach students to translate, although most also include as many other professional skills required by the market as possible. For Mossop (2003: 20) the choice is clear:

So what are the general abilities to be taught at school? They are the abilities which take a long time to learn: text interpretation, composition of a coherent, readable and audience-tailored draft translation, research and checking, correcting. But nowadays one constantly hears that what students really need are skills in document management, software localization, desktop publishing and the like. I say: nonsense. If you can't translate with pencil and paper, then you can't translate with the latest information technology.

Translator training can be found in many forms: as formal education in undergraduate and postgraduate courses, in shorter courses designed to prepare translators for official certification by public institutions and other private enterprises, and in short professional courses, seminars by professional associations, in-house training, CPD, internships or mentoring. Little organized data is available on the training offered by private enterprises or professional associations, although browsing the web provides discrete information. Internships may take place in institutions or be organized between academic institutions and private companies (see, for example, Kiraly and Hoffmann 2016: 67–88). Mentoring scarcely appears at all in the literature or on the web, although it is mentioned by some professional associations, such as the ALTA Emerging Translator Mentorships,11 which connect emerging and experienced translators to work on a one-year project, or ELIA Exchange and the EGPS12 (Astley and Torres Hostench 2017). Nevertheless, some attention has been paid recently to internships and mentoring (Kiraly and Piotrowska 2014), as we will see in more detail in Sections 3 and 4.

1.4. Translation competence or translator competence

A core concept in translator training is translation competence or translator competence (TC), also referred to by TAUS (2018: 17) as talents. Defined variously
throughout TS literature as the knowledge, skills or abilities needed to translate proficiently, this concept has generated numerous models, particularly over the last twenty years. An interesting comparison of four of these models (Neubert 2000; Kelly 2005, 2007; PACTE,13 Hurtado Albir 2017; Pym 2003) can be found in Hague, Melby and Zheng (2011) and Kiraly and Hoffmann (2016: 67–88), while other models (EMT 2009;14 Göpferich 2009) are discussed by Kiraly (2015). Koby and Melby (2013: 189–99) provide an excellent review of the development of TC in TS and of the numerous models available in their search to establish validity in translator certification examinations.15 The question of the use of the terms 'translation competence' or 'translator competence' has been raised by Kiraly (2000: 13), who defines the former as the ability to produce a good-quality target text or translation in a more traditional sense, in contrast to the latter, which is understood as 'joining a number of new communities such as the group of educated users of several languages, those conversant in specialized technical fields, and proficient users of traditional tools and new technologies for professional interlingual communication purposes' (Kiraly 2000: 13), in other words, language service provision. Increasingly, the two concepts are being used as Biel (2011: 164) defines them: 'Translation competence is the ability to translate to the required standard while translator competence covers skills required to function as a professional on the market.' The second is an umbrella term encompassing the first, so throughout this chapter we will refer to translator competence.

1.5. Assessment, evaluation and quality assessment

Another key concept, and often a bone of contention, is the matter of assessment, evaluation or quality assessment (QA) and translation quality assessment (TQA). These terms, again, are often used loosely (Brunette 2000). Assessment and evaluation are most frequently found in translator education and training in varying forms. As HE institutions strive to help students reach their course objectives, they must also provide academically measurable results (Kelly 2005: 130–49). Common forms of assessment found in university education are diagnostic assessment, used to ascertain student expectations, attitudes and levels prior to a course; formative assessment, used to improve individual performance, monitor the acquisition of competences and identify areas for improvement and individual learning needs by evaluating the learning process;
and summative assessment, commonly used to provide benchmarks or grades by substantiating compliance with required standards or levels, or to accredit performance. Orlando (2012), however, prefers to use formative evaluation for assessment at different stages of academic training, summative evaluation for the results of a period of training and normative evaluation for assessment that uses the industry's norms in an attempt to accommodate both internal and external requirements. In this way, he combines a product-oriented and a process-oriented approach. QA and TQA, on the other hand, are used more frequently in industry settings to appraise different steps in the translation process or as 'systems and processes used to help create or maintain quality' (Saldanha and O'Brien 2013: 95). In reality, none of these approaches is mutually exclusive; they can be used separately or concurrently at different points in training. Trainers are also concerned with the quality of their own training. QA research has been undertaken to assess the degree of satisfaction of graduates with their training, in which graduates highlight strengths and weaknesses that can lead to further improvement based on their professional needs (Vigier Moreno 2011). Many HE institutions also incorporate QA surveys not only on graduate satisfaction but also on employer satisfaction with their graduates. Nevertheless, there is always room for improvement and much remains to be done.

The question of judging translations is complex and highly context-dependent, as the array of possible clients, fields, translation modes and expectations is innumerable. Inevitably this leads us to the question of professional accreditation or certification. Again, both concepts are used interchangeably, although, as Koby and Melby (2013) point out, according to ISO 17000:2004, certification applies to persons while accreditation applies to assessment bodies.
Interestingly, the Australian National Accreditation Authority for Translators and Interpreters Ltd. (NAATI)16 moved from accreditation to certification in January 2018, presumably to comply with this standard. Professional associations and government institutions around the world authorize or validate translators and interpreters to perform certain tasks (official/sworn translation), to guarantee national standards and to promote professional development. A common denominator in both academic and professional literature globally is dissatisfaction with the testing, which is often considered remote from professional practice and in which evaluation is often based on the quantification of errors. We will return to this point in Section 4.


1.6. Translation, translators and self-concept

Besides the academic/vocational dichotomy, the real challenge facing translator training is the concept of 'translation' itself and, consequently, the self-concept of what a translator is. Kiraly (1995: 100) suggested:

Bolaños-Medina (2016: 59) has included self-concept within the broader field of translation psychology, understood as the study of translators as complex individuals, which includes all of the underlying emotional, cognitive, behavioural and social factors involved and their interaction with the translator's professional environment and with the other agents who concur in the translation process. Translator self-concept, then, relates to how translators see themselves in society and how they engage with other agents when providing translation services. The translation profession seems to be facing an identity crisis, caught between the more traditional view of translation and the demands of the language industry, particularly with regard to the technological changes affecting work practices and settings (Ehrensberger-Dow and Massey 2017). Given the technological challenges and current requirements of the language services industry, translation must reinvent itself or face obsolescence. As the TAUS Industry Leaders Forum (2018: 17) has suggested, 'therefore, translation needs to be redefined and the role of the linguist should be repositioned'. A similar case in point is library science, more recently known as library and information science. The relatively limited professional opportunities for librarians and archivists have mushroomed: by taking on board the rapid development of information technology, they have managed to transform themselves into information brokers with countless new profiles, such as knowledge management specialists, consultants or information architects, in both the public and private sectors. Obenaus (1995) had already suggested that legal translators are information brokers, and Hague and Melby have recently proposed a new niche for translators in 'language services advisement'17 at the FIT 2017 conference.


At this pivotal point, perhaps translators, and translator trainers, should consider their role as intercultural, interlingual information brokers and consultants in an attempt to transmit an image which does justice to all the competences they possess and the services they can provide. Translator education, then, can provide TC, critical thinking and, particularly, the ability to continue developing TC and expertise and to monitor one's own performance as a translator in multiple roles (Way 2008: 100). As Kelly (2005: 34–5, 2007: 136) has pointed out, the fact that TC fully matches the highly prized generic competences described as requisites in the Tuning Project18 is an invaluable asset when complying with the language industry's needs.

2. Research focal points

Research into translator training and pedagogy is booming. As mentioned above, the profession, TS and translator training stand at a crossroads where all the stakeholders need to make a concerted effort to assimilate the enormous changes in the field and take steps to regenerate themselves. Undoubtedly, TS and translator-training research benefitted from the shift from more formal translation theories to theories which contemplated the function, the receiver and sociocultural aspects of the process (Newmark 1981, 1988; Nida 1964). TS also owes a huge debt of gratitude to the skopos theory proposed by Hans J. Vermeer (1978) in the 1970s, which widened the mainly linguistic source-text focus held until then to include the importance of the target text and its purpose, as stated in the translation commission. This led to the functionalist translation theory that matured in Germany – first developed by Reiss and Vermeer (1984/2013), Reiss (1971/2004) and Nord (1991, 1997/2018), and later adopted and expanded worldwide. For the first time, the source text became an offer of information to be communicated in the best possible way to the receiver through the skopos, or purpose of the translation commission, which would to a large extent determine the translator's decisions and strategies. The shifting paradigms are traced by Gambier (2016), who points out that, while still to be found in TS research, the linguistic, word-for-word or equivalence paradigm has been shaken, firstly by the paradigm of the cultural turn (Bassnett 1980; Gambier 2016) and secondly by the shift from the written word and texts to the digital, multimodal environment of translating today. Gambier considers that this 'clash of paradigms' (Gambier 2016: 889) can be seen in the traditional
view of translation often held by clients and the view of translation as ‘a process of recontextualization, as a purposeful action’ (Gambier 2016: 890). By questioning the primacy of meaning in the source text, once considered invariable, and requiring the translator to juggle a complex mesh of relationships between the text (in whatever form it may be), the context surrounding the text, the commission and the multiple agents involved in the translation process, the perception of translation has been transformed. As considered above, the critical juncture reflected in the proliferation of new labels to describe and differentiate the new modes of translation and conceptual divergences (localization, transcreation, post-editing, fansubbing, selective translation, translator education, translator training, etc.) cloaks a much more far-reaching existential crisis. As Gambier (2016: 889) puts it, ‘The proliferation of terms designating the linguistic-cultural transformations for which the word translation would once have sufficed is indicative not only of a conceptual disruption but of the communication value being added to the nodes of a burgeoning global network.’ In other words, it is time to rethink what ‘translation’ means and represents for the profession and training. There is such a variety and vast amount of research that we will address only the most prominent areas in current research on translator training and those likely to develop in the future. A dominant dichotomy exists at present between product-oriented and process-oriented research. Both provide ample opportunities to apply findings to training. Product-based research studies translated texts resulting from the translation process and has benefitted from recent technological advances, which have revolutionized multilevel corpus querying and corpus architecture, allowing researchers to broaden their horizons from words and terms to word combinations and phraseology. 
The greatest advances, however, have been made in process-oriented research, which focuses on how translation occurs, a vital element for translator training and trainee evaluation. In the 1980s, think-aloud protocols (TAPs) emerged in translation process research as a new way of understanding how translators worked, by getting them to verbalize their thoughts as they translated a text (Jääskeläinen 2002). Encouraging trainees to reflect on how they translate is vital in training, and TAPs provided a ready means to do so. After some fifteen years, however, research with TAPs became less common as translation process research (TPR) materialized as a descriptive, empirical, experimental approach to TS, thanks to technology (including screen recording, keystroke logging and eye-tracking solutions) which now permits the observation of translational (micro)behaviour. TPR, and other areas of cognitive TS, are advancing at breakneck speed.


When applied to translator training, however, the enormous amount of data produced from this research has not always been found useful in understanding why a translator pauses or returns to a particular segment of a text, as mental processes can only be inferred. Recent research is trying to overcome this difficulty by employing psychology, psycholinguistics and neuroscience to fully understand the translator’s mind during a translation task. Sun (2011) advocates a multi-method approach combining TAPs with other TPR methods such as keystroke logging and eye-tracking as a possible solution. An excellent example of combining approaches is presented by Göpferich (2008, 2009), who uses a multi-method approach to compare translations by novices and professionals in an attempt to develop a model of TC acquisition. Researchers have also created networks such as TREC19 (Translation, Research, Empiricism, Cognition) to further the visibility of current empirical, experimental research on cognition. New ways of framing the research and of analysing and interpreting the data are now emerging. Carl, Dragsted and Jakobsen (2011) have suggested a taxonomy of human translation (HT) styles which is critical for understanding individual trainee translation processes. Shreve, Angelone, Lacruz and Jääskeläinen20 have also used a combination of approaches to discuss cognition and cognitive effort in translation, editing and post-editing, constantly opening new avenues of research by incorporating psychology and expertise theory and questioning the well-established concept of TC models (see Angelone 2010; Jääskeläinen 2010; Shreve, Angelone and Lacruz 2018). Other researchers are exploring ergonomics, anthropology, work processes and psychophysiological aspects. Massey and Ehrensberger-Dow (2011), for example, have taken research outside the controlled classroom situation to the translators’ workplace. 
Initially prompted by an interest in improving training to match the requirements of professional practice, their research has also highlighted areas of interest for the industry, such as 'optimizing performance without detrimental effects on motivation and translator autonomy' and testing 'theoretical models of extended cognition and situated activity' (Ehrensberger-Dow 2014: 379), and has led to later studies on ergonomics and health-related issues in the workplace (Ehrensberger-Dow and Massey 2017).

Translator trainers have been eager to introduce student-centred learning, competence-based training (CBT) (Hurtado Albir 2007), assessment (Huertas Barros and Vine 2016), task-based learning (González-Davies 2004; Hurtado Albir 2007), project-based learning (Kiraly 2005; Way 2009) and authenticity (Nord 1991; Kiraly 2000). A substantial amount of research exists to demonstrate this preoccupation in TS, where the AVANTI research
group21 and the PACTE research group have been particularly active in not only researching the elements which constitute TC but also endeavouring to discover how different competences are acquired in order to improve training. Authors such as Kelly (2005, 2007), Göpferich (2009) and PACTE (Hurtado Albir 2017) have improved upon the original TC models, while insights into specific sub-competences have been researched by PACTE,22 by Gregorio Cano (2012) in terms of intercultural competence, by Huertas Barros (2011) with regard to interpersonal competence, by Way (2008, 2009, 2014a, 2016a, 2017) in respect of interpersonal, intercultural, psychophysiological competence and by Haro-Soler (2018) on psychophysiological competence. The primary motivation for this research is often to improve training, although unexpected results, such as the modified perception of the translation profession achieved in members of the legal profession (Way 2016c), also occur. Despite the fact that their use in the early stages of training is undeniably valuable (Way 2008), the TC models that have dominated research into translator training in recent years have, however, been considered flat or static by Kiraly (2015: 24). Recent pedagogical research has begun to question TC in its current form. This question has been addressed by Kiraly (2014, 2015) and Way (2014a, 2016a), who are concerned about the feasibility of introducing and evaluating all the sub-competences to reach expertise, particularly in later stages of training. Kiraly, from his grounding in education, has inspired trainers worldwide. Since his seminal work in 2000, he has constantly striven to push the boundaries of training to their limits. 
In recent publications (2014, 2015) Kiraly explains the transition from the traditional 'transmissionist-instructionist praxis' (2015: 9) that dominated translator training well into the 1990s (with some vestiges still found today) to the major changes accomplished since his 2000 volume, which proposed a social constructivist approach to translator training. While his work on authenticity has been crucial, his more recent publications are again revolutionizing translation pedagogy (Kiraly and Hoffmann 2016). For Kiraly, the next step forward is into complexity thinking and to progress from 'enaction' to 'emergence' in what he describes as a 'postpositivist worldview' (2015: 10), in line with Risku (2010). Faced with the task of preparing trainees for the complex process of translation problem solving in a 'multi-dimensional context' of conflicting requirements and norms and multiple agents in 'authentic situations of interlingual, intercultural communication' (Kiraly 2015: 11), while meeting client quality expectations, Kiraly (2015: 11) defends 'project-based, learner-centred collaborative translation classes' with authentic, simulated or real translation tasks. One such example can be seen in the use of intra-university

Training and Pedagogical Implications


projects combining translation trainees and law students through immersion, which places trainees in a context in which they are forced to draw upon all their competences simultaneously and systematically in a complex experience of meaningful learning (Way 2016c: 147–50).

Another interesting avenue of research for the future, which may lead to a major shift in translator pedagogy, is the combination of TC and expertise studies (Shreve, Angelone and Lacruz 2018; Way 2014b). This is in line with Kiraly’s view that ‘cognitivism is the predominant theoretical framework for understanding human intelligence and its development’ (2015: 18, 26), although he also proposes broadening translator education research from the quantitative experimental research paradigm to include qualitative case studies (e.g. King 2016). More recent developments have sought ways to address the complexity23 of advanced translator training, which involves nurturing and monitoring the different sub-competence levels of trainees, who each have different learning styles, paces and ways of constructing knowledge (Way 2014a, 2016a), through text selection or by scaffolding decision-making by difficulty. Likewise, emotions, self-efficacy and confidence (Haro-Soler 2018), and the effect they may have on translation task performance, are attracting attention in research on translator training (Rojo and Ramos 2016), with clear indications that TC can improve if attention is paid to these aspects.

One neglected area of research is training the trainers.24 We agree with Kiraly (2000: 6) when he states that there is a need to educate generations of educators who know how to do classroom research and how to design classroom environments that lead to professional competence (Way 2014b), yet little attention has been paid to this vital aspect of translator education. Research into training translator trainers, and its possible applications, represents a vital avenue for the future.
It has been suggested that incentivizing action research within translator education institutions could provide a viable low-threshold basis for systematic trainer (self-)reflection and (self-)training (Cravo and Neves 2007; Way 2016b).

3.  Informing research through the industry

In this section we highlight some of the most pertinent examples of how the industry informs research, as well as opportunities to bridge the gap between research and the industry in the future.


The Bloomsbury Companion to Language Industry Studies

The most significant contribution that the industry could make to research is to allow access to those working there (translators, terminologists, revisers, etc.) and to the enormous amount of data they process on market demands, language combinations, quality, translations, pricing or text types. Some cooperation does already exist, particularly with international organizations and, increasingly, with professional associations and businesses, as outlined below.

The United Nations (UN) and the European Union (EU) are the largest organizations to have sought to strengthen links with research, given the importance of translation to their daily functioning.25 Links to universities have always existed, but they have been strengthened considerably in recent years, providing internships and other forms of cooperation. The European Commission’s Directorate-General for Translation (DGT) also runs programmes and outreach activities, not only to assure its supply of translators but also to promote translation as a profession. Recent examples include the European Master’s in Translation (EMT),26 created after extensive research-based work by the Expert Group,27 and the Visiting Translator Scheme (VTS),28 which offers visits by DGT translators to strengthen student and staff awareness of institutional translation practices. Other initiatives include the ERASMUS network for Professional Translator Training (OPTIMALE), which held regular meetings, conferences and symposiums and published reports on diverse fields of translator training from 2010 to 2013; the Transnational Placement Scheme for Translation Students (AGORA);29 and the European Union of Associations of Translation Companies’ (EUATC)30 agreements with HE institutions to create internships. Other directorates-general have also played a key role in bringing members of the translating and other professions together.
One such case is the European Legal Interpreters and Translators Association (EULITA),31 founded in 2009 under the Criminal Justice Programme of the EU Commission’s Directorate-General of Freedom, Security and Justice with two further spin-offs: the Quality in Legal Translation project (QUALETRA)32 to promote the training of translators who specialize in criminal proceedings and to compare the practices in different member states in relation to translation and to information; and the Training for the Future (TRAFUT)33 project, which has brought translator trainers, translators and legal professionals together. Likewise, Interactive Terminology for Europe (IATE) has created the TERMCOORD project,34 which collaborates with universities on terminology projects and offers trainers and trainees study visits to the European Parliament.


The Translating Europe Forum 2017, in its fourth edition, was dedicated to translator skills and employability, once again creating a space for all stakeholders to interact. The EU has also broadened its horizons and cooperates with the UN in the Pan-African Masters Consortium in Interpretation and Translation (PAMCIT)35 to promote the development of training courses in conference interpreting, translation and public-service interpreting in Africa in collaboration with African universities. Similar initiatives also exist in Asia and the United States.

The UN outreach programme,36 launched in 2007 by the Department for General Assembly and Conference Management (DGACM), aims to work with universities and other institutions involved in the training of language professionals. Its main objective seems to be to prepare candidates for the UN by helping to match candidates’ skills more closely to UN requirements. Besides its unpaid internship programme and paid traineeships, it also provides staff to present seminars and workshops and remote coaching for students from partner universities. Universities also occasionally invite UN language staff to their examination panels, and UN staff may also act as part-time university staff. This obviously benefits those closest to, or in partnership with, the UN.

Besides PAMCIT, the International Annual Meeting on Language Arrangements, Documentation and Publications (IAMLADP) provides a forum and network for managers of over eighty international organizations employing conference and language service providers – mainly translators and interpreters. Their goal is to achieve efficiency, quality and cost-effectiveness in their organizations, and IAMLADP has a Universities Contact Group (UCG)37 to further develop cooperation between international organizations and training providers/universities.
It has also created a task force, the International Annual Meeting on Computer-Assisted Translation and Terminology (JIAMCATT),38 which provides a forum for cooperation, debate and the interchange of expertise in the fields of translation, interpreting, documentation retrieval and computer-assisted terminology. Furthermore, the universities that sign a memorandum of understanding (MoU) with the DGACM on cooperation in preparing candidates for the United Nations Language Competitive Examinations (MoU universities) have held joint conferences since 2011.39

When we turn to the industry beyond international organizations, information becomes more scattered and field dependent. TAUS, created in 2004, has been very active in connecting stakeholders, providing information, courses and tutorials for academia,40 and encouraging debate in conjunction with Proz.com, such as the debate on ‘Do higher education courses prepare translators sufficiently for life in industry?’41 or at the TAUS Industry Leaders Forum (TAUS 2018: 17). Participants include academics, albeit in the minority. Proz.com also offers mentoring, CPD, courses and an education section on its website.42 Some criticism has been raised, however, at the sidelining of freelance translators in TAUS, which is less the case with Proz.com. Furthermore, more direct collaboration is evidently needed, such as that seen in O’Brien (2012), who, thanks to the access provided by TAUS, compared translation quality evaluation in eight TAUS member companies to three publicly available models – the LISA (Localisation Industry Standards Association) quality estimation (QE) model (v. 3.1), J2450 and EN15038 – in order to propose a dynamic quality evaluation model.

Information provided, for example, by the Common Sense Advisory in The Language Services Market: 2017, an extensive survey of industry providers, or in the World Bank Group Translation Unit’s (GSDTR) Translation Business Practices Report (2004), presents vital sources of data for trainers. Professional associations are active in CPD, organizing events and workshops, and have greater possibilities of reaching professional translators to gather information such as that provided in The UK Translator Survey (2017), conducted by the European Commission Representation in the UK, the Institute of Translation and Interpreting and the Chartered Institute of Linguists.

Large companies such as Lionbridge and SDL are obviously businesses, so little information is freely available concerning in-house training. However, Lionbridge has sponsored the Chico Localization Program at California State University and awarded two scholarships to students, while SDL, for example, has some free training available online and gives informative seminars at universities.
Smaller companies also often expand into training, either purely out of interest in providing CPD for graduates or as another source of income. Two examples are the Teneo Linguistics Company in the United States, which recently launched a training website for both in-house translators and the public,43 and the Trágora agency in Spain, a spin-off founded by graduates of the University of Granada, which has a training branch, Trágora Formación,44 offering free and paid content and mentoring. In both cases they have connected with university trainers and have participated in seminars and conferences for trainees. Inevitably, funding plays a major role in cooperation with international institutions, while access to the workplace and to big language data in the industry, along with collaboration with smaller businesses, would boost the scope of research tremendously and consequently have an impact on the industry.


4.  Informing the industry through research

Despite the plethora of research available, much of it does not reach the industry and, even in academia, recent reports45 suggest that articles are rarely read by more than ten readers on average. Criticism is also often raised about the small samples of translators used in research experiments, or the fact that, more often than not, translator trainees are the subjects, thereby limiting the validity of any results (Orozco Jutorán 2001). Kiraly (2015: 26), on the other hand, has suggested that qualitative rather than experimental quantitative research may be more useful in a postpositivist research approach.

However, the industry can be, and has been, informed by research, with collaboration recently on the increase. Among the most prominent fields at the moment are quality and revision, testing/accreditation/certification, work processes and ergonomics. In the search for improved translation quality, process-oriented research has come to the fore. Cognitive research and research in the workplace on effort and post-editing, such as Lacruz and Shreve (2014), Lacruz (2017, 2018) or Läubli et al. (2013), have clear implications for the industry in terms of efficiency and quality in a real translation environment. Revision plays an important role in quality and is another field that has informed the industry. The EU DGT46 Translation Quality Guidelines include references to Mossop (2001) and Parra Galiano (2007) in the DGT 2010 Revision Manual47 for Spanish. Brian Mossop has given revision courses at the UN, and Silvia Parra Galiano acts as an external consultant for the DGT.

The question of testing and certification/accreditation has attracted much interest as the profession has broached the thorny question of evaluation. Several researchers have approached this topic, often in conjunction with professional associations. Testing for international organizations has been under-researched until recently.
In this case Anne Lafeber,48 at the time a reviser at the UN in Geneva, wrote her PhD thesis49 on translation in inter-governmental organizations, with reference to the skills and knowledge they require and how this may affect recruitment testing. Since then, she has continued working in this field both at the UN (in internal, restricted documents) and in her publications (Lafeber 2012, 2013). Accreditation and certification have also come under scrutiny. In Australia, Sandra Hale was commissioned to lead the NAATI Testing (INT) Project50 and provide a report51 on their testing system, while Hayley King has recently presented her PhD thesis (2016) comparing training for the NAATI exam in Australia to training for a degree in translation in Spain,52 using qualitative case studies, as suggested by Kiraly (2015). In the United States, Arango-Keeth and Koby (2003) have raised the question of matching translator training assessment and the industry’s quality assessment. This has led to further research by Koby and Baer (2005) on adapting ATA error marking to training, by Koby and Champe (2013) on professional certification, and by Koby and Melby (2013) on the validity of certification tests, culminating in the current research project Analysis of Errors in Exams from the American Translators Association, led by Isabel Lacruz53 at Kent State University, to examine criteria and consistency in testing.

The PACTE group have undertaken the arduous task of providing a common framework for translation skills in the project Establishing Competence Levels in the Acquisition of Translation Competence in Written Translation (NACT).54 In collaboration with over twenty translator-training institutions, the DGT, EST and the EMT network, the project also includes input from both the public and private translation sectors, translator employers and professional associations. In line with the Common European Framework of Reference for Languages (CEFR),55 the results of this project will provide the training and professional sectors with a common framework which will have major repercussions for training and the industry, particularly when establishing professional and academic profiles or criteria for quality control, providing assessment criteria and, in general, unifying the incongruous panorama that translators currently present.

At the translator’s workplace, Massey and Ehrensberger-Dow (2010) have broken new ground with their research on the demands on language professionals, with studies on technical and instrumental competence (2011) and on ergonomics and possible health issues (Ehrensberger-Dow and Massey 2014; Meidert et al. 2016).
Besides informing training, their findings provide insights into how to improve processes in the workplace, which also inevitably affect training as it aligns with industry requirements.

5.  Concluding remarks

The current climate for translator training and research in pedagogy is exhilarating. Today’s global, multicultural, multilingual and technological information society requires the immediate, constant dissemination of information, and translators have the skills needed to find, select and use information to produce multimodal texts for different languages and cultures with the latest technology.


Nevertheless, the gradual emergence of new professional profiles, the growing need for CPD and the multitude of employment possibilities involve reshuffling the cards that we hold to face the impending challenges. Whether in translator training or translator education, the exchange of training experience across HE institutions and, more specifically, the dissemination of information on including professional practices in the classroom will facilitate bridging the gaps between graduate training and industry requirements. The dynamic research underway in diverse areas of TS corroborates the determination of TS scholars to reach this goal.

Furthermore, greater access to information on the weaknesses the industry detects in graduates would undoubtedly enhance translator training. More information on industry needs, such as the minimum requirements and standards the industry expects of novice translators, would be an excellent first step. Although increased networking between the industry and trainers is emerging, a more structured approach and a clearer picture of industry requirements can only improve training and ultimately benefit the industry. Translator training must also consider courses combining translation with other fields (law, technology, etc.) to provide graduates with the skills and knowledge to meet market demands and the requirements of new professional profiles. Beyond doubt, a crucial role will be played by providing best-practice CPD for trainers.

In conclusion, many training programmes endeavour to provide graduates with the skills required by the industry, despite the numerous external constraints limiting their freedom to do so. The samples above of the vast amount of research underway, and the examples of initiatives to find focal points where academia and the industry can intersect, are barely the tip of the iceberg of the future of the dynamic language industry.
Besides contributing to the industry, these efforts to network with all the players involved will undoubtedly lead to closer cooperation and improved training in the future.

Notes

1 In line with the chapter title, we will use ‘translator training’ throughout, except when we discuss the academic/vocational dichotomy.
2 For an overview of changes in education, see https://www.britannica.com/topic/education/Education-in-the-20th-century (accessed 18 July 2019).


3 Available at http://www.est-translationstudies.org/resources/tti/tti.htm.
4 For a review of the situation in Yemen, see http://eprints.usm.my/31788/1/Eman_Mohammed_Mohammed_Ahmed_Barakat.pdf.
5 See http://ec.europa.eu/education/policy/higher-education/bologna-process_en.
6 See the Dublin Descriptors describing the expected outcomes after completing a curriculum cycle: http://www.promeng.eu/downloads/training-materials/dublindescriptors/3%20cycle%20descriptors.pdf (accessed 18 July 2019).
7 http://www.aic.lv/bolona/Bologna/Bergen_conf/Reports/EQFreport.pdf (accessed 18 July 2019).
8 See https://ec.europa.eu/programmes/horizon2020/en/area/funding-researchers.
9 See http://uil.unesco.org/fileadmin/keydocuments/LifelongLearning/en/china2010-abstract-lll-strategy.pdf.
10 Pym, A., Lancaster University, 13 July 2017: https://www.youtube.com/watch?v=JXkwrcTbJ5g.
11 See http://www.literarytranslators.org/awards/mentorships.
12 The European Graduate Placement Scheme: http://www.e-gps.org/ (accessed 18 July 2019).
13 See http://grupsderecerca.uab.cat/pacte/en.
14 https://ec.europa.eu/info/resources-partners/european-masters-translation-emt_en.
15 The authors offer a much more detailed analysis comparing the American Translators Association (ATA) and EMT competences, available online at http://www.ttt.org/trans-int/competence.htm (accessed 18 July 2019).
16 See, for example, https://www.naati.com.au/ or the American Translators Association at http://www.atanet.org/certification/aboutcert_overview.php#1.
17 Hague and Melby presented a paper at the FIT 2017 conference in Brisbane entitled ‘A new opportunity for which translators are best prepared: Language services advisement’.
18 Tuning Project: http://www.unideusto.org/tuningeu/.
19 TREC at http://pagines.uab.cat/trec/.
20 See Lacruz and Jääskeläinen (2018).
21 See http://www.ugr.es/~avanti/.
22 See http://grupsderecerca.uab.cat/pacte/en/content/publications (accessed 18 July 2019).
23 For an overview of complexity theory, see Marais (2014).
24 See http://www.ressources.univ-rennes2.fr/service-relations-internationales/optimale/attachments/article/50/130409%20Translator%20Trainer%20Competences.pdf (accessed 18 July 2019).
25 The Directorate-General for Translation (DGT) is the largest translation service in the world, employing nearly 2,000 linguists and producing some 1.5 million pages a year.


26 EMT: https://ec.europa.eu/info/education-be-deleted/european-masters-translation-emt_en programmes/emt/key_documents/emt_competences_translators_en.pdf.
27 The expert group included Daniel Gouadec, Federico Federici, Nike K. Pokorn, Yves Gambier, Kaisa Koskinen, Outi Paloposki, Dorothy Kelly, Michaela Wolf, Alison Beeby, Dorothy Kenny and members of the EMT board.
28 See https://ec.europa.eu/info/departments/translation/visiting-translator-scheme-vts_en (accessed 18 July 2019).
29 https://www.euatc.org/index.php/universities-internships/item/292-agora (accessed 18 July 2019).
30 https://www.euatc.org/universities-internships (accessed 18 July 2019).
31 http://www.eulita.eu/.
32 http://www.eulita.eu/qualetra-0.
33 EULITA, the European Legal Interpreters and Translators Association, and Lessius University College Antwerp were awarded EU funding under the EU Criminal Justice Programme for the TRAFUT (Training for the Future) project (JUST/JPEN/AG/1549) to assist in and contribute to the implementation of EU Directive 2010/64/EU of 20 October 2010 on the right to interpretation and translation in criminal proceedings.
34 http://termcoord.eu/.
35 http://ec.europa.eu/dgs/scic/international-cooperation/interpreting-for-africa/index_en.htm.
36 https://languagecareers.un.org/dgacm/Langs.nsf/page.xsp?key=Outreach.
37 https://www.iamladp.org/content/universities-contact-group.
38 https://www.iamladp.org/content/jiamcatt.
39 https://languagecareers.un.org/dgacm/Langs.nsf/files/report_of_the_third_mou_conference_31_may_2013_rev.compressed/$FILE/report_of_the_third_mou_conference_31_may_2013_rev.compressed.pdf.
40 https://es.taus.net/academy.
41 https://www.taus.net/events/user-calls/higher-education-courses-prepare-translators-sufficiently-for-life-in-industry; for a report on this debate, see https://www.researchgate.net/publication/319276135_TAUS-Enabling_better_translation.
42 http://www.proz.com/about/overview/education.
43 https://www.ilpweb.com/.
44 https://www.tragoraformacion.com/hangoutstragora/.
45 https://www.straitstimes.com/opinion/prof-no-one-is-reading-you (accessed 18 July 2019).
46 http://ec.europa.eu/translation/maltese/guidelines/documents/dgt_translation_quality_guidelines_en.pdf.


47 http://ec.europa.eu/translation/spanish/guidelines/documents/revision_manual_es.pdf.
48 Currently senior reviser, English translation service at the UN in New York; adviser on testing and LCE coordinator, documentation division; IAMLADP deputy secretary, Department for General Assembly and Conference Management.
49 Anne Lafeber’s PhD thesis (2012): http://www.intercultural.urv.cat/research/lafeber/. My thanks to Anne Lafeber for her input.
50 https://www.naati.com.au/projects/improvements-to-naati-testing-int/.
51 https://www.naati.com.au/media/1062/intfinalreport.pdf.
52 Hayley King’s PhD thesis (2016): http://researchbank.rmit.edu.au/view/rmit:162101.
53 Researchers Anne Neveu (Kent State University) and María del Mar Haro-Soler (Universidad de Granada).
54 NACT: http://grupsderecerca.uab.cat/pacte/en/content/ongoing-projects.
55 https://www.coe.int/en/web/common-european-framework-reference-languages/.

References

Angelone, E. (2010), ‘Uncertainty, Uncertainty Management and Metacognitive Problem Solving in the Translation Task’, in G. M. Shreve and E. Angelone (eds), Translation and Cognition, 17–40, Amsterdam: John Benjamins.
Arango-Keeth, F. and G. Koby (2003), ‘Assessing Assessment. Translation Training Evaluation and the Needs of Industry Quality Assessment’, in B. Baer and G. Koby (eds), Beyond the Ivory Tower: Rethinking Translation Pedagogy, 117–34, Amsterdam: John Benjamins.
Astley, H. and O. Torres Hostench (2017), ‘The European Graduate Placement Scheme: An Integrated Approach to Preparing Master’s in Translation Graduates for Employment’, The Interpreter and Translator Trainer, 11 (2–3): 204–22.
Bassnett, S. (1980), Translation Studies, London: Methuen.
Bernardini, S. (2004), ‘The Theory behind Practice: Translator Training or Translation Education?’, in K. Malmkjaer (ed.), Translation in Undergraduate Degree Programmes, 17–30, Amsterdam: John Benjamins.
Biel, Ł. (2011), ‘Professional Realism in the Legal Translation Classroom: Translation Competence and Translator Competence’, Meta, 56 (1): 162–78.
Bolaños-Medina, A. (2016), ‘Translation Psychology within the Framework of Translator Studies: New Research Perspectives’, in C. Martín de León and V. González-Ruíz (eds), From the Lab to the Classroom and Back Again. Perspectives on Translation and Interpreting Training, 59–100, Frankfurt: Peter Lang.
Brunette, L. (2000), ‘Towards a Terminology for Translation Quality Assessment’, The Translator, 6 (2): 169–82.


Calvo Encinas, E. (2010), ‘Análisis curricular de los Estudios de Traducción e Interpretación en España: Perspectiva del estudiantado’, PhD diss., Granada: Universidad de Granada. Available online: http://hdl.handle.net/10481/3488 (accessed 18 July 2019).
Calvo Encinas, E. (2011), ‘Translation and/or Translator Skills as Organising Principles for Curriculum Development Practice’, Journal of Specialised Translation, 16: 5–25. Available online: http://www.jostrans.org/issue16/art_calvo.php (accessed 18 July 2019).
Carl, M., B. Dragsted and A. L. Jakobsen (2011), ‘A Taxonomy of Human Translation Styles’, Translation Journal, 16 (2). Available online: http://translationjournal.net/journal/56taxonomy.htm (accessed 18 July 2019).
Chouc, F. and E. Calvo Encinas (2011), ‘Embedding Employability in the Curriculum and Building Bridges between Academia and the Work-Place: A Critical Analysis of Two Approaches’, La linterna del traductor, 4. Available online: http://www.lalinternadeltraductor.org/n4/employability-curriculum.html (accessed 18 July 2019).
Common Sense Advisory (2017), The Language Services Market: 2017, by D. A. DePalma, R. G. Stewart, A. Lommel and H. Pielmeier. Available online: http://www.commonsenseadvisory.com/AbstractView/tabid/74/ArticleID/39815/Title/TheLanguageServicesMarket2017/Default.aspx (accessed 18 July 2019).
Cravo, A. and J. Neves (2007), ‘Action Research in Translation Studies’, Journal of Specialised Translation, 7: 92–107. Available online: http://www.jostrans.org/issue07/art_cravo.pdf (accessed 18 July 2019).
Ehrensberger-Dow, M. (2014), ‘Challenges of Translation Process Research at the Workplace’, MonTI, 7: 355–83.
Ehrensberger-Dow, M. and G. Massey (2014), ‘Cognitive Ergonomic Issues in Professional Translation’, in J. W. Schwieter and A. Ferreira (eds), The Development of Translation Competence: Theories and Methodologies from Psycholinguistics and Cognitive Science, 58–86, Newcastle: Cambridge Scholars Publishing.
Ehrensberger-Dow, M. and G. Massey (2017), ‘Socio-technical Issues in Professional Translation Practice’, Translation Spaces, 6 (1): 104–21.
EMT Expert Group (2009), ‘Competences for Professional Translators, Experts in Multilingual and Multimedia Communication’. Available online: http://ec.europa.eu/dgs/translation/programmes/emt/key_documents/emt_competences_translators_en.pdf (accessed 18 July 2019).
Gambier, Y. (2016), ‘Rapid and Radical Changes in Translation and Translation Studies’, International Journal of Communication, 10: 887–906. Available online: http://ijoc.org/index.php/ijoc/article/view/3824 (accessed 18 July 2019).
González-Davies, M. (2004), Multiple Voices in the Translation Classroom, Amsterdam: John Benjamins.


González-Davies, M. and V. Enríquez-Raído (2016), ‘Situated Learning in Translator and Interpreter Training: Bridging Research and Good Practice’, The Interpreter and Translator Trainer, 10 (1): 1–11.
Göpferich, S. (2008), Translationsprozessforschung: Stand-Methoden-Perspektiven, Tübingen: Gunther Narr.
Göpferich, S. (2009), ‘Towards a Model of Translational Competence and its Acquisition: The Longitudinal Study TransComp’, in S. Göpferich, A. L. Jakobsen and I. M. Mees (eds), Behind the Mind. Methods, Models and Results in Translation Process Research, 11–37, Copenhagen: Samfundslitteratur.
Gregorio Cano, A. (2012), ‘Becoming a Translator: The Development of Cultural and Intercultural Competence in Spain’, Cultus, 5: 154–71.
Hague, D., A. Melby and W. Zheng (2011), ‘Surveying Translation Quality Assessment’, The Interpreter and Translator Trainer, 5 (2): 243–67.
Haro-Soler, M. M. (2018), ‘Self-Confidence and its Role in Translator Training: The Students’ Perspective’, in I. Lacruz and R. Jääskeläinen (eds), Innovation and Expansion in Translation Process Research, Volume XVIII of the American Translators Association Scholarly Monograph Series, 131–60, Amsterdam: John Benjamins.
Huertas Barros, E. (2011), ‘Collaborative Learning in the Translation Classroom: Preliminary Survey Results’, Journal of Specialised Translation, 16: 42–60.
Huertas Barros, E. and J. Vine (2016), ‘Translator Trainers’ Perceptions of Assessment: An Empirical Study’, in M. Thelen, G. W. van Egdom, D. Verbeeck, L. Bogucki and B. Lewandowska-Tomaszczyk (eds), Translation and Meaning, New Series, Vol. 41, 29–39, Frankfurt am Main: Peter Lang.
Hurtado Albir, A. (2007), ‘Competence-based Curriculum Design for Training Translators’, The Interpreter and Translator Trainer, 1 (2): 163–95.
Hurtado Albir, A., ed. (2017), Researching Translation Competence by PACTE Group, Amsterdam: John Benjamins.
Jääskeläinen, R. (2002), ‘Think-aloud Protocol Studies into Translation. An Annotated Bibliography’, Target, 14 (1): 107–36.
Jääskeläinen, R. (2010), ‘Are all Professionals Experts? Definitions of Expertise and Reinterpretation of Research Evidence in Process Studies’, in G. Shreve and E. Angelone (eds), Translation and Cognition: Recent Developments, 213–27, Amsterdam: John Benjamins.
Jääskeläinen, R. and I. Lacruz (2018), ‘Translation – Cognition – Affect – and Beyond: Reflections on an Expanding Field of Research’, in I. Lacruz and R. Jääskeläinen (eds), Innovation and Expansion in Translation Process Research, Volume XVIII of the American Translators Association Scholarly Monograph Series, 1–16, Amsterdam: John Benjamins.
Kearns, J. (2006), ‘Curriculum Renewal in Translator Training: Vocational Challenges in Academic Environments with Reference to Needs and Situation Analysis and Skills Transferability from the Contemporary Experience of Polish Translator Training Culture’, PhD diss., Dublin City University, Dublin.


Kearns, J., ed. (2008), Translator and Interpreter Training: Issues, Methods and Debates, London: Continuum.
Kelly, D. (2005), A Handbook for Translator Trainers, Manchester: St. Jerome.
Kelly, D. (2007), ‘Translator Competence Contextualized. Translator Training in the Framework of Higher Education Reform: In Search of Alignment in Curricular Design’, in D. Kenny and K. Ryou (eds), Across Boundaries. International Perspectives on Translation Studies, 128–42, Newcastle: Cambridge Scholars Publishing.
Kelly, D. (2008), ‘Training the Trainers: Towards a Description of Translator Trainer Competence and Training Needs Analysis’, TTR: traduction, terminologie, rédaction, 21 (1): 99–125.
Kelly, D. (2017), ‘Translator Education in Higher Education Today: The EHEA and Other Major Trends. Convergence, Divergence, Paradoxes and Tensions’, in S. Hagemann, J. Neu and S. Walter (eds), Translation/Interpreting Teaching and the Bologna Process: Pathways between Unity and Diversity, 29–50, Berlin: Frank & Timme.
King, H. (2016), ‘Translator Education in Context: Learning Methodologies, Collaboration, Employability, and Systems of Assessment’, PhD diss., Melbourne: RMIT University. Available online: http://researchbank.rmit.edu.au/view/rmit:162101 (accessed 18 July 2019).
Kiraly, D. (1995), Pathways to Translation. Pedagogy and Process, Kent, OH: Kent State University Press.
Kiraly, D. (2000), A Social Constructivist Approach to Translator Education. Empowerment from Theory to Practice, Manchester: St. Jerome.
Kiraly, D. (2005), ‘Project-based Learning: A Case for Situated Translation’, Meta, 50 (4): 1098–111.
Kiraly, D. (2014), ‘From Assumptions about Knowing and Learning to Praxis in Translator Education’, inTRAlinea (Special Issue: Challenges in Translation Pedagogy): 1–11. Available online: http://www.intralinea.org/specials/article/from_assumptions_about_knowing_and_learning_to_praxis (accessed 18 July 2019).
Kiraly, D. (2015), ‘Occasioning Translator Competence: Moving Beyond Social Constructivism Toward a Postmodern Alternative to Instructionism’, Translation and Interpreting Studies, 10 (1): 8–32.
Kiraly, D. and S. Hofmann (2016), ‘Towards a Postpositivist Curriculum Development Model for Translator Education’, in D. Kiraly, Towards Authentic Experiential Learning in Translator Education, 67–87, Göttingen: V&R unipress.
Kiraly, D. and M. Piotrowska (2014), ‘Towards an Emergent Curriculum Development Model for the European Graduate Placement Scheme’, paper presented at the International Conference The Future of Education, 4th Edition, Florence, 12–13 June. Available online: http://conference.pixel-online.net/FOE/files/foe/ed0004/FP/0366SET281-FP-FOE4.pdf (accessed 18 July 2019).

204

The Bloomsbury Companion to Language Industry Studies

Koby, G. S. and B. J. Baer (2005), ‘From Professional Certification to the Translator Training Classroom: Adapting the ATA Error Marking Scale’, Translation Watch Quarterly, 1 (1): 33–45. Koby, G. S. and G. G. Champe (2013), ‘Welcome to the Real World: Professional Level Translator Certification’, Translation and Interpreting 5 (1): 156–73. Koby, G. S. and A. K. Melby (2013), ‘Certification and Job Task Analysis (JTA): Establishing Validity of Translator Certification Examinations’, Translation and Interpreting, 5 (1): 174–210. Lacruz, I. (2017), ‘Cognitive Effort in Translation, Editing and Post-editing’, in J. Schwieter and A. Ferreira (eds), Handbook of Translation and Cognition, 386–401, Blackwell Handbooks in Linguistics, Malden, MA: John Wiley and Sons. Lacruz, I. (2018), ‘An Experimental Investigation of Stages of Processing in Post-editing’, in I. Lacruz and R. Jääskeläinen (eds), Innovation and Expansion in Translation Process Research, Volume XVIII of the American Translators Association Scholarly Monograph series, 217–40, Amsterdam: John Benjamins. Lacruz, I. and R. Jääskeläinen, eds (2018), Innovation and Expansion in Translation Process Research, Volume XVIII of the American Translators Association Scholarly Monograph series, Amsterdam: John Benjamins. Lacruz, I., and G. M. Shreve (2014), ‘Pauses and Cognitive Effort in Post-editing’, in S. O’Brien, L. Balling, M. Carl, M. Simard and L. Specia (eds), Post-editing: Processes, Technology and Applications, 246–72, Newcastle: Cambridge Scholars Publishing. Lafeber, A. (2012), ‘Translation: The Skill-set Required. Preliminary Findings of a Survey of Translators and Revisers Working at Inter-governmental Organizations’, Meta, 57 (1): 108–13. Lafeber, A. (2013), The Search for (the Right) Translators: Recruitment Testing at International Organizations, Saarbrucken: Lambert Academic Publishing. Läubli, S., M. Fishel, G. Massey, M. Ehrensberger-Dow and M. 
Volk (2013), ‘Assessing Post-Editing Efficiency in a Realistic Translation Environment’, in S. O’Brien, M. Simard and L. Specia (eds), Proceedings of MT Summit XIV Workshop on Postediting Technology and Practice, Nice, 2 September 2013, 83–91, Allschwil: European Association for Machine Translation. Marais, K. (2014), Translation Theory and Development Studies: A Complexity Theory Approach, Routledge Advances in Translation Studies, New York: Routledge. Massey, G. and M. Ehrensberger-Dow (2010), ‘Investigating Demands on Language Professionals’, Bulletin suisse de linguistique appliquée (Special issue), 2010 (1): 127–41. Massey, G. and M. Ehrensberger-Dow (2011), ‘Technical and Instrumental Competence in the Translator’s Workplace: Using Process Research to Identify Educational and Ergonomic Needs’, ILCEA Revue, 14. Available online: http://ilcea.revues.org/1060 (accessed 18 July 2019). Meidert U., S. Neumann, M. Ehrensberger-Dow and H. Becker (2016), ‘Physical Ergonomics at Translators’ Workplaces: Findings from Ergonomic Workplace Assessments and Interviews’, ILCEA 27. Available online: https://ilcea.revues. org/3996 (accessed 18 July 2019).

Training and Pedagogical Implications

205

Mossop, B. (2001), Revising and Editing for Translators, Manchester: St. Jerome. Mossop, B. ([2000] 2003), ‘What Should Be Taught at Translation School?’, in A. Pym, C. Fallada, J. R. Biau and J. Orenstein (eds), Innovation and E-Learning in Translator Training, 20–2, Tarragona: Intercultural Studies Group, Universitat Rovira i Virgili. Available online: http://www.intercultural.urv.cat/media/upload/domain_317/ arxius/Innovation/innovation_book.pdf (accessed 18 July 2019). Neubert, A. (2000), ‘Competence in Language, in Languages, and in Translation’, in C. Schäffner and B. Adab (eds), Developing Translation Competence, 3–18, Amsterdam: John Benjamins. Newmark, P. (1981), Approaches to Translation, Oxford: Pergamon. Newmark, P. (1988), A Textbook of Translation, New York: Prentice Hall. Nida, E. A. (1964), Toward a Science of Translating. With Special Reference to Principles and Procedures Involved in Bible Translating, Leiden: Brill. Nord, C. (1991), Text Analysis in Translation. Theory, Methodology, and Didactic Application of a Model for Translation-Oriented Text Analysis, Amsterdam: Rodopi. Nord, C. ([1997] 2018), Translating as a Purposeful Activity. Functionalist Approaches Explained, Manchester: St. Jerome. Obenaus, G. (1995), ‘The Legal Translator as Information Broker’, in M. Morris (ed.), Translation and the Law, 247–59, Amsterdam: John Benjamins. O’Brien S. (2012), ‘Towards a Dynamic Quality Evaluation Model for Translation’, Jostrans, 17: 55–77. Available online: http://jostrans.org/issue17/art_obrien.php (accessed 18 July 2019). Orlando, M. (2012), ‘Training of Professional Translators in Australia: ProcessOriented and Product-Oriented Evaluation Approaches’, in S. Hubscher-Davidson and M. Borodo (eds), Global Trends in Translator and Interpreter Training, 197–216, London: Continuum. Orozco Jutorán, M. (2001), ‘Métodos de investigación en traducción escrita: ¿qué nos ofrece el método científico?’, Sendebar, 12: 95–115. Parra Galiano, S. 
(2007), ‘Propuesta metodológica para la revisión de traducciones: principios generales y parámetros’, TRANS. Revista de Traductología, 11: 197–214. Piotrowska, M. and S. Tyupa (2014), ‘Translation Pedagogy – a New Sub-Discipline of Translation Studies’ inTRAlinea Special Issue: Challenges in Translation Pedagogy. Available online: http://www.intralinea.org/specials/article/2112 (accessed 18 July 2019). Pym, A. (2003), ‘Redefining Translation Competence in an Electronic Age: In Defence of a Minimalist Approach’. Meta, 48 (4): 481–97. Pym, A. (2009), ‘Translator Training’. Available online: http://usuaris.tinet.cat/apym/ on-line/training/2009_translator_training.pdf (accessed 18 July 2019). Reiss, K. ([1971] 2004), ‘Type, Kind and Individuality of Text: Decision Making in Translation’, in L. Venuti (ed.), 168–79, The Translation Studies Reader, 2nd edn, New York: Routledge. Reiss, K. and H. J. Vermeer (1984/2013), Towards a General Theory of Translational Action: Skopos Theory Explained, trans C. Nord, Manchester: St. Jerome.

206

The Bloomsbury Companion to Language Industry Studies

Risku, H. (2010), ‘A Cognitive Scientific View on Technical Communication and Translation. Do Embodiment and Situatedness Really Make a Difference?’, Target, 22 (1): 94–111. Rojo López, A. R. and M. Ramos Caro (2016), ‘Can Emotion Stir Translation Skill?’, in R. Muñoz Martín (ed.), Reembedding Translation Process Research, 107–30, Amsterdam: John Benjamins. Saldanha, G. and S. O’Brien (2013), Research Methodologies in Translation Studies, Manchester: St. Jerome. Shreve, G. M., E. Angelone and I. Lacruz (2018), ‘Are Expertise and Translation Competence the Same? Psychological Reality and the Theoretical Status of Competence’, in I. Lacruz and R. Jääskeläinen (eds), Innovation and Expansion in Translation Process Research, Volume XVIII of the American Translators Association Scholarly Monograph series, 37–54, Amsterdam: John Benjamins. Sun, S. (2011), ‘Think-Aloud-Based Translation Process Research: Some Methodological Considerations’, Meta, 56 (4): 928–51. TAUS (2018), Keynotes Summer 2018. A Review of the TAUS Industry Leaders Forum, Amsterdam 2018. Available online: https://www.taus.net/think-tank/reports/eventreports/keynotes-summer-2018 (accessed 18 July 2019). The UK Translator Survey (2017), European Commission Representation in the UK, the Institute of Translation and Interpreting and the Chartered Institute of Linguists. Available online: http://www.iti.org.uk/news-media-industry-jobs/news/859-uktranslator-survey-2017 (accessed 18 July 2019). The World Bank Group’s Translation Unit (GSDTR) (2004), Translation Business Practices Report. Available online: http://siteresources.worldbank.org/ TRANSLATIONSERVICESEXT/Vendor/20247728/Report_BusinessPractices_2004. pdf (accessed 18 July 2019). Torres Simón, E. and A. Pym (2017), ‘European Masters in Translation. A Comparative Study’. Available online: http://usuaris.tinet.cat/apym/on-line/training/2016_EMT_ masters.pdf (accessed 18 July 2019). 
UNESCO (1998), World Declaration on Higher Education for the Twenty-First Century: Vision and Action and Framework for Priority Action for Change and Development in Higher Education. Available online: http://www.unesco.org/education/educprog/ wche/declaration_eng.htm (accessed 18 July 2019). Vermeer, H. (1978), ‘Ein Rahmen für eine allgemeine Translationstheorie’, Lebende Sprachen, 23 (3): 99–102. Vigier Moreno, F. J. (2011), ‘El nombramiento de Traductores-Intérpretes Jurados de inglés mediante acreditación académica’, The Interpreter and Translator Trainer, 5 (1): 241–2. Way, C. (2008), ‘Systematic Assessment of Translator Competence: In Search of Achilles’ Heel’, in J. Kearns (ed.), Translator and Interpreter Training: Issues, Methods and Debates, 88–103, London: Continuum.

Training and Pedagogical Implications

207

Way, C. (2009), ‘Bringing Professional Practices into Translation Classrooms’, in I. Kemble (ed.), The Changing Face of Translation, 131–42, Portsmouth: University of Portsmouth. Way, C. (2014a), ‘Structuring a Legal Translation Course: A Framework for Decisionmaking in Legal Translator Training’, in L. Cheng, K. Kui Sen and A. Wagner (eds), Ashgate Handbook of Legal Translation, 135–54, Farnham: Ashgate. Way C. (2014b), ‘Translator Competence and Beyond: New Challenges for Translator Training’, presented at the Second International Conference on Research into the Didactics of Translation DidTrad 2014. Available online: https://www.researchgate. net/publication/319261122 (accessed 18 July 2019). Way, C. (2016a), ‘The Challenges and Opportunities of Legal Translation and Translator Training in the 21st Century’, International Journal of Communication, 10: 1009–29. Available online: http://ijoc.org/index.php/ijoc/article/view/3580 (accessed 18 July 2019). Way, C. (2016b), ‘Burning Boats or Building Bridges? Integrating Action Research in Translator Training’, Plenary address at the DidTrad 2016 Conference (Universitat Autònoma de Barcelona), 6–7 July 2016. Available online: https://www.researchgate. net/publication/319442847_Burning_boats_or_building_bridges_Integrating_ action_research_in_translator_training (accessed 18 July 2019). Way, C (2016c), ‘Intra-University Projects as a Solution to the Simulated/Authentic Dilemma’, in D. Kiraly (eds), Authentic Experiential Learning in Translator Education, 147–60, Göttingen: V&R unipress. Way, C. (2017), ‘Teaching and Assessing Intercultural Competence: From Common Ground to Divergence’, in H. Stengers, A. Sepp and P. Humblé (eds), Transcultural Competence in Translation Pedagogy. Representation – Transformation. Translating across Cultures and Societies, Berlin, Vienna, Hamburg, London: LIT Verlag. World Economic Forum (2017), To Train Tomorrow’s Leaders, Universities Need to Teach Universal Skillsets. 
Available online: https://www.weforum.org/ agenda/2017/10/to-train-tomorrow-s-leaders-universities-need-again-to-teachuniversal-skillsets/ (accessed 18 July 2019).

208

10

Audiovisual translation

Jorge Díaz-Cintas

1. Introduction

Audiovisual translation (AVT) is an academic discipline and professional activity that involves the localization of audiovisual media content by means of different translation practices. Translating this type of material requires awareness of the coexistence of the acoustic and the visual communication channels through which verbal and non-verbal information is concurrently conveyed. In recent decades, the complex semiotic texture of audiovisual productions has been of great interest to scholars in translation studies, and the profession has greatly expanded and diversified ever since the advent of digital technology in the last quarter of the twentieth century. For some, the audiovisual format has become the quintessential means of communication in the new millennium. Also known as screen translation, film translation, multimodal translation and multimedia translation, among other nomenclatures, the term ‘audiovisual translation’ has come to be the one with the widest currency in academic exchanges thanks to its transparency. The texts involved in this type of specialized translation combine two complementary channels (audio and visual) and a series of meaning-making codes (language, gestures, paralinguistics, cinematic syntax, etc.), whose signs interact and build a semantic composite of a complex nature (Zabalbeascoa-Terrán 2001; Martínez-Sierra 2008). Used as an umbrella term, AVT subsumes a raft of translation practices that differ from each other in the nature of their linguistic output and the translational techniques on which they rely. Their inner differences notwithstanding, the common axis that underlies all these modes is the semiotic nature of the source and target texts involved in the AVT process. In addition to having to deal with the communicative complexities derived from the simultaneous delivery of aural and visual input, audiovisual translators have to learn how to cope with the technical constraints
that characterize this translation activity and be familiar with the AVT-specific software that allows them to perform these technical tasks.

1.1.  Audiovisual translation practices

In the main, there are two overarching approaches when dealing with the linguistic transfer in AVT: revoicing and timed text. Whereas the former consists in substituting the original dialogue soundtrack with a newly recorded or live soundtrack in the target language (TL) (Chaume-Varela 2006: 6), timed text works in a chiasmic manner by rendering the original dialogue exchanges as written text, usually superimposed onto the images, and placed at the bottom of the screen. Both revoicing and timed text can be used in their more traditional way, that is, to bridge linguistic barriers, or to facilitate access to audiovisual productions for audiences with sensory impairments such as the deaf, the hard of hearing, the blind and the partially sighted. Revoicing is a hypernym that encompasses different AVT practices, in which the oral output of the original production also remains oral in the target text. Based on the various categorizations of revoicing practices that have been put forward by authors such as Luyken et al. (1991), Karamitroglou (2000) and, more recently, Chaume-Varela (2012), the five most prominent ones are discussed in these pages, namely voiceover, narration, dubbing, simultaneous interpreting and audio description. Voiceover (VO) consists in orally presenting the translation of the original speech over the still audible original voice. According to Díaz-Cintas and Orero (2010), the standard approach from a technical perspective is to allow the speaker to be heard in the foreign language for a few seconds, after which the volume of the soundtrack is dimmed, so that the original utterances can still be heard in the background, and the translation in the TL is then narrated. The translation typically concludes while the speaker continues talking for a few more seconds, so that the audience can clearly hear the foreign language once more.
Closely associated with the translation of factual genres, such as documentaries and interviews, it is hailed by some authors (Franco, Matamala and Orero 2010: 26) as a transfer mode that faithfully respects the message of the original text – an assertion that is, of course, highly debatable. Narration is virtually identical to voiceover as regards the actual translation of the source language. The main difference resides in the fact that in the case of narration the original utterances are wiped out and replaced by a new soundtrack in which only the voice of the TL narrator can be heard. The ensuing translation is often roughly synched with the visuals.
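The voiceover cueing convention just described (a few seconds of audible original speech, a dimmed soundtrack under the narrated translation, and an early finish so that the original voice resurfaces) can be made explicit with a simple timing calculation. The function below is a hypothetical sketch: the two-second offsets are assumed values for illustration, not industry standards.

```python
def voiceover_cues(speech_start, speech_end, lead_in=2.0, lead_out=2.0):
    """Illustrative cue points for a voiceover translation (times in seconds).

    The original speaker stays fully audible for `lead_in` seconds, the
    narrated translation then runs over the dimmed original soundtrack,
    and it ends `lead_out` seconds early so the original is heard again.
    The offsets are assumptions, not fixed professional values.
    """
    if speech_end - speech_start <= lead_in + lead_out:
        raise ValueError("utterance too short for voiceover treatment")
    return {
        "dim_original_at": speech_start + lead_in,
        "translation_start": speech_start + lead_in,
        "translation_end": speech_end - lead_out,
        "restore_volume_at": speech_end - lead_out,
    }

# A 20-second documentary soundbite starting at the one-minute mark
cues = voiceover_cues(60.0, 80.0)
print(cues["translation_start"], cues["translation_end"])  # → 62.0 78.0
```

In practice, studios adjust such offsets by genre and client; the point of the sketch is simply to make the timing relationships of the convention explicit.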

Dubbing, also known as lip-sync and famously referred to as traduction totale by Cary (1960) because of its many linguistic challenges, implies the substitution of the dialogue track of an audiovisual production with another track containing the new exchanges in the TL (Chaume-Varela 2012), and is widely practised in countries like Brazil, China, France, Germany, Japan, Italy, Thailand, Turkey and Spain, among many others. A fictional world within the wider fictional world that is cinema, dubbing’s ultimate aim is to make viewers believe that the characters on screen speak the same language as they do. To achieve this goal, three types of synchronization need to be respected: lip synchrony (lip-sync), isochrony and kinetic synchrony. Lip-sync ensures that the TL sounds fit into the mouths of the on-screen characters, particularly when they are shown in close-up. Isochrony ensures that the duration of the source and the target utterances coincide in such a way that the target lines can be comfortably fitted between the openings and closings of the character’s mouth. The third type, kinetic synchrony, seeks to guarantee that the translated dialogue does not contradict the performance of the actor and that the voices chosen for the new recording are not at odds with the personal attributes and the physical appearance of the on-screen characters. Interpreting, whether simultaneous or consecutive, is a practice nowadays restricted to the translation of live speeches and interviews; it used to be reasonably common during screenings at film festivals, when the film prints arrived too late and there was not enough time to have them subtitled. Finally, audio description (AD), an access service for visually impaired audiences, can be defined as ‘a precise and succinct aural translation of the visual aspects of a live or filmed performance, exhibition or sporting event for the benefit of visually impaired and blind people.
The description is interwoven into the silent intervals between dialogue, sound effect or commentary’ (Hyks 2005: 6). This additional narration describes any visual or audio information that will help an individual with a visual impairment to follow the plot of the story, such as the body language and facial expressions of the characters, the surrounding landscape, the source of certain sounds, the actions taking place on screen and the costumes worn by the actors. European and nation-specific legislation aimed at encouraging the provision of assistive services in order to enhance access to audiovisual media for people with sensory disabilities has allowed for the quantitative expansion of AD, especially in public-service broadcasting, but also on DVDs and Blu-rays, in cinemas, theatres, museums and, more recently, on the internet. The second main approach to AVT consists in adding a written text to the original production, for which some players in the industry have started to
use the general term ‘timed text’. These flitting chunks of text correspond to condensed, synchronized translations or transcriptions of the original verbal input in the source language. As a superordinate concept, timed text can be either interlingual or intralingual, and it subsumes the following related practices: subtitling, surtitling, subtitling for the deaf and the hard of hearing and respeaking. Interlingual subtitling can be defined as a rendition in writing of the translation into a TL of the original dialogue exchanges uttered by the different speakers, as well as of all other verbal information that is transmitted visually (letters, banners, inserts) or aurally (lyrics, voices off). In a nutshell, subtitles do not usually contain more than two lines, each of which can accommodate a maximum of some 35 to 42 characters, and are displayed horizontally at the bottom of the screen (Díaz-Cintas 2010). They appear in synchrony with the dialogue and the image and remain on screen for a minimum of one second (or 20 frames) and a maximum of six (or seven) seconds. The assumed reading speed of the target audience dictates the rate of presentation of the text, with 12 to 17 characters per second being standard rates (4 to 5 in the case of languages like Chinese and Japanese). It is the preferred audiovisual translation mode in countries like Belgium, Croatia, Greece, the Netherlands, Portugal and the Scandinavian countries, among many others, and it has seen exponential growth in recent years inasmuch as DVDs and today’s video-on-demand services integrate multilingual subtitles to reach a wider audience throughout the world. Surtitling, also known as supertitling, is the translation or transcription of dialogue and lyrics in live opera, musical shows and theatre performances. The surtitles are projected onto a screen placed above the stage and/or displayed on a screen fixed in the seat in front of the audience member (Burton 2009: 59).
In a similar way to subtitles, their aim is to convey the overall meaning of what is being enunciated or sung, while complying with time and space limitations. On occasion, they may add some clarifications, for example characters’ names, so that the audience finds it easier to follow the diegesis. The other major access service in AVT, subtitling for the deaf and the hard of hearing (SDH), also known as captioning, is a practice that consists of presenting on screen a written text accounting for the dialogue, music, sounds and noises contained in the soundtrack for the benefit of audiences with hearing impairments. In SDH, subtitlers thus transfer dialogue along with information about who is saying what, how it is being said (emphasis, tone, accents and use of foreign languages), and any other relevant features that can be heard and are
important for the understanding of the storyline (instrumental music, sound effects, environmental noise, etc.). The use of colours and labels to identify speakers, the displacement of the subtitles, and the description of paralinguistic features like ‘sighs’ or ‘coughs’ are some of the characteristics that define this type of subtitling. Live subtitling is the production of subtitles for live programmes or events, which can be achieved by several means. This type of subtitling can be both intralingual and interlingual and be fully live, as in a sports programme, or semi-live, as in the news, where a script of the content is usually made available shortly before the broadcast. Traditionally, professionals used stenotype techniques and keyboards to transcribe or translate the original dialogue, but these days respeaking, or speech-based live subtitling, is gaining ground in the industry. This latter approach makes full use of automatic speech recognition (ASR), whereby a respeaker listens to the original utterance and respeaks it, including punctuation marks, to speech recognition software that then displays subtitles on the screen with the shortest possible delay (Romero-Fresco 2011: 1). Similarly to AD, both SDH and respeaking have spread widely in the last decades thanks to the enforcement of national and international legislation, such as the EU Audiovisual Media Services Directive (2010). In countries like the UK, where SDH has been on offer since the 1980s, the percentage of captioned programmes on traditional TV channels is very high, with the BBC subtitling 100 per cent of its productions. Recently, legislation has been passed in the form of the Digital Economy Act 2017, compelling video-on-demand (VOD) broadcasters to include subtitles, AD and signing in their programmes (Wilkinson-Jones 2017).
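The spatial and temporal parameters quoted earlier for interlingual subtitling (no more than two lines of some 35 to 42 characters each, one to six seconds on screen, and a reading speed of around 12 to 17 characters per second) lend themselves to the kind of automatic conformance check built into professional subtitling software. The sketch below is illustrative only: the default thresholds reproduce the figures cited in this chapter, but in practice they vary by client, language and platform, and the function itself is a hypothetical example rather than any vendor’s actual quality-control routine.

```python
def check_subtitle(lines, in_time, out_time, max_lines=2, max_chars=42,
                   min_duration=1.0, max_duration=6.0, max_cps=17.0):
    """Return a list of violations of common subtitling parameters.

    `lines` is the subtitle text as a list of strings; `in_time` and
    `out_time` are cue times in seconds. Defaults follow the figures
    quoted in this chapter and are not universal standards.
    """
    issues = []
    duration = out_time - in_time
    if len(lines) > max_lines:
        issues.append(f"too many lines: {len(lines)}")
    for number, line in enumerate(lines, start=1):
        if len(line) > max_chars:
            issues.append(f"line {number} exceeds {max_chars} characters")
    if duration < min_duration:
        issues.append(f"on screen for less than {min_duration} second(s)")
    if duration > max_duration:
        issues.append(f"on screen for more than {max_duration} seconds")
    # Reading speed in characters per second (cps) across all lines
    reading_speed = sum(len(line) for line in lines) / duration if duration > 0 else float("inf")
    if reading_speed > max_cps:
        issues.append(f"reading speed {reading_speed:.1f} cps exceeds {max_cps} cps")
    return issues

# A compliant two-liner displayed for four seconds
print(check_subtitle(["I never meant to stay this long,",
                      "but the city kept me."], 10.0, 14.0))  # → []
```

Reading speed is the parameter most often renegotiated in practice, as the lower rates cited above for languages such as Chinese and Japanese already suggest.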
The democratization of technology has acted as a fillip for the rise of amateur practices on the internet, giving birth to new translation activities primarily based on revoicing and subtitling. Of these, fansubbing, arguably the best-known manifestation of fan translation, is the subtitling of audiovisual productions, originally Japanese anime, done by fans for fans and nowadays normally distributed for free over the internet (Díaz-Cintas and Muñoz-Sánchez 2006). Fandubbing, on the other hand, implies the dubbing of a foreign-language film or television show into a TL, done by amateurs or enthusiasts rather than by professional actors (Baños 2019). The merits of the linguistic transfer and the technical dimension aside, both modes are in essence pretty similar to their professional counterparts. According to research conducted on behalf of the Media & Entertainment Services Alliance Europe, audiovisual media content localization across Europe,
the Middle East and Africa is expected to increase from $2 billion in 2017 to over $2.5 billion before 2020 (Tribbey 2017). The explosion in channels and video-on-demand platforms, driven partly by so-called over-the-top (OTT) players, who specialize in the delivery of content over the internet, has opened up more opportunities for programme makers to sell their titles in new markets. With this fast-growing global demand for content that needs to be translated – not only high-profile new releases but also back-catalogue TV series and films for new audiences in regions where they have not been commercialized previously – a shortage of professional audiovisual translators has become one of the industry’s biggest challenges. Given the lack of formal AVT training in many countries, the situation is likely to get worse in the short term in the case of certain language combinations.

2.  Research focal points

Having existed since the turn of the twentieth century as a professional activity, AVT remained practically unexplored by scholars until the early 1970s. However, the rapid technological changes experienced in the last decades and a growing interest in the communication potential unleashed by multimodal productions have raised the visibility and status of AVT, now heralded by many as a dynamic, vibrant and mature field of research. Coinciding with the celebration in 1995 of the centenary of the invention of cinema, the organization of a series of international events on the translation of media content could be symbolically considered the catalyst that triggered academia’s interest in these practices. Such pioneering actions sowed the seeds for the development of international bodies like the European Association for Studies in Screen Translation (ESIST) and the establishment of conference series that have since taken place regularly, such as Languages & the Media and Media for All. Usually held in partnership with stakeholders from the industry, these initiatives have raised the social and academic visibility of AVT and informed applied research in the field. The publication of a substantial number of monographs and collective books on the topic, as well as the completion of doctoral theses and the launch of numerous undergraduate and postgraduate training courses, have also contributed to the development of this area. A special issue of Babel, edited by Caillé (1960), was the first one by a leading journal in translation studies to focus on AVT, albeit with a strong emphasis on dubbing to the detriment of other modes. Many special issues have since been published by other first-class scholarly journals,
such as The Translator (9:2, 2003), Meta (49:1, 2004), Cadernos de Tradução (2:16, 2005), The Journal of Specialised Translation (6, 2006), InTRAlinea (2006), Linguistica Antverpiensia New Series (6, 2007), Perspectives (21:4, 2013), MonTI: Monographs in Translation and Interpreting (4, 2012), TRANS: Revista de Traductología (17, 2013), Target (28:2, 2016) and Altre Modernità (2016). In view of this burgeoning intellectual bustle, it can be safely argued that, despite the initial hurdles, AVT has progressively consolidated itself as an academic field worthy of serious scholarship. In the current theoretical context, where the traditional dichotomy between translation and interpreting is founded on the medial nature of the source and target texts, AVT seems to lie somewhere in-between, due to its multimodal nature. The situation is compounded by the fact that research has normally insisted on the perception of AVT as if it were a single, homogeneous and unifying activity, when in reality it is made up of a myriad of practices that can be very different from each other. Be that as it may, theorists and industry professionals who have reflected on AVT-related issues have managed to build a substantial body of AVT-specific literature that has helped demarcate the potential and the boundaries of the discipline. As recently discussed by Ramos Pinto and Gambier (2016), AVT specialists seem to have focused on five pivotal inquiry clusters, namely history-related foundations, descriptive studies of so-called AVT translation problems, translation process analysis, language policies and accessibility issues. Anxious to foreground the specificity of the field against other translation practices, many works have looked at AVT from a professional point of view, focusing on its mechanics and technical dimension (Pommier 1988; Luyken et al.
1991; Ivarsson and Carroll 1998) and on the semiotic and societal similarities/differences between dubbing and subtitling (Koolstra, Peeters and Spinhof 2002; Pettit 2004). In this search for differentiation from other linguistic transfer activities, some debates have concentrated on linguistic idiosyncrasies (Tomaszkiewicz 1993; Pavesi 2005), the impact that the pre-fabricated orality of the original dialogue has on the TL (Guillot 2008; Baños-Piñero and Chaume-Varela 2009), the nature of the translation strategies most frequently implemented (Gottlieb 1992; Martí-Ferriol 2013), the specificities of translating for children (O’Connell 2003) and the case of redubbing (Zanotti 2015; Di Giovanni 2017), as well as the challenges presented by the transfer of wordplay (Schröter 2005), humour (Zabalbeascoa 1996; Martínez-Sierra 2008; De Rosa et al. 2014), cultural references (Ramière 2007; Pedersen 2011; Ranzato 2016), linguistic variation (Ellender 2015), multilingualism (de Higes Andino 2014) or swearing (Han and Wang 2014).

If the search for specific features that could justify the autonomy of AVT as a branch distinct from other translation activities was one of the main propellers of the early investigations, it is widely acknowledged these days that the way forward has to be found in its interdisciplinarity and synergies with other branches of knowledge. For many years, scholars like Chaume-Varela (2004) and, more recently, Romero-Fresco (2013) have advocated closer interaction with film studies and filmmakers, and the works of De Marco (2012) have benefited from the theoretical apparatus borrowed from gender studies in order to shed light on how the language used in the translated dialogue lines affects or is affected by social constructs such as gender, class and race. Similarly, premises and conceptualizations from postcolonial studies have proved highly operative in disentangling the role played by multilingualism in diasporic films (Beseghi 2017). The heterogeneous nature of the audiovisual text rightly justifies the application of interdisciplinary methodologies and analytical approaches, and it is in this expansion of interests and methods that academia and industry can forge fruitful synergies. Establishing who informs whom in the process may not always be straightforward, as true collaboration should travel in both directions. In what follows, a deliberate attempt has been made to avoid references to research appertaining to the field of accessibility to the audiovisual media, as this topic is covered in another chapter of this volume. The main focus is thus on interlingual practices and only on a few occasions, when the contrast may prove illuminating, will mention be made of works and projects centred on access services.

3.  Informing research through the industry

Sharp boundaries between theoretical assumptions and professional practices have traditionally existed in most academic disciplines. The tension between abstract and hands-on approaches is a recurrent issue in the relationship between the academic world and the industry, and it is by no means unique to translation circles. Striking a happy balance between the two is of paramount importance to safeguard the well-being of the discipline and the profession. In the particular case of AVT, research has been relatively anchored in professional practice and many publications and projects have (in)directly focused on the profession by studying the product, the workflows and the agents of translation. In this respect, it can be argued that, for many years, the industry has been informing research

Audiovisual Translation

217

rather than the other way around. In fact, many of the works in the field have been written by practitioners and academics with vast experience in their trade, who wanted to share their knowledge with other colleagues and the academic world at large (Laks 1957; Hesse-Quack 1969; Pommier 1988; Luyken et al. 1991; Ivarsson and Carroll 1998; Karamitroglou 1998; Díaz-Cintas and Remael 2007). Of a predominantly applied nature, their insider’s knowledge has helped cement the foundations of the discipline and their works have been widely used to inform training in the field and to spur additional studies. A downside of this state of affairs is the fact that, without further research, some of these guidelines and parameters risk being perpetuated through teaching and fossilized in professional practice, thus ignoring the possibility that changes may have occurred in the way in which viewers consume audiovisual productions and that some old conventions may have become obsolete. Contradictions about the implementation of certain standards within and across languages can be puzzling and costly for companies aiming for a global reach. It is here that reception studies instigated and conducted by academics can help assess whether viewers are satisfied with the current state of affairs and whether their needs and gratifications have evolved with time and, if so, how best to reflect these variations in real practice. The role played by technology has been crucial not only with regard to the way AVT practices have changed and evolved but also with regard to the manner in which research has responded to these changes. The difficulty of getting hold of the actual physical material, together with the tedium of having to transcribe the dialogue and translations, and having to wind and rewind the video tape containing them, may help partly justify the reluctance to conduct research on AVT in the early decades. 
However, the advent of digital technology and the subsequent arrival of the DVD in the mid-1990s can be hailed as an inflection point for the industry and academia (Kayahara 2005). The new distribution format acted as a research accelerator as it facilitated access to multiple language versions of the same production on the same copy and allowed, for instance, the easy extraction (i.e. ripping) of the subtitles and their timecodes. Conducting comparative analyses across languages, or getting hold of a seemingly infinite number of audiovisual programmes with their translations, had suddenly become a rather simple task. It is not surprising, therefore, that the number of scholarly publications started to grow exponentially around this time. The industry had come up with a novel way of distributing material that had not only captivated the audience but also revolutionized the way in which audiovisual translations were produced and consumed. Understandably, research interest
focused on exploring and teasing out the main linguistic and technical characteristics of this ‘newly discovered’ field, that is, AVT with its many realizations, covering the wide range of topics already discussed in Section 1.1. This academic fascination with the study of the actual product as delivered by the industry, and its imbrication in the new hosting sociocultural environment, can also be traced in the various works that have centred on the impact that censorship and ideological manipulation have in the translation of audiovisual productions (Díaz-Cintas 2012; Mereu Keating 2016; Díaz-Cintas, Parini and Ranzato 2016). The fact that this audiovisual media boom coincided in time with the then emerging descriptive translation studies paradigm postulated by Toury (1995) explains why a large number of projects carried out in the field subscribed to this theoretical framework. In the main, technical innovation, such as the development of specific AVT software or the more recent cloud-based platforms such as Ooona1 and Plint,2 has been mostly instigated by the industry, away from scholarly centres. Collaboration on this front has traditionally been minimal and scholars have been left with little margin for manoeuvre beyond the odd study assessing the functionality of the tools from a didactic perspective (Roales-Ruiz 2014). Notwithstanding the sensitivities surrounding commercial secrecy, a more advantageous relationship should be explored among the interested parties, whereby researchers could be granted wider access to the in-the-cloud platforms so that user experience tests can be conducted among practitioners and translators-to-be in exchange for advice on potential improvements. Such collaboration would also facilitate a better understanding of the functioning of these cloud-based project management systems and online subtitling/dubbing editors, which in turn would inform the training of future translators in up-to-date technologies and workflows.
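To make the research affordance concrete: the timecoded cues that ripping yields can be parsed and profiled with a few lines of code. The sketch below is a deliberately simplified illustration, with invented cue text, of how SRT-style subtitle data can be read and each cue's presentation rate computed in characters per second:

```python
import re

# Invented SRT-style cue block, of the kind that DVD ripping made
# easy to extract together with its timecodes.
SRT_SAMPLE = """1
00:00:01,000 --> 00:00:04,000
Good evening, everyone.

2
00:00:04,500 --> 00:00:06,200
Please take a seat."""

TIME = re.compile(r"(\d+):(\d+):(\d+),(\d+)")

def to_seconds(stamp: str) -> float:
    """Convert an SRT timestamp (HH:MM:SS,mmm) to seconds."""
    h, m, s, ms = map(int, TIME.match(stamp).groups())
    return h * 3600 + m * 60 + s + ms / 1000

def parse_srt(text: str):
    """Yield (start, end, text) tuples, one per subtitle cue."""
    for block in text.strip().split("\n\n"):
        lines = block.splitlines()
        start, end = (to_seconds(t) for t in lines[1].split(" --> "))
        yield start, end, " ".join(lines[2:])

def cps(start: float, end: float, text: str) -> float:
    """Presentation rate in characters per second, spaces included."""
    return len(text) / (end - start)

for start, end, text in parse_srt(SRT_SAMPLE):
    print(f"{cps(start, end, text):.1f} cps: {text}")
```

Measures of this sort underpin comparative analyses across language versions, although production subtitle files involve far more formatting edge cases than this toy parser handles.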
One of the reasons for the lack of collaboration on the technical front is the fact that theorizing the impact of technology on activities like dubbing or subtitling can prove elusive and challenging, as few theoretical frameworks have been developed that could help scholars to conduct critical analyses. As discussed by O’Hagan (2016), one of the reasons behind this state of affairs is the fact that technological factors have never been duly considered a meaningful domain in mainstream translation theories, which have consequently failed to acknowledge the epistemological significance of technology in translational activity. To bolster current translation debates, particularly when dealing with multimodal products, more room has to be made to allow for interdisciplinary perspectives that will help scholars theorize the invasive role of technology. In
this respect, the works published by Ehrensberger-Dow and O’Brien (2015), Massey and Ehrensberger-Dow (2017) and Teixeira and O’Brien (2017) on cognitive and ergonomic aspects of computer workstations, the workplace and working environment, tools and resources, workflow and organization as well as health and related issues can be hailed as pioneering. A forerunner in the field of subtitling is the study by Beuchert (2017), in which she adopts a situated cognition approach to investigate subtitling processes, focusing not only on the subtitlers’ internal, cognitive translation processes but also on the external, contextual factors surrounding the subtitlers and their tasks, including the work environment and the role of technology. In its social role, scholarship also endeavours to improve and advance the teaching and learning of a particular discipline in order to prepare fully qualified professionals and guarantee its sustainability into the future. The strategic design of forward-looking curricula is imperative, as translators-to-be should be able to predict potential changes in the profession as well as be prepared and equipped to adapt to them. To narrow the proverbial chasm between the industry and academia, the latter would benefit from learning about the current and future needs of the former, which in turn can help scholars to conduct self-reflective research on innovative and transformational changes to the curriculum on offer at their own institutions. Yet, limited research has been carried out in this area (Díaz-Cintas 2008; Cerezo-Merchán 2012). A closely related area that has attracted the attention of scholars is the exploration of the potential benefits that using subtitled versions or other AVT modes can have on foreign-language education (Incalcaterra Mcloughlin, Biscio and Ní Mhainnín 2011; Talaván-Zanón 2013; Gambier, Caimi and Mariotti 2015).
Empirical in nature, the output of this research is mainly based on the results obtained from conducting experiments with participants learning a second language. Worthy of mention are the European-funded research projects LeVis3 and ClipFlair,4 both of which drew their inspiration from AVT activities and had, as part of their main objectives, the design and development of tools and educational material for foreign-language teaching and learning.

4.  Informing the industry through research

Traditionally, a perceived preference for theorization with no ready application to practice has fuelled suspicion in some quarters of the translation industry as to the worth that such academic output represents
for the profession. The situation may well be changing and, as claimed by Williamson (2016) in her exploration of the impact that academic research has on professional subtitling practitioners, the social relevance of research to practice has gained prominence of late, as academics are increasingly required by governments and funding bodies to demonstrate the impact of their scholarly activity outside of the academy. Research is indeed a rewarding enterprise but, for it to be embraced by industry partners, it has to have an empirical foundation. Indulging in intellectual pursuits that are wilfully disconnected from the practical concerns of the profession, or that do not take into consideration the needs of the final users, runs the risk of being perceived as a fruitless, otiose adventure of little interest to the industry. As in many other walks of life, a happy medium has to lie in a balanced coupling of theory and practice and, in this respect, the opportunities for cross-fertilization in translation are enormous and the prospects very encouraging. In what follows, some research projects with a direct impact on enhancing the language industry are discussed. The assessment of quality in translation has always been a thorny issue because of its encompassing nature, affecting as it does all phases of the process, and due to its inherent subjectivity. The rise of the descriptive translation studies paradigm, as a reaction to a history of rather dogmatic approaches in our field, meant that prescriptive approaches, which tend to be preferred by the industry, were out of the question in scholarly circles. In AVT, little research was carried out on the topic before the start of the new millennium, when the situation experienced a radical turn and publications on interlingual translation as well as accessibility started to emerge. Some of these embraced new theoretical paradigms of proven currency in other disciplines, like action research and actor-network theory.
The NER model proposed by Romero-Fresco (2011) to gauge quality and accuracy rates in intralingual live subtitling and respeaking has inspired scholars like Pedersen (2017) to develop a similar model, FAR, for the assessment of quality in the more slippery context of interlingual subtitling. Kuo’s (2014) study on the theoretical and practical aspects of subtitling quality opens up the scope and paints a collective portrait of the subtitling scene by also canvassing subtitlers’ views and providing a glimpse of the demographic make-up of the professionals involved in this rapidly evolving industry. Research has also been carried out on other equally applied topics of interest to the industry, such as the use of templates in multilingual projects (Nikolić 2015), quality standards in dubbing (Chaume-Varela 2007), the adaptation of existing quality models to AVT (Chiaro 2008) and the dynamics of subtitling production chains within an
actor-network theoretical framework (Abdallah 2011). The benefits of this line of research, which can be very time-consuming and beyond the reach of most small- and medium-sized enterprises, are obvious, as it allows all stakeholders to gain a fuller picture of the internal and external parameters that affect the quality of the translated production. Another area of great interest to the industry has been the exploration of the application of technology, in particular machine translation, to increase productivity and cope with high volumes of work and pressing deadlines. The relative ease with which quality subtitle parallel data can be obtained has been the catalyst for the introduction of statistical machine translation (SMT) technology in subtitling. Under the auspices of the European Commission, projects like SUMAT (SUbtitling by MAchine Translation, 2011–2014) have focused on building large corpora of aligned subtitles in order to train SMT engines in various language pairs. The ultimate objective was to automatically produce subtitles, followed by human post-editing, in order to increase the productivity of subtitle translation procedures and reduce costs and turnaround times while keeping a watchful eye on the quality of the translation results (Georgakopoulou and Bywood 2014). A similar project, conducted around the same time, was EU-BRIDGE,5 whose main goal was to test the potential of speech recognition for the automatic subtitling of videos. The works of Flanagan (2006) and Burchardt et al. (2016) on the quality of machine-translation output in AVT are also a testimony to the interest raised in this area. A vast amount of research to date on AVT has been largely based on argumentation and descriptivism, with the result that, in the main, the empirical evidence accumulated has not been directed towards evaluating and appraising the prescriptive conventions applied in the profession.
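The raw material for such SMT training is pairs of source- and target-language cues drawn from the same programme. The minimal sketch below illustrates the general idea of harvesting sentence pairs by timecode overlap; the cue data is invented and this is not a reconstruction of SUMAT's actual alignment pipeline:

```python
# Illustrative sketch: pair source- and target-language subtitle cues
# from the same programme by temporal overlap, yielding the kind of
# parallel data used to train SMT engines. All cues are invented.

def overlap(a, b):
    """Temporal overlap in seconds between two (start, end, text) cues."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def align(src_cues, tgt_cues, min_overlap=0.5):
    """Greedily pair each source cue with the target cue it overlaps most."""
    pairs = []
    for s in src_cues:
        best = max(tgt_cues, key=lambda t: overlap(s, t), default=None)
        if best and overlap(s, best) >= min_overlap:
            pairs.append((s[2], best[2]))
    return pairs

english = [(1.0, 4.0, "Good evening, everyone."),
           (4.5, 6.2, "Please take a seat.")]
spanish = [(1.1, 4.0, "Buenas noches a todos."),
           (4.6, 6.3, "Siéntense, por favor.")]

# Each resulting tuple is one sentence pair for the parallel corpus.
print(align(english, spanish))
```

Real subtitle alignment must additionally cope with merged or split cues, condensation in one language and drifting timecodes, which is precisely why curating such corpora is non-trivial.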
Perhaps surprisingly, given the experiential research tradition found in media studies, the views of the audience have also been conspicuously absent from many of the academic exchanges, which also tend to lack the practitioner’s perspective. Recently, however, a conscious move to go beyond descriptivism has brought a shift of focus from the analysis of the textual idiosyncrasies of the original to the exploration of the effects that the ensuing translation has on viewers. Researchers aiming to gain a deeper understanding of the audience’s behaviour and attitudes towards the consumption of translated audiovisual productions have started to appropriate theoretical frameworks from the social sciences in their own studies. An example of this type of work is that of Di Giovanni (2016), who, making use of the traditional questionnaire, reports on a series of studies carried out at major Italian film festivals with the aim of evaluating audience reception of
subtitled films and their awareness of what contributes to, or jeopardizes, quality in subtitling. The interest in this type of approach has been unremitting. Over the last decade, the use of interdisciplinary methods for empirical research, and with them the recourse to specialized software such as eye-trackers and, to a much lesser extent, biometric sensors that monitor and record facial expressions, as well as the activity of the brain (EEG) and the heart (ECG), have opened up new and exciting avenues for better understanding the perception and reception of audiovisual texts in translation and, ultimately, for improving the quality of the end product. These biometric and imaging methods, in combination with more traditional approaches such as questionnaires, interviews and computerized tests, have the potential to yield highly applied results that the industry can easily factor into its modus operandi. Innovation on this front has so far been primarily spearheaded by academic and commercial researchers working in the field of media accessibility, who are currently leading the way in user-based studies. As claimed by Di Giovanni (2016: 60), this may simply be ‘a natural tendency, as both the research and the practice of media access for the sensory impaired are deeply grounded in the knowledge and involvement of the end users’. Yet, the fact remains that this flurry of empirical experimentation taking place in SDH and AD sharply contrasts with the scarcity and narrow scope of the reception studies being conducted in the more traditional translation areas of dubbing and subtitling. The number of studies carried out so far is still very limited and they normally include a rather small number of participants.
Because of their complexity and onerous nature, large-scale empirical experiments aimed at evaluating the reception of dubbing or interlingual subtitles are thin on the ground, even though their outcomes could prove very fruitful and could feed straight back into professional practices and processes. Deciding on a subtitle presentation rate that would satisfy most if not all viewers is clearly too utopian a goal, but we can get slightly closer to it by testing and garnering information provided by the audience themselves in an empirically objective rather than purely subjective manner. By far, subtitling has been the activity privileged by most scholars when it comes to experimenting with eye-tracking technology. The paper by Kruger, Szarkowska and Krejtz (2015) provides a comprehensive overview of eye-tracking studies on subtitling and offers recommendations for future cognitive research in AVT. Likewise, the collective volume edited by Perego (2012) and the special issue of the journal Across Languages and Cultures (17:2, 2016) both provide a most informative account of some of the projects and
experiments being conducted on AVT reception, including accessibility. If eye-tracking has proved so alluring, it is because it allows researchers to explore the physiological and cognitive dimensions of subtitle reading and to examine participants’ reactions of which they themselves may not be aware. Some of the topics explored to date focus on the effect of linguistic variation on the reception of subtitles (Moran 2012); the impact that shot changes have on viewers’ reading behaviour, to test the unchallenged belief that they trigger the re-reading of subtitles (Krejtz, Szarkowska and Krejtz 2013); the role of poor line breaks in subtitle comprehension (Perego, Del Missier and Porta 2010); the influence of text editing (reduced vs. verbatim subtitles) and subtitle presentation rates on viewers’ comprehension and reading patterns (Szarkowska et al. 2016); and the response of viewers to badly synchronized subtitles (Lång et al. 2013). Filizzola (2016), for her part, adopts a twofold methodology that combines the use of a survey questionnaire with eye-tracking technology to discover whether the import of British stand-up comedy productions with subtitles could be successful in a dubbing country like Italy. Because of their applied nature and their emphasis on testing conventions that are part of the daily routine of AVT translators, the results yielded by many of these experiments can inject new knowledge into the profession and have the potential to effect actual change in the industry. Experimental approaches that rely on eye-trackers are also being used to probe the process of translation from the point of view of the primary agents, that is, the translators. A pioneering pilot study in this vein is the one conducted by Massey and Jud (2015), in which they explore the opportunities and challenges of supporting the product-oriented teaching of interlingual subtitling with screen recording and eye-tracking.
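As a simplified illustration of the kind of measure such eye-tracking studies report, the sketch below computes the share of gaze samples that land in the subtitle band of the screen. The coordinates and screen layout are invented, and real experiments rely on fixation detection and calibrated areas of interest rather than raw gaze samples:

```python
# Simplified sketch of one measure common in subtitle eye-tracking
# studies: the proportion of gaze samples falling inside the subtitle
# area of the screen. All gaze coordinates below are invented.

SUBTITLE_TOP = 900   # assumed top edge of the subtitle band on a 1080p frame

def dwell_proportion(gaze_samples, top=SUBTITLE_TOP):
    """Share of (x, y) gaze samples landing in the subtitle band."""
    in_band = sum(1 for _, y in gaze_samples if y >= top)
    return in_band / len(gaze_samples)

# Invented gaze trace: 6 of the 10 samples fall on the subtitle band.
trace = [(960, 540), (950, 560), (900, 950), (910, 960), (905, 955),
         (920, 965), (930, 940), (960, 980), (940, 520), (955, 530)]

print(f"{dwell_proportion(trace):.0%} of viewing time on subtitles")
```

Aggregated over participants and conditions, measures of this sort allow researchers to quantify how much visual attention subtitles absorb under different presentation rates or editing choices.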
More recently, Hvelplund (2018) has carried out an experiment focused on the process of translating for dubbing, during which professional translators and trainees were monitored while translating an excerpt of an animated television show. Finding out about the professionals’ and trainees’ distribution of attention and their cognitive effort during the translation activity can prove most valuable for updating and advancing educational training. Another area that has attracted scholarly attention in recent years is the activity of web-based communities of non-professional translators. Controversial practices like fansubbing (Díaz-Cintas and Muñoz-Sánchez 2006; Massidda 2015), crowdsubtitling and fandubbing (Wang and Zhang 2015) have been the subject of academic enquiry that tends to theorize them from a media studies perspective. Though, at first sight, such an approach may seem, because of its epistemic predisposition, to bear little fruit for the language industry, gaining a deeper
understanding of their dynamics and the influence that these activities exert on the savvy netizens that inhabit today’s digital society can provide arresting insights into the viewing habits of this youthful, growing sector of the population. In a mercurial mediascape where the volunteering of translations and the free distribution of user-generated material are ingredients of the staple diet of daily communication, companies operating in the media industry are always on the lookout for new working paradigms that could give them the edge over their competitors. Studies like the one carried out by Caffrey (2010) problematize the use of so-called headnotes, that is, additional explanatory notes that usually appear at the top of the screen and are anathema in professional practice but commonly used and appreciated by fansubbers. With the help of an eye-tracker, the author looks into the processing effort that the use of these pop-up glosses imposes on viewer perception and debates whether they could become a common feature in commercial subtitling. Also working with eye-tracking, Orrego-Carmona (2016) embarks on a study that explores the audience reception of subtitled TV series using professional and non-professional subtitling. In a move that is more questionable from an ethical viewpoint, the European Commission, through its MEDIA programme, launched two €1 million preparatory actions, in 20156 and 2017,7 to research how crowdsourcing and other innovative solutions could reduce the costs of obtaining subtitles and increase the circulation of European works.

5.  Concluding remarks

Any claimed hostility between academia and industry is a symptom of shortcomings in previous conceptualizations of the relevance of research to practice, deriving from a falsely dichotomous theory-versus-practice argument that can be difficult to justify in an area as applied as AVT. While there will always be studies of a more markedly abstract nature, investigations that ultimately inform the AVT profession and developments in the industry that are of interest to academics will continue to abound. This chapter attests to the healthy dynamics of existing and potential synergies between academia and the AVT industry. Though there is room for improvement, this is no small achievement, particularly when the relative youth of the scholarly discipline and the great strides that have been made in a very short time span are considered. From descriptive studies on the nature of the translated product and overviews of the professional environment and dynamics, to the more recent interest in the application of automation and
CAT tools to the practice of AVT, and on to the in-vogue investigations of audience reception, collaboration between academia and the industry has been gradually strengthened over the decades. In the early explorations, much progress was made in understanding the object of study. In more recent research, the focal interest has been steadily shifting towards learning how viewers behave when watching audiovisual productions in translation, which is in turn bringing academia and industry closer together in the common aim of understanding the impact of AVT on the audience. The repercussions of these efforts have been beneficial for academia, practitioners, industry and pedagogy, and are starting to be so for the audience, too. And although it is true that these and other studies have helped consolidate the field of AVT research by injecting a considerable dose of interdisciplinarity, the fact remains that there is still ample space for further testing and experimenting in a discipline as richly complex and vibrant as AVT.

Notes

1 See http://ooona.net
2 See http://www.undertext.se/plint
3 Learning via Subtitling, 2006–2008, see http://levis.cti.gr
4 Foreign Language Learning through Interactive Captioning and Revoicing of Clips, 2011–2014, see http://clipflair.net
5 2012–2014, see https://www.eu-bridge.eu
6 See https://tinyurl.com/y9gzfm9m
7 See https://tinyurl.com/yaxbscsm

References

Abdallah, K. (2011), ‘Quality Problems in AVT Production Networks: Reconstructing an Actor-network in the Subtitling Industry’, in A. Şerban, A. Matamala and J.-M. Lavaur (eds), Audiovisual Translation in Close-Up: Practical and Theoretical Approaches, 173–86, Bern: Peter Lang.
Baños, R. (2019), ‘Fandubbing across Time and Space: From Dubbing “by Fans for Fans” to Cyberdubbing’, in I. Ranzato and S. Zanotti (eds), Reassessing Dubbing: Historical Approaches and Current Trends, 145–67, Amsterdam: John Benjamins.
Baños-Piñero, R. and F. Chaume-Varela (2009), ‘Prefabricated Orality: A Challenge in Audiovisual Translation’, inTRAlinea, Special Issue: The Translation of Dialects in Multimedia. Available online: http://www.intralinea.org/specials/article/Prefabricated_Orality (accessed 13 March 2019).
Beseghi, M. (2017), Multilingual Films in Translation: A Sociolinguistic and Intercultural Approach, Oxford: Peter Lang.
Beuchert, K. (2017), ‘The Web of Subtitling: A Subtitling Process Model Based on a Mixed Methods Study of the Danish Subtitling Industry and the Subtitling Processes of Five Danish Subtitlers’, PhD diss., University of Aarhus, Aarhus.
Burchardt, A., A. Lommel, L. Bywood, K. Harris and M. Popović (2016), ‘Machine Translation Quality in an Audiovisual Context’, Target, 28 (2): 206–21.
Burton, J. (2009), ‘The Art and Craft of Opera Surtitling’, in J. Díaz-Cintas and G. Anderman (eds), Audiovisual Translation: Language Transfer on Screen, 58–70, Basingstoke: Palgrave Macmillan.
Caffrey, C. (2010), ‘Relevant Abuse? Investigating the Effects of an Abusive Subtitling Procedure on the Perception of TV Anime Using Eye Tracker and Questionnaire’, PhD diss., Dublin City University, Dublin.
Cary, E. (1960), ‘La traduction totale : cinéma’, Babel, 6 (3): 110–5.
Cerezo-Merchán, B. (2012), ‘La didáctica de la traducción audiovisual en España: un estudio de caso empírico-descriptivo’, PhD diss., Universitat Jaume I, Castellón.
Chaume-Varela, F. (2004), ‘Film Studies and Translation Studies: Two Disciplines at Stake in Audiovisual Translation’, Meta, 49 (1): 12–24.
Chaume-Varela, F. (2006), ‘Dubbing’, in K. Brown (ed.), Encyclopedia of Language and Linguistics, 2nd edn, 6–9, Amsterdam: Elsevier.
Chaume-Varela, F. (2007), ‘Quality Standards in Dubbing: A Proposal’, TradTerm, 13: 71–89.
Chaume-Varela, F. (2012), Audiovisual Translation: Dubbing, Manchester: St Jerome.
Chiaro, D. (2008), ‘Issues of Quality in Screen Translation: Problems and Solutions’, in D. Chiaro, C. Heiss and C. Bucaria (eds), Between Text and Image: Updating Research in Screen Translation, 241–56, Amsterdam: John Benjamins.
de Higes Andino, I. (2014), ‘Estudio descriptivo y comparativo de la traducción de filmes plurilingües: el caso del cine británico de migración y diáspora’, PhD diss., Universitat Jaume I, Castellón.
De Marco, M. (2012), Audiovisual Translation through a Gender Lens, Amsterdam: Rodopi.
De Rosa, G. L., F. Bianchi, A. De Laurentis and E. Perego, eds (2014), Translating Humour in Audiovisual Texts, Oxford: Peter Lang.
Di Giovanni, E. (2016), ‘Reception Studies in Audiovisual Translation Research. The Case of Subtitling at Film Festivals’, trans-kom, 9 (1): 58–78.
Di Giovanni, E. (2017), ‘New Imperialism in (Re)translation: Disney in the Arab World’, Perspectives, 25 (1): 4–17.
Díaz-Cintas, J., ed. (2008), The Didactics of Audiovisual Translation, Amsterdam: John Benjamins.
Díaz-Cintas, J. (2010), ‘Subtitling’, in Y. Gambier and L. van Doorslaer (eds), Handbook of Translation Studies. Volume 1, 344–9, Amsterdam: John Benjamins.
Díaz-Cintas, J., ed. (2012), ‘The Manipulation of Audiovisual Translation’, Special issue of Meta, 57 (2).
Díaz-Cintas, J. and P. Muñoz-Sánchez (2006), ‘Fansubs: Audiovisual Translation in an Amateur Environment’, Journal of Specialised Translation, 6: 37–52.
Díaz-Cintas, J. and P. Orero (2010), ‘Voiceover and Dubbing’, in Y. Gambier and L. van Doorslaer (eds), Handbook of Translation Studies. Volume 1, 441–5, Amsterdam: John Benjamins.
Díaz-Cintas, J. and A. Remael (2007), Audiovisual Translation: Subtitling, Manchester: St Jerome.
Díaz-Cintas, J., I. Parini and I. Ranzato, eds (2016), ‘Ideological Manipulation in Audiovisual Translation’, Special issue of Altre Modernità. Available online: https://riviste.unimi.it/index.php/AMonline/issue/view/888
Ehrensberger-Dow, M. and S. O’Brien (2015), ‘Ergonomics of the Translation Workplace: Potential for Cognitive Friction’, Translation Spaces, 4 (1): 98–118.
Ellender, C. (2015), Dealing with Difference in Audiovisual Translation: Subtitling Linguistic Variation in Films, Oxford: Peter Lang.
Filizzola, T. (2016), ‘Italians’ Perception and Reception of British Stand-Up Comedy Humour with Interlingual Subtitles. A Qualitative and Quantitative Study on Eddie Izzard’s Shows’, PhD diss., University College London, London.
Flanagan, M. (2006), ‘Recycling Texts: Human Evaluation of Example-Based Machine Translation Subtitles for DVD’, PhD diss., Dublin City University, Dublin.
Franco, E., A. Matamala and P. Orero (2010), Voice-over Translation: An Overview, Bern: Peter Lang.
Gambier, Y., A. Caimi and C. Mariotti, eds (2015), Subtitles and Language Learning: Principles, Strategies and Practical Experiences, Bern: Peter Lang.
Georgakopoulou, P. and L. Bywood (2014), ‘MT in Subtitling and the Rising Profile of the Post-editor’, Multilingual, 25 (1): 24–8.
Gottlieb, H. (1992), ‘Subtitling – A New University Discipline’, in C. Dollerup and A. Loddegaard (eds), Teaching Translation and Interpreting: Training, Talent and Experience, 161–9, Amsterdam: John Benjamins.
Guillot, M.-N. (2008), ‘Orality and Film Subtitling’, The Sign Language Translator and Interpreter, 2 (2): 127–47.
Han, C. and K. Wang (2014), ‘Subtitling Swearwords in Reality TV Series from English into Chinese: A Corpus-based Study of The Family’, The International Journal for Translation and Interpreting, 6 (2): 1–17.
Hesse-Quack, O. (1969), Der Übertragungsprozess bei der Synchronisation von Filmen. Eine interkulturelle Untersuchung, Munich: Reinhardt.
Hvelplund, K. T. (2018), ‘Eye Tracking and the Process of Dubbing Translation’, in J. Díaz-Cintas and K. Nikolić (eds), Fast-forwarding with Audiovisual Translation, 110–24, Bristol: Multilingual Matters.
Hyks, V. (2005), ‘Audio Description and Translation. Two Related but Different Skills’, Translating Today, 4: 6–8.
Incalcaterra Mcloughlin, L., M. Biscio and M. Á. Ní Mhainnín, eds (2011), Audiovisual Translation – Subtitles and Subtitling: Theory and Practice, Oxford: Peter Lang.
Ivarsson, J. and M. Carroll (1998), Subtitling, Simrishamn: TransEdit.
Karamitroglou, F. (1998), ‘A Proposed Set of Subtitling Standards in Europe’, Translation Journal, 2 (2).
Karamitroglou, F. (2000), Towards a Methodology for the Investigation of Norms in Audiovisual Translation: The Choice Between Subtitling and Revoicing in Greece, Amsterdam: Rodopi.
Kayahara, M. (2005), ‘The Digital Revolution: DVD Technology and the Possibilities for Audiovisual Translation Studies’, Journal of Specialised Translation, 3: 64–74.
Koolstra, C. M., A. L. Peeters and H. Spinhof (2002), ‘The Pros and Cons of Dubbing and Subtitling’, European Journal of Communication, 17 (3): 325–54.
Krejtz, I., A. Szarkowska and K. Krejtz (2013), ‘The Effects of Shot Changes on Eye Movements in Subtitling’, Journal of Eye Movement Research, 6 (5): 1–12.
Kruger, J.-L., A. Szarkowska and I. Krejtz (2015), ‘Subtitles on the Moving Image: An Overview of Eye Tracking Studies’, Refractory: A Journal of Entertainment Media, 25: 1–14.
Kuo, A. S.-Y. (2014), ‘Quality in Subtitling: Theory and Professional Reality’, PhD diss., Imperial College London, London.
Laks, S. (1957), Le Sous-titrage de films. Sa technique, son esthétique, Paris: Livre d’auteur. Available online: http://ataa.fr/revue/wp-content/uploads/2013/06/ET-HS01-complet.pdf (accessed 30 December 2018).
Lång, J., J. Mäkisalo, T. Gowases and S. Pietinen (2013), ‘Using Eye Tracking to Study the Effect of Badly Synchronized Subtitles on the Gaze Paths of Television Viewers’, New Voices in Translation Studies, 10: 72–86.
Luyken, G.-M., T. Herbst, J. Langham-Brown, H. Reid and H. Spinhof (1991), Overcoming Language Barriers in Television: Dubbing and Subtitling for the European Audience, Manchester: European Institute for the Media.
Martí-Ferriol, J. L. (2013), ‘El método de traducción: doblaje y subtitulación frente a frente’, PhD diss., Universitat Jaume I, Castellón.
Martínez-Sierra, J. J. (2008), Humor y traducción: Los Simpson cruzan la frontera, Castellón: Universitat Jaume I.
Massey, G. and M. Ehrensberger-Dow (2017), ‘The Ergonomics of Professional Translation under Pressure’, in 21st World Congress of the International Federation of Translators, Brisbane, Australia, 3–5 August. Available online: https://digitalcollection.zhaw.ch/handle/11475/3294 (accessed 30 December 2018).
Massey, G. and P. Jud (2015), ‘Teaching Audiovisual Translation with Products and Processes: Subtitling as a Case in Point’, in Ł. Bogucki and M. Deckert (eds), Accessing Audiovisual Translation, 99–116, Frankfurt am Main: Peter Lang.
Massidda, S. (2015), Audiovisual Translation in the Digital Age: The Italian Fansubbing Phenomenon, Basingstoke: Palgrave Macmillan.

Audiovisual Translation


Mereu Keating, C. (2016), The Politics of Dubbing: Film Censorship and State Intervention in the Translation of Foreign Cinema in Fascist Italy, Oxford: Peter Lang.
Moran, S. (2012), ‘The Effect of Linguistic Variation on Subtitle Reception’, in E. Perego (ed.), Eye Tracking in Audiovisual Translation, 183–222, Rome: Aracne.
Nikolić, K. (2015), ‘The Pros and Cons of Using Templates in Subtitling’, in R. Baños-Piñero and J. Díaz-Cintas (eds), Audiovisual Translation in a Global Context: Mapping an Ever-changing Landscape, 192–202, Basingstoke: Palgrave Macmillan.
O'Connell, E. M. T. (2003), Minority Language Dubbing for Children: Screen Translation from German to Irish, Oxford: Peter Lang.
O’Hagan, M. (2016), ‘Massively Open Translation: Unpacking the Relationship between Technology and Translation in the 21st Century’, International Journal of Communication, 10: 929–46.
Orrego-Carmona, D. (2016), ‘A Reception Study on Non-professional Subtitling: Do Audiences Notice Any Difference?’, Across Languages and Cultures, 17 (2): 163–81.
Pavesi, M. (2005), La traduzione filmica: aspetti del parlato doppiato dall'inglese all'italiano, Rome: Carocci.
Pedersen, J. (2011), Subtitling Norms for Television: An Exploration Focussing on Extralinguistic Cultural References, Amsterdam: John Benjamins.
Pedersen, J. (2017), ‘The FAR Model: Assessing Quality in Interlingual Subtitling’, Journal of Specialised Translation, 28: 210–29.
Perego, E., ed. (2012), Eye Tracking in Audiovisual Translation, Rome: Aracne.
Perego, E., F. Del Missier and M. Porta (2010), ‘The Cognitive Effectiveness of Subtitle Processing’, Media Psychology, 13 (3): 243–72.
Pettit, Z. (2004), ‘The Audio-visual Text: Subtitling and Dubbing Different Genres’, Meta, 49 (1): 25–38.
Pommier, C. (1988), Doublage et Postsynchronisation, Paris: Dujarric.
Ramière, N. (2007), ‘Strategies of Cultural Transfer in Subtitling and Dubbing’, PhD diss., University of Queensland, Brisbane.
Ramos Pinto, S. and Y. Gambier (2016), ‘Introduction’, Target, 28 (2): 185–91.
Ranzato, I. (2016), Translating Culture Specific References on Television: The Case of Dubbing, London: Routledge.
Roales-Ruiz, A. (2014), ‘Estudio crítico de los programas de subtitulación profesionales. Carencias en su aplicación para la didáctica. Propuesta de solución mediante conjunto de aplicaciones integradas’, PhD diss., University of Salamanca, Salamanca.
Romero-Fresco, P. (2011), Subtitling through Speech Recognition: Respeaking, Manchester: St. Jerome.
Romero-Fresco, P. (2013), ‘Accessible Filmmaking: Joining the Dots between Audiovisual Translation, Accessibility and Filmmaking’, Journal of Specialised Translation, 20: 201–23.
Schröter, T. (2005), ‘Shun the Pun, Rescue the Rhyme? The Dubbing and Subtitling of Language-Play in Film’, PhD diss., Karlstad University, Karlstad.

Szarkowska, A., I. Krejtz, O. Pilipczuk, Ł. Dutka and J.-L. Kruger (2016), ‘The Effects of Text Editing and Subtitle Presentation Rate on the Comprehension and Reading Patterns of Interlingual and Intralingual Subtitles among Deaf, Hard of Hearing and Hearing Viewers’, Across Languages and Cultures, 17 (2): 183–204.
Talaván-Zanón, N. (2013), La subtitulación en el aprendizaje de lenguas extranjeras, Barcelona: Octaedro.
Teixeira, C. and S. O’Brien (2017), ‘Investigating the Cognitive Ergonomic Aspects of Translation Tools in a Workplace Setting’, Translation Spaces, 6 (1): 79–103.
Tomaszkiewicz, T. (1993), Les Opérations linguistiques qui sous-tendent les processus de sous-titrage des films, Poznań: Adam Mickiewicz University.
Toury, G. (1995), Descriptive Translation Studies – and Beyond, Amsterdam: John Benjamins.
Tribbey, C. (2017), ‘Study: EMEA Content Localization Service Spending Hits $2 Billion’, Media & Entertainment Services Alliance, 27 June. Available online: www.mesalliance.org/2017/06/27/study-emea-content-localization-service-spending-hits-2-billion (accessed 30 December 2018).
Wang, D. and X. Zhang (2015), ‘The Cult of Dubbing and Beyond: Fandubbing in China’, in R. Antonini and C. Bucaria (eds), Non-professional Interpreting and Translation in the Media, 173–92, Frankfurt: Peter Lang.
Wilkinson-Jones, P. (2017), ‘Digital Economy Bill Will Require On-demand Programmes to Include Subtitles’, Cable.co.uk, 9 February. Available online: www.cable.co.uk/news/digital-economy-bill-will-require-on-demand-programmes-to-include-subtitles-700001735 (accessed 30 December 2018).
Williamson, L. (2016), ‘The Social Relevance of Research to Practice: A Study of the Impact of Academic Research on Professional Subtitling Practitioners in Europe’, PhD diss., Heriot-Watt University, Edinburgh.
Zabalbeascoa, P. (1996), ‘Translating Jokes for Dubbed Television Situation Comedies’, The Translator, 2 (2): 235–57.
Zabalbeascoa, P. (2001), ‘La traducción de textos audiovisuales y la investigación traductológica’, in F. Chaume-Varela and R. Agost-Canós (eds), La traducción en los medios audiovisuales, 49–56, Castellón: University Jaume I.
Zanotti, S. (2015), ‘Analysing Redubs: Motives, Agents and Audience Response’, in R. Baños-Piñero and J. Díaz-Cintas (eds), Audiovisual Translation in a Global Context: Mapping an Ever-changing Landscape, 110–39, Basingstoke: Palgrave Macmillan.

11

Audiovisual media accessibility
Anna Jankowska

1.  Introduction

As a society, we are very diverse and have different needs when it comes to accessibility. Even though ‘accessibility’ is most commonly associated with physical barriers that prevent people with physical impairments from entering buildings, the term refers to many different areas of life, one of which is access to audiovisual media. From the perspective of translation studies (TS), audiovisual media accessibility is seen as part of audiovisual translation (AVT). The two share the aim of making audiovisual products available to those persons who would not understand them without such aids (Díaz-Cintas 2005). AVT scholars have embraced accessibility as part of their research field, seeing some of the access services as new and challenging modes of translation (Braun 2008; Gambier 2003; Remael, Reviers and Vandekerckhove 2016). These new translation modes are perceived within the long-standing paradigm of three main types of translation: interlingual, intralingual and intersemiotic. While remaining within the scope of TS, this chapter approaches accessibility from a broader perspective that extends not only to translation as such but also to the entire process of providing audiovisual media access services.

1.1.  Disability and impairment

Over the last few years, the language we use when talking about disability has changed significantly. Not so long ago, words such as ‘handicapped’, ‘challenged’, ‘incapacitated’, ‘invalid’, ‘deficient’, ‘disabled’ and so on were commonly used. These are no longer part of disability etiquette. The negative or even offensive terms have been replaced by more neutral ones such as ‘impairment’ and ‘disability’. These two terms are often used interchangeably, but if we consider the findings of disability studies, we will soon discover that they are not synonyms. To understand what they mean and how they are used, we should first look at the different models of disability. The two most widely known are the medical and the social model. The medical model sees disability as a result of a physical or mental impairment, while the social model perceives disability as created by the environment (Wasserman et al. 2016). For example, according to the medical model, a person with a visual impairment is perceived as disabled. In the social model the same person is considered to have a visual impairment that does not necessarily lead to disability if the environment is made accessible (e.g. through the provision of audio description). Some people are sceptical about reintroducing the term ‘impaired’ into the language to refer to disability since they consider it a step back to the outdated medical model. According to others, however, ‘impairment’ is not synonymous with ‘being broken’, but reflects the diversity of the world (Ellis 2016).

The remaining part of this introduction will present an extensive but necessary overview of existing access services. It will discuss audio description, audio introduction, audio subtitles, subtitling for the deaf and hard of hearing, enhanced subtitles, sign language interpreting as well as transcripts, clean audio and slow reproduction. It will then consider the broader issue of access to medium and environments.

1.2.  Audiovisual media access services

Just as we have different needs when it comes to entering a building with a staircase, we have different ones regarding audiovisual media. Audiovisual media access services cater to these specific needs. For the purpose of this chapter, audiovisual media access services are classified according to the following three categories: content, medium and environment. All of the access services described below are discussed only in the context of visual and audiovisual content available in cinemas, television and online, but it should be noted that most of them are also used in other contexts, for instance, live performances, sports, museums and so forth.

1.2.1.  Access to content

Access to content can be provided through two types of audiovisual media access services: content-based and technology-based. Content-based audiovisual media access services are those that consist of creating new content through intralingual, interlingual or intersemiotic translation. They include audio description, extended audio description, audio introduction, audio subtitles, subtitles for the deaf and hard of hearing, enhanced subtitles, sign language interpreting and transcripts. Technology-based audiovisual media access services are those that provide access by digitally processing existing products. To date, these services include clean audio and slow reproduction. All of these audiovisual media access services are presented in the following sections.

1.2.2.  Audio description

Audio description (AD) is a verbal description of relevant visual and auditory content in visual and audiovisual media. AD is usually inserted between elements of the soundtrack of the audiovisual media (e.g. dialogues and sound effects). AD for audiovisual media is usually pre-recorded, but it can sometimes be semi-live or live (Chmiel and Mazur 2014; Jankowska 2015). In semi-live AD, the script is prepared beforehand, but it is voiced live and some changes might occur to adapt the script to what is actually happening. Live AD – for example, for sports events – is prepared like simultaneous interpreting, being both created and delivered on the spot. AD can be voiced either by a human or by a synthetic voice (HBB4ALL 2016a; Szarkowska 2011; Szarkowska and Jankowska 2012; Walczak 2017b). For foreign-language productions that are not dubbed or voiced-over, AD is usually combined with audio subtitles.

The principal audiences for AD are persons with vision loss. Secondary audiences include not only people with intellectual impairment, for whom AD can provide information necessary to understand and follow the plot (Jankowska, forthcoming), but also persons with autism spectrum disorder (ASD), for whom AD provides additional information about emotions and social cues (Edelberg 2018). Another possible group of secondary AD users are foreign-language learners, for whom AD can become a tool that helps them experience and improve a new language (Ghia 2012; Ibáñez Moreno and Vermeulen 2013; Sadowska 2015; Walczak 2016). AD has also proved useful in guiding sighted viewers' attention (Krejtz et al. 2012). It is also claimed that AD can be used by sighted users who for some reason do not wish to, or cannot, follow the visual layer of audiovisual material – in their case AD becomes an alternative to an audiobook (Jankowska 2008).

1.2.3.  Extended audio description

Extended audio description (EAD) is a special type of AD. The main difference between AD and EAD is that in the case of EAD the visual or audiovisual media is paused (by freeze-frame). This is done to introduce a more detailed, and thus longer, description than would normally be possible (Brewer et al. 2015; Jankowska 2015). Playback is resumed after the description has finished playing (Brewer et al. 2015). EAD is usually used for visual and audiovisual media available online, and in the future it could be used on online on-demand platforms. EAD's principal audiences are persons with vision loss. Some sources suggest that secondary audiences can include persons with cognitive disabilities such as Asperger's Syndrome and other forms of ASD, for whom EAD can provide the information necessary to ‘make connections between cause and effect, point out what is important to look at, or explain moods that might otherwise be missed’ (Brewer et al. 2015).

1.2.4.  Audio introductions

Audio introductions (AI) are short (usually 3–5 minutes) pieces of prose, usually recorded by a single narrator (Fryer 2016). Their aim is to provide visual and factual information about the visual or audiovisual material and to draw a framework for its comprehension and enjoyment. An AI usually includes fuller information about the characters, costumes, cast, plot and cinematic language than is possible during the material's running time (Fryer and Romero-Fresco 2014). The principal audiences of AI are persons with vision loss. Secondary audiences are viewers without vision loss, for whom AI might become a useful guide towards a fuller understanding of visual or audiovisual material.

1.2.5.  Audio subtitles

Audio subtitles (AST) or spoken subtitles (SST) are a vocal rendering of written subtitles. AST are used to make foreign-language productions that are not dubbed or voiced-over accessible to individuals with vision loss (Remael 2014), and they are usually combined with AD. The subtitles are read as they are, but if copyright allows it, they might be modified to resemble spoken language more closely and to include additional information (Remael 2014). AST can be delivered by either a human or a synthetic voice (European Broadcasting Union 2004; HBB4ALL 2016b). AST for audiovisual media are usually pre-recorded, but they are sometimes performed live by human voice talents (Jankowska 2015) or generated live using speech synthesis. There are two ways of delivering AST: the voice-over mode and the dubbing mode (HBB4ALL 2016b; ISO 2017; Iturregui-Gallardo et al. 2017; Szarkowska and Jankowska 2015). In the voice-over mode, delivery is flatter, while the dubbing mode involves interpreting and acting (Iturregui-Gallardo et al. 2017).
In the voice-over mode, the audience is able to hear the original dialogue, while in the dubbing mode the original voices are not heard (Braun and Orero 2010). Whatever the choice of voice or mode, the audience should be able to clearly differentiate AD from AST. This can be achieved through three different strategies: (a) changing prosody when only one voice is available; (b) using at least two different voices – one male and one female (ISO 2017); or (c) including the names of the characters in the AD script right before they speak, especially in the exposition phase of the film, to help viewers learn the voices of the main characters, and in scenes with many speakers (Szarkowska and Jankowska 2015). The primary audiences for AST are persons with vision loss. Secondary audiences are senior citizens, persons with dyslexia or intellectual impairment, those with multiple sclerosis, or anyone who does not wish to, or cannot, read subtitles (Jankowska, forthcoming).

1.2.6.  Subtitles for the deaf and hard of hearing

Subtitles for the deaf and hard of hearing (SDH) (or closed captions [CC], as they are known in the United States) are a translation of the audio-verbal and audio-nonverbal content of an audiovisual production. They can be both intralingual and interlingual. Apart from the verbal content, SDH also include non-verbal features such as sound effects, music, paralinguistic information and character identification. The format of these features varies not only from country to country but also from broadcaster to broadcaster. For instance, while some broadcasters identify characters by different subtitle colours, others use labels or name tags. Both interlingual and intralingual SDH are usually positioned at the bottom of the screen. They can be displaced both vertically (up and down) to avoid obscuring important information (e.g. on-screen text or characters' mouths) and horizontally (to the right and left) to indicate the direction of off-screen sounds and to identify in-vision speakers (BBC 2017; HBB4ALL 2016e).

SDH can be pre-recorded, semi-live or live. Pre-recorded SDH are prepared beforehand by subtitlers using subtitling software. Semi-live SDH are used for live programmes which are heavily scripted and have pre-recorded inserts, archive footage or items from previous bulletins (European Broadcasting Union 2004: 10). In that case subtitlers ‘prepare a list of subtitles, without time-codes, and during transmission cue these manually in sync with the programme’ (European Broadcasting Union 2004: 10). Live SDH (or real-time captioning in the United States) are used to make live television broadcasts accessible. Just like standard subtitles, they can be presented at the bottom of the screen as blocked text or they can be row-scrolled (Lambourne 2017). There are three ways of providing
live SDH: respeaking, stenography or stenotyping. Respeaking is currently the most common method for delivering live SDH. In this process, a trained professional respeaker ‘listens to the original sound of a (live) program or event and respeaks it, including punctuation marks and some specific features for the deaf and hard of hearing audience, to a speech-recognition software, which turns the recognized utterances into subtitles displayed on the screen’ (Romero-Fresco 2011: 1). The respeaking process usually involves two professionals: one to speak and one to quickly correct the subtitles before broadcast. However, for some languages (e.g. English) speech recognition accuracy reaches 98 per cent, so the corrector is sometimes omitted (Lambourne 2006). Respeaking can produce SDH at approximately 140–160 words per minute (Lambourne 2006).

Stenography is machine-written shorthand and is the second most popular way of providing live SDH. It is based on phonetics, and the shorthand produced by a stenographer is translated into subtitles by a computer. This method is especially popular in the United States and, to a lesser extent, in Canada, Australia and the UK (Lambourne 2006; Romero-Fresco and Pöchhacker 2017). Stenography can provide SDH at approximately 200 words per minute. Stenotyping is the least popular method. It consists of fast typing and can be performed with two basic input methods and devices: the QWERTY keyboard and Velotype. In the first, standard QWERTY keyboards with abbreviation codes that expand automatically are used by one or two operators (dual QWERTY). With two operators working alongside each other, dual QWERTY can provide SDH at approximately 120–150 words per minute (Lambourne 2006). Velotype is a special keyboard that allows several keys to be pressed simultaneously to create syllables and words rather than typing character by character. It can have one or two operators working together and can produce SDH at 140–180 words per minute (Lambourne 2006).
The primary audience for SDH is individuals with hearing loss. Secondary audiences are foreign-language learners, including immigrants, and all those without hearing loss who do not wish to, or cannot, listen to the original soundtrack (e.g. at the airport or in a hospital). In the case of live interlingual subtitles, the audience may also comprise those who do not know the original language spoken.

1.2.7.  Enhanced subtitles

Enhanced subtitles (ES) are subtitles that are enriched with additional information, such as definitions for acronyms, foreign terms, difficult language, idioms, jargon, cultural references and so on, as well as links to email addresses
or phone numbers (Brewer et al. 2015). Enhanced features can be introduced within the normal display time, as a call-out or overlay, or as a hyperlink that pauses the main content and redirects the viewer to explanatory material (Brewer et al. 2015). ES are primarily intended for people with restricted reading capabilities, a group that can also include children and foreign-language speakers.

1.2.8.  Sign language interpreting

Sign language interpreting (SLI) is used to render the spoken and written word, as well as any relevant audio information, into sign language (SL) (HBB4ALL 2016). SLI is a service created for audiences with hearing loss, be they Deaf,1 deaf or hard of hearing. SLI is not an alternative to SDH. For many deaf and hard-of-hearing persons, and especially for the Deaf community, SL is very often their first language and the only one they are fluent in. This means that some of them are unable to follow SDH comfortably. SLI is most commonly performed by professionals with or without hearing loss, but there are some attempts at introducing SLI avatars (AbilityNet 2018; European Broadcasting Union 2004; HBB4ALL 2016).

There are a number of ways in which SL can be presented: (a) on the main screen (in the case of sign-presented and sign-interpreted programmes), (b) embedded in the video using picture-in-picture technology (the signer is added in a box, circle or egg) or (c) embedded in the video using ChromaKey technology (the signer appears in front of the video and blends with the video background) (European Broadcasting Union 2004; HBB4ALL 2016; Signing Books 1999). All of these methods result in part of the original image being covered. Currently, SL is provided almost exclusively as open signing, which means that signers are displayed as part of the image and it is impossible to turn them off. Only very recently has the Spanish public broadcaster RTVE started to offer closed SLI on smart TVs for some of its programmes. Viewers who would like to turn SLI on are asked to press a special button on the remote. This opens a dedicated smart-TV application in which the currently broadcast programme is streamed online with SLI.

1.2.9.  Transcripts

A transcript is a textual version of media used to make online content accessible. As such, it includes spoken audio, on-screen text, enhanced text, key visual elements and key audio elements (e.g. sounds). It must, however, be noted that transcripts should not be considered a replacement for SDH, since SDH are preferred by most deaf and hard-of-hearing users. Transcripts can be
presented simultaneously with the media material (e.g. in a separate window where the transcript scrolls synchronously with the material), but they must also be made available independently of the media (Brewer et al. 2015). The primary audiences of transcripts are people with hearing loss. Secondary audiences are not only users with cognitive or reading impairments, persons with vision loss and slow readers, but also people without impairments who do not wish to, or cannot, listen to the audio, for instance in a hospital, at the airport or due to low internet speed (Brewer et al. 2015; University of Washington n.d.).

1.2.10.  Clean audio

Clean audio mix (CA) consists of enhancing the audio channel through signal processing in order to improve the intelligibility of the spoken dialogue and important non-speech information with respect to ambient noise (Brewer et al. 2015; HBB4ALL 2016c; Shirley 2013; Shirley and Kendrick 2006). CA is made available as an alternative (selectable) audio track and is primarily aimed at persons with hearing loss, including the elderly.

1.2.11.  Slow reproduction

Slow reproduction (SR), also known as speech rate conversion, consists of digital signal processing that decreases the pace of speech while maintaining the sound quality, personal voice features and overall speech time of the original show (Takagi 2010; Wasoku n.d.; Watanabe 1996). SR caters principally to the needs of the elderly and is used mostly in television news broadcasts.

1.3.  Access to medium and environments

Given the scope of this chapter, which focuses on the TS perspective, media accessibility has been presented from the angle of services that provide access to content. However, it should be noted that audiovisual media accessibility is a more complex phenomenon and that providing accessibility to visual and audiovisual materials requires a holistic approach. Accessible content is only one of its elements, the other two being access to medium and access to environments. Access to medium is achieved through the provision of appropriate technology and by ensuring that this technology is barrier-free. Access to environments means providing barrier-less settings, for example by assisting viewers with vision loss to reach an event venue or by offering ‘sensorial-friendly’ cinema screenings for audiences with autism and Asperger's Syndrome. During such screenings, the lights are dimmed but not
completely turned off, the sound of the film is turned down and the audience is free to comment loudly, sing or dance (PEFRON 2018; Roxby and van Brugen 2011). Again, these services are also used by secondary audiences. To give just one example, sensorial-friendly screenings are also aimed at parents with infants.

Having introduced key aspects of the broad field of media accessibility, we will now turn to research focal points in the second section of the chapter, to a discussion of how the industry influences research in the third and to a consideration of how research influences the industry in the fourth. Finally, conclusions will be drawn to support the case for a new discipline, accessibility studies.

2.  Research focal points

It is very difficult to summarize, categorize or consistently describe research on audiovisual media accessibility carried out within AVT. There are at least two reasons for this. Firstly, research in this area is a constant work in progress, which means that it is scattered and that its methodology is still under construction. But perhaps most importantly, it is nearly impossible to isolate media accessibility research carried out exclusively within AVT. While it is true that much, if not most, research on media accessibility is currently carried out within AVT, or with some connection to AVT, the field is becoming more heterogeneous as it matures.

Audiovisual media accessibility is a complex concept that involves ‘theories, practices, services, technologies and instruments’ (Greco 2016: 23). To cover these five areas, AVT scholars have opened up to cross- and multidisciplinary research.2 AVT scholars who engage in cross-disciplinary research on audiovisual media accessibility traverse the current boundaries of TS and venture into other disciplines. Within the framework of multidisciplinary research, AVT scholars join forces with researchers and practitioners from other fields. For one thing, this means that new perspectives, points of view and methodologies are brought into the field. What is more, audiovisual media accessibility research is also carried out outside AVT and even TS (see below for more details). In view of the above, and given the scope of this chapter and volume, what follows is an overview of the research into audiovisual media accessibility conducted within the framework of AVT, written by AVT scholars and/or published in AVT/TS
journals. However, this is neither a definitive nor an exhaustive overview of audiovisual media accessibility research.

For now, the research on audiovisual media accessibility carried out within AVT concentrates mainly, although not exclusively, on those access services that have a more obvious link to translation, that is to say on access to content (see Section 2.2). Audio description and pre-recorded subtitling for the deaf and hard of hearing are at the heart of AVT research. AI and live subtitling are less well researched, while extended audio description, enhanced subtitles, transcripts and sign language interpreting have so far been situated on the peripheries of AVT research.

Audiovisual media accessibility research carried out within AVT can be divided into four main research avenues. The first of them is audiovisual media accessibility history. The second is the mapping of the current situation, that is to say accessibility legislation and provision. The third, and the most classic, is the textual and multimodal analysis of content-based audiovisual media accessibility services. Experimental studies – an important and recent development in audiovisual media accessibility research – make up the fourth avenue. This new approach has been implemented, in particular, to test users' reactions to different linguistic and technical criteria through reception studies, as well as to investigate the cognitive processes behind the creation of audiovisual media accessibility. Below is a brief description of the research on audio description, audio subtitling and audio introductions, as well as on pre-recorded and live subtitles for the deaf and hard of hearing.

2.1.  Audio description, audio introductions and audio subtitling

The research into AD started with attempts at defining this new field of study and placing it within the boundaries of AVT (e.g. Braun 2008; Díaz-Cintas 2005). Once AD was established within the scope of AVT, various research avenues opened up. Researchers investigate AD history (e.g. Orero 2007; Jankowska 2008; Chmiel and Mazur 2011) and map the current situation in terms of AD legislation, provision and training (e.g. Utray, Pereira and Orero 2009; Reviers 2016; Jankowska and Walczak, forthcoming). An important focal point in AD research is the analysis of its linguistic, grammatical and narratological features (e.g. Salway 2007; Hurtado and Gallego 2013; Reviers 2018). However, most of the research has been carried out with the aim of answering the question of what is/should be described and how it is/should be done (Vercauteren 2007). Some of
the aspects that researchers have been focusing on are the description of characters, facial expressions, emotions, gestures, lighting, music, places, text on screen, cultural references and so forth, which extends even to brand names (Matamala and Rami 2009; Matamala and Orero 2011; Igareda 2011, 2012; Maszerowska 2013; Vercauteren and Orero 2013; Matamala 2014; Szarkowska and Jankowska 2015; Jankowska and Zabrocka 2016). The leitmotif of this research is the so-called ‘objectivity paradigm’, which has caused much controversy among both practitioners and researchers (Kruger 2010; Mälzer-Semlinger 2012; Chmiel and Mazur 2014). As a result of this now concluded debate, alternative approaches to audio description have emerged. One of them is the strategy-based concept proposed by the ADLAB project (Maszerowska, Matamala and Orero 2014; Remael, Reviers and Vercauteren 2014). Others include audio description that covers film language (Fryer and Freeman 2013), auteur description (Szarkowska and Wasylczyk 2014) and creative audio description (Walczak and Fryer 2017).

As already mentioned, researchers concentrate mainly on the textual aspect of AD; however, the way it is voiced is also beginning to attract some interest. Scholars have been looking into different voicing strategies, such as synthetic voices (e.g. Szarkowska 2011; Fernández-Torné and Matamala 2015), and more recently also reading speed and intonation (Cabeza-Cáceres 2013; Jankowska et al. 2017). From the outset of AD research, audience reception studies have been an important, if not dominant, area. Numerous studies have tested both adult (Di Giovanni 2018) and non-adult (Jankowska 2015; Zabrocka 2018) audiences. AD reception research initially looked into preferences (Mazur and Chmiel 2012; Szarkowska and Jankowska 2012) and comprehension (Chmiel and Mazur 2016; Vilaro and Orero 2013).
Recently, the focus has shifted towards emotional engagement (Fryer and Freeman 2014; Ramos 2015; Walczak 2017b). Research into audio introductions for media accessibility is still in its infancy, although it seems to be following a pattern similar to that of other media accessibility studies: features are described, texts are analysed and audience reception studies are carried out (Di Giovanni 2014; Masłowska 2014; Romero-Fresco and Fryer 2013). To date, audio subtitling has attracted little scientific interest. The studies carried out so far have looked into defining and describing this emerging modality, as well as into achieving cohesion when merging it with audio description (Benecke 2012; Braun and Orero 2010; Jankowska, Szarkowska and Mentel 2015; Remael 2012; Reviers and Remael 2015). New studies are looking into different AST voicing strategies (Iturregui-Gallardo et al. 2017).


The Bloomsbury Companion to Language Industry Studies

2.2.  Subtitling for the deaf and hard of hearing and live subtitling
As with research into AD, an important part of SDH research has looked into the history and current situation regarding both SDH provision and legislation (Pereira Rodríguez 2005; Neves and Lorenzo 2007; Utray Delgado, Pereira Rodríguez and Orero 2009; Szarkowska 2010; Muller 2012; Morettini 2014; Mliczak 2015). A substantial body of research into SDH concentrates on analysing pre-prepared subtitles. Researchers look into parameters such as placement, alignment, character identification, subtitling speed and style, sound and paralinguistic information, use of language, accuracy and strategies for dealing with foreign languages (Arnáiz-Uzquiza 2012; Carrero Leal and Souto Rico 2008; McIntyre and Lugea 2015; Tercedor Sánchez et al. 2007; Zárate Soledad 2008). An important theme in SDH research is the issue of whether subtitles should be verbatim or edited (Neves 2007, 2008; Szarkowska et al. 2011; Szarkowska, Pietrulewicz and Jankowska 2015). Reception studies have tested audience needs, especially with regard to the different technical and linguistic criteria applied. Some of the parameters tested include the use of colour, font, size, position and presentation speed (Bartoll and Martínez Tejerina 2010; Gottlieb 2015; Morettini 2012; Muller 2015; Romero-Fresco 2015; Szarkowska et al. 2016; Zárate 2010). Research into live subtitling produced through respeaking is currently gaining momentum within SDH research. The main areas of interest are the analysis of respoken subtitles (Eugeni 2009; Romero-Fresco 2009), their quality assessment (Jekat 2014; Robert and Remael 2017; Romero-Fresco 2016; Romero-Fresco and Pöchhacker 2017) and their reception by users (Romero-Fresco 2012). Other areas that attract interest are the respeaking process (Waes, Leijten and Remael 2013) and respeakers’ training and competences (Arumí Ribas and Romero-Fresco 2008; Chmiel, Szarkowska et al. 2017; Chmiel, Lijewska et al.
2017; Remael and van der Veer 2006). The ‘verbatim vs. edited’ debate is also taken up in respeaking (Romero-Fresco 2009), with most research having been carried out on intralingual respeaking. Research into interlingual respeaking is still in its infancy, but it is gaining momentum (e.g. Szarkowska et al. 2016; Robert and Remael 2017; Romero-Fresco and Pöchhacker 2017).

3.  Informing research through the industry
The industry is aiming at mainstreaming accessibility. The challenge lies in providing both quantity and quality and, as explained in detail below, this gives


rise to considerable research. The influence of the industry is changing the AVT research panorama. Early accessibility research carried out within AVT focused on mapping the current situation and arguing for accessibility to be included within AVT. However, very soon the need to bridge the Maker-User-Gap (Greco 2018) resulted in a shift towards action research, and more precisely towards experimental and reception studies, which until then had been scarce in AVT. As a consequence, both the discipline and the profile of the researchers are changing, since new approaches require new means and methods. Audiovisual media accessibility research has begun to employ empirical methods such as questionnaire surveys and individual or group interviews, alongside tools that include eye tracking (Di Giovanni 2013; Szarkowska et al. 2016; Vilaro and Orero 2013), electroencephalography (Szarkowska et al. 2016), tracking of electrodermal activity (Iturregui-Gallardo et al. 2017) and heart-rate measurement (Rojo et al. 2014). What is more, statistical methods have been introduced to analyse the collected data. This has pressured AVT scholars engaged in research into audiovisual media accessibility, on the one hand, to venture into cross-disciplinary research and acquire new skills and competencies and, on the other, to form multidisciplinary research teams. Thanks to this, new research avenues are opening up – the latest being cognitive research into the accessibility creation process (Chmiel et al. 2017; Jankowska, Milc and Fryer 2017). However, the most important change is perhaps yet to come. ‘Access services’ is a term that is rarely used in AVT research; we prefer ‘accessibility’ – a term that is both broad and immaterial. Yet the industry does not provide users with elusive accessibility but with very tangible access services – comprehensive services that include editing and managing, cataloguing and tagging, transmitting and inserting.
In a real-life scenario, the provision of access services requires the cooperation of experts from various fields: audio describers, translators, respeakers, subtitlers, voice talents, audio and video engineers, information technology (IT) specialists, managers, lobbyists and lawmakers, among others. This complexity is only partially reflected in research. Media accessibility has also been researched outside the scope of AVT: other research perspectives come from the fields of engineering (Ando et al. 2000; Lopez, Kearney and Hofstädter 2016; Shirley and Kendrick 2006), experimental and cognitive psychology (Holsanova 2015), sociology (Schmeidler and Kirchner 2001), deaf studies (Jenesma and Ramsey 1996) and philosophy (Greco 2016), to name just a few. Unfortunately, researchers from different fields seem to pursue parallel tracks and rarely communicate with one another. For the most part, researchers dissect


a very small area of this complex issue and analyse it in isolation. We create an artificial, lab-like environment removed from the real-life context, and, as a result, the effects of our research might not be implementable. One of the reasons is that different disciplines have different methodologies and approaches; disparate systems of researcher evaluation and disparate publication policies are another. To give an example, policies adopted in the humanities very often penalize publications written by multiple authors. Nevertheless, researchers and practitioners involved in audiovisual media accessibility seem to be becoming more and more aware of this artificial demarcation between disciplines and the limitations it entails. They are also beginning to notice that media accessibility is a complex concept and that conducting quality research – research producing results that can be implemented by the industry – requires transcending the boundaries of traditional scientific disciplines and venturing not only into interdisciplinary but also into cross-disciplinary research. With all the above in mind, some researchers propose looking for a ‘bigger house’ (Romero-Fresco 2018) for both AVT and media accessibility and suggest that they could be placed under the larger umbrella of accessibility studies (Greco 2018) – an autonomous discipline focused on investigating accessibility.

4.  Informing the industry through research
The introduction of supranational3 and national4 media accessibility legislation has triggered a collateral effect between academia and the industry. Accessibility is in demand in terms of both quantity and quality, and academia and the industry are searching for viable ways of providing both. Research into audiovisual media accessibility carried out within AVT has always been close to professional practice: much of it was and is carried out with the purpose of finding practical solutions to practical problems. This can be explained by the fact that researchers have found themselves in a unique position, witnessing the birth of a new professional discipline that neither knew the needs of its target audiences nor had the solutions to cater to those needs. Much of the audiovisual media accessibility research carried out within AVT up to now has focused on improving the quality of accessible audiovisual material and creating guidelines that could be included in training, professional practice and legislation. Research has aimed at addressing the


so-called Maker-User-Gap (Greco 2018) through validation and the possible improvement of existing services, as well as by systematizing practices and basing them on evidence rather than on intuition. This has been achieved, on the one hand, by textual and/or multimodal analysis and, on the other, by audience reception studies. It is important to highlight that the results of this research are not only suggestions that conclude scientific publications; they have also been put forward as guidelines (HBB4ALL 2016d; Remael et al. 2014) and standards (ISO n.d., 2015, 2017) for the industry. Audiovisual media accessibility research carried out within AVT is not limited to assessing and testing services that have already been implemented. An important part of it was, is and will continue to be the invention and testing of new solutions that aim at improving both the quality and quantity of media access services. Examples include studies on implementing audio introductions (Romero-Fresco and Fryer 2013), on introducing new workflows such as voicing audio description through text-to-speech (Fernández-Torné and Matamala 2015; Szarkowska 2011; Szarkowska and Jankowska 2012), and on creating scripts through interlingual human (Jankowska 2015; Jankowska, Milc and Fryer 2017; Remael and Vercauteren 2010) or machine translation (Fernández Torné 2016; Fernández-Torné and Matamala 2016; Matamala and Ortiz-Boix 2016). New technological solutions, such as mobile applications for reproducing alternative audio tracks in cinemas (Walczak 2017a) and AD/AST creation software (Jankowska et al. 2017), are other excellent examples. As already mentioned, studies on audiovisual media accessibility carried out within AVT have always been close to professional practice, and it is possible that, in the future, this cooperation will become even closer.
Some AVT scholars continue to approach accessibility from the perspective of their own field of research and expertise and confine their analysis to textual content development and editing. However, others engage in research that transcends the boundaries of translation per se. Some researchers become involved in larger projects that bring together multidisciplinary research teams and the industry (e.g. HBB4ALL, AudioMovie, ImAc or EasyTV) and create solutions which meet actual needs and can be implemented in practice.

5.  Concluding remarks
Audiovisual media accessibility is a dynamic field that is developing rapidly both in theory and in practice. The introduction to this chapter has presented the notion


of disability and impairment and discussed the currently available services that provide access to content, medium and environment for persons with different access needs. An attempt was made to show how access services created with one particular group of users in mind can be beneficial for various secondary audiences. Accessibility is often perceived as an exclusive service for a very limited group of users; a change in this perception could lead to the mainstreaming of access services. This chapter has discussed audiovisual media accessibility as seen and researched from the perspective of AVT and TS. At the same time, it has aimed to present a wider perspective on accessibility in both research and practice by presenting and analysing the links established between academia and the industry. On the one hand, the industry is inspiring research trends and changing the way research is done: we have experienced an interesting evolution of methodology from descriptive studies to experimental research. On the other hand, research results are fed back to the industry to fine-tune its services and understand the gap between user expectations and technology. This is something that, from the perspective of our field of study, is wholly exceptional. We might even say that we are witnessing the birth of a phenomenon we may refer to as pragmatic humanities – an approach to humanities research that aims at researching and creating practical solutions. Academia and industry are often seen as standing on opposite sides of a barricade. It seems that, in the case of audiovisual media accessibility, we are actually dealing with communicating vessels that are difficult to separate. However, as both a researcher and a practitioner, I sometimes feel that both parties are trying to reinvent the wheel instead of drawing on each other’s expertise or working together towards a common goal.
What becomes obvious is that approaching audiovisual media accessibility from just one field of expertise – be it audiovisual translation or engineering – is not enough to carry out quality research and provide quality access services. Both research and practice should take into consideration the numerous intertwining aspects of media accessibility. Perhaps the best way forward is to create not only multidisciplinary teams but even a new interdisciplinary or transdisciplinary field of accessibility studies.

Acknowledgements
This work has been supported by the research grant Mobility Plus ‘ECR transfer in audio description’, no. 1311/MOB/IV/2015/0, of the Polish Ministry of Science and Higher Education for the years 2016–19.


Notes
1 A distinction is made between Deaf and deaf people. ‘Deaf’ with a capitalized ‘D’ is used to describe members of the Deaf Community – a linguistically unique community whose primary or preferred language of communication is sign language. By contrast, ‘deaf’ refers to people with various degrees of hearing loss.
2 Based on Stember (1991), the following definitions have been adopted in this chapter. Crossdisciplinarity refers to viewing one discipline from the perspective of others. Multidisciplinarity involves two or more disciplines, each drawing from their disciplinary knowledge. Interdisciplinarity integrates the knowledge and methods of several disciplines in a synthesized approach. Transdisciplinarity creates a unity of intellectual frameworks beyond the disciplinary perspectives.
3 The Convention on the Rights of Persons with Disabilities (United Nations 2006); Directive 2010/13/EU of the European Parliament and of the Council of 10 March 2010 on the coordination of certain provisions laid down by law, regulation or administrative action in member states concerning the provision of audiovisual media services (European Parliament and Council 2010).
4 Various national legislative instruments have been introduced all over the world, for example, in Canada (Government of Canada 2005), Brazil (Government of Brazil 2015), Germany (Bundesministerium der Justiz und für Verbraucherschutz 2011), Poland (Polish Government 2011), Spain (Government of Spain 2010), the United Kingdom (Government of the UK 2009), the United States (Government of the United States 2010) and others. For more information on national legislation, see http://www.mapaccess.org/accessometer

References
AbilityNet (2018), ‘Avatars – Digital Signers’, 5 August. Available online: https://mcmw.abilitynet.org.uk/avatars-digital-signers/ (accessed 20 September 2018).
Ando, A., T. Imai, A. Kobayashi, H. Isono and K. Nakabayashi (2000), ‘Real-time Transcription System for Simultaneous Subtitling of Japanese Broadcast News Programs’, IEEE Transactions on Broadcasting, 46 (3): 189–96.
Arnáiz-Uzquiza, V. (2012), ‘Los parámetros que identifican el subtitulado para sordos. Análisis y clasificación’, MonTI. Monografías de Traducción e Interpretación, 4: 103–32.
Arumí Ribas, M. and P. Romero-Fresco (2008), ‘A Practical Proposal for the Training of Respeakers’, Journal of Specialised Translation, 10: 106–27. Available online: http://www.jostrans.org/issue10/art_arumi.pdf (accessed 20 September 2018).
Bartoll, E. and A. Martínez Tejerina (2010), ‘The Positioning of Subtitles for the Deaf and Hard of Hearing’, in A. Matamala and P. Orero (eds), Listening to Subtitles. Subtitles for the Deaf and Hard of Hearing, 69–86, Bern: Peter Lang.


BBC (2017), ‘BBC Subtitles Guidelines’. Available online: http://bbc.github.io/subtitle-guidelines/#Positioning (accessed 20 September 2018).
Benecke, B. (2012), ‘Audio Description and Audio Subtitling in a Dubbing Country: Case Studies’, in E. Perego (ed.), Emerging Topics in Translation: Audio Description, 99–104, Trieste: EUT Edizioni Università di Trieste. Available online: https://www.openstarts.units.it/bitstream/10077/6363/1/Benecke_EmergingTopics.pdf (accessed 20 September 2018).
Braun, S. (2008), ‘Audiodescription Research: State of the Art and Beyond’, Translation Studies in the New Millennium. An International Journal of Translation and Interpreting, 6: 14–30.
Braun, S. and P. Orero (2010), ‘Audio Description with Audio Subtitling – an Emergent Modality of Audiovisual Localisation’, Perspectives. Studies in Translation Theory and Practice, 18 (3): 173–88.
Brewer, J., E. Carlson, J. Foliot, G. Freed, S. Hayes, S. Pfeiffer and J. Sajka (2015), ‘Media Accessibility User Requirements’. Available online: https://www.w3.org/TR/media-accessibility-reqs/#transcripts (accessed 20 September 2018).
Bundesministerium der Justiz und für Verbraucherschutz [Federal Ministry of Justice and Consumer Protection] (2011), ‘Verordnung zur Schaffung barrierefreier Informationstechnik nach dem Behindertengleichstellungsgesetz (Barrierefreie-Informationstechnik-Verordnung – BITV 2.0)’ [Federal Ordinance on Barrier-Free Information Technology]. Available online: https://www.gesetze-im-internet.de/bitv_2_0/BJNR184300011.html (accessed 20 September 2018).
Cabeza-Cáceres, C. (2013), ‘Audiodescripció i recepció: efecte de la velocitat de narració, l’entonació i l’explicitació en la comprensió fílmica’, PhD diss., Universitat Autònoma de Barcelona, Barcelona.
Carrero Leal, J. M. and M. Souto Rico (2008), ‘Guía de buenas prácticas para el subtitulado para sordos en DVD’, in C. Jiménez Hurtado and A. Rodríguez Domínguez (eds), Accesibilidad a los medios audiovisuales para personas con discapacidad (AMADIS 07), 89–100, Madrid: Real Patronato sobre Discapacidad.
Chmiel, A. and I. Mazur (2011), ‘Overcoming Barriers. The Pioneering Years of Audio Description in Poland’, in A. Şerban, A. Matamala and J.-M. Lavaur (eds), Audiovisual Translation in Close-up. Practical and Theoretical Approaches, 279–98, Bern: Peter Lang.
Chmiel, A. and I. Mazur (2014), Audiodeskrypcja, Poznań: Wydział Anglistyki im. Adama Mickiewicza w Poznaniu. Available online: https://repozytorium.amu.edu.pl/handle/10593/12861 (accessed 20 September 2018).
Chmiel, A. and I. Mazur (2016), ‘Researching Preferences of Audio Description Users – Limitations and Solutions’, Across Languages and Cultures, 17 (2): 271–88.
Chmiel, A., A. Lijewska, A. Szarkowska and Ł. Dutka (2017), ‘Paraphrasing in Respeaking – Comparing Linguistic Competence of Interpreters, Translators and Bilinguals’, Perspectives. Studies in Translation Theory and Practice, 26 (5): 1–20.


Chmiel, A., A. Szarkowska, D. Koržinek, Ł. Dutka, Ł. Brocki, K. Marasek and O. Pilipczuk (2017), ‘Ear-voice Span and Pauses in Intra- and Interlingual Respeaking: An Exploratory Study into Temporal Aspects of the Respeaking Process’, Applied Psycholinguistics, 38 (5): 1–27.
Di Giovanni, E. (2013), ‘Visual and Narrative Priorities of the Blind and Non-blind: Eye Tracking and Audio Description’, Perspectives. Studies in Translation Theory and Practice, 22 (1): 136–53.
Di Giovanni, E. (2014), ‘Audio Introduction Meets Audio Description: An Italian Experiment’, InTRAlinea. Online Translation Journal. Special Issue: Across Screens Across Boundaries. Available online: http://www.intralinea.org/specials/article/2072 (accessed 20 September 2018).
Di Giovanni, E. (2018), ‘Audio Description for Live Performances and Audience Participation’, Journal of Specialised Translation, 29: 189–211. Available online: https://www.jostrans.org/issue29/art_digiovanni.pdf (accessed 20 September 2018).
Díaz-Cintas, J. (2005), ‘Accessibility for All’, Translating Today, 4: 3–5.
Edelberg, E. (2018), ‘Benefits of Audio Description, Even for Those Without Vision Loss’, 4 January. Available online: https://www.3playmedia.com/2017/03/31/benefits-audio-description-even-without-vision-loss/ (accessed 20 September 2018).
Ellis, G. (2016), ‘Impairment and Disability: Challenging Concepts of “Normality”’, in A. Matamala and P. Orero (eds), Researching Audio Description, 35–45, London: Palgrave Macmillan UK.
Eugeni, C. (2009), ‘Respeaking the BBC News: A Strategic Analysis of Respeaking on the BBC’, The Sign Language Translator & Interpreter, 3 (1): 29–68.
European Broadcasting Union (2004), ‘EBU Technical Information I44-2004: EBU Report on Access Services’. Available online: https://tech.ebu.ch/docs/i/i044.pdf (accessed 20 September 2018).
European Parliament and Council (2010), ‘Directive 2010/13/EU of the European Parliament and of the Council of 10 March 2010 on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services’. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:L:2010:095:TOC (accessed 20 September 2018).
Fernández Torné, A. (2016), ‘Machine Translation Evaluation through Post-editing Measures in Audio Description’, InTRAlinea. Online Translation Journal, 18. Available online: http://www.intralinea.org/archive/article/2200 (accessed 20 September 2018).
Fernández-Torné, A. and A. Matamala (2015), ‘Text-to-speech vs. Human Voiced Audio Descriptions: A Reception Study in Films Dubbed into Catalan’, Journal of Specialised Translation, 24: 61–88. Available online: http://www.jostrans.org/issue24/art_fernandez.pdf (accessed 20 September 2018).


Fernández-Torné, A. and A. Matamala (2016), ‘Machine Translation and Audio Description? Comparing Creation, Translation and Post-editing Efforts’, Skase. Journal of Translation and Interpretation, 9 (1): 64–87.
Fryer, L. (2016), An Introduction to Audio Description: A Practical Guide, London/New York: Routledge.
Fryer, L. and J. Freeman (2013), ‘Cinematic Language and the Description of Film: Keeping AD Users in the Frame’, Perspectives. Studies in Translation Theory and Practice, 21 (3): 412–26.
Fryer, L. and J. Freeman (2014), ‘Can You Feel What I’m Saying? The Impact of Verbal Information on Emotion Elicitation and Presence in People with a Visual Impairment’, in A. Felnhofer and O. D. Kothgassner (eds), Proceedings of the International Society for Presence Research 2014, 99–107, Wien: Facultas Verlags- und Buchhandels AG.
Fryer, L. and P. Romero-Fresco (2014), ‘Audio Introductions’, in A. Maszerowska, A. Matamala and P. Orero (eds), Audio Description: New Perspectives Illustrated, 11–28, Amsterdam/Philadelphia: John Benjamins Publishing Company.
Gambier, Y. (2003), ‘Introduction’, The Translator, 9 (2): 171–89.
Ghia, E. (2012), Subtitling Matters: New Perspectives on Subtitling and Foreign Language Learning, Frankfurt am Main: Peter Lang.
Gottlieb, H. (2015), ‘Different Viewers, Different Needs: Personal Subtitles for Danish TV?’, in P. Romero-Fresco (ed.), The Reception of Subtitles for the Deaf and Hard of Hearing in Europe, 17–44, Bern: Peter Lang.
Government of Brazil (2015), ‘Lei Brasileira de Inclusão da Pessoa com Deficiência (Estatuto da Pessoa com Deficiência)’ [Brazilian Law for Inclusion of People with Disabilities (Statute of People with Disabilities)]. Available online: http://www.planalto.gov.br/ccivil_03/_ato2015-2018/2015/lei/l13146.htm (accessed 20 September 2018).
Government of Canada (2005), ‘Accessibility for Ontarians with Disabilities Act’. Available online: https://www.ontario.ca/laws/statute/05a11 (accessed 20 September 2018).
Government of Poland (2011), ‘Ustawa z dnia 25 marca 2011 r. o zmianie ustawy o radiofonii i telewizji oraz niektórych innych ustaw’ [Polish Radio and Television Act]. Available online: http://prawo.sejm.gov.pl/isap.nsf/download.xsp/WDU20110850459/T/D20110459L.pdf (accessed 20 September 2018).
Government of Spain (2010), ‘Ley 7/2010, de 31 de marzo, General de la Comunicación Audiovisual’ [Law 7/2010, 31 March, General Act on Audiovisual Communication]. Available online: http://boe.es/boe/dias/2010/04/01/pdfs/BOE-A-2010-5292.pdf (accessed 20 September 2018).
Government of the UK (2009), ‘Audiovisual Media Service Regulations’. Available online: http://www.legislation.gov.uk/uksi/2009/2979/pdfs/uksi_20092979_en.pdf (accessed 20 September 2018).


Government of the United States (2010), ‘Twenty-First Century Communications and Video Accessibility Act’. Available online: https://www.fcc.gov/general/twenty-first-century-communications-and-video-accessibility-act-0 (accessed 20 September 2018).
Greco, G. M. (2016), ‘On Accessibility as a Human Right, with an Application to Media Accessibility’, in A. Matamala and P. Orero (eds), Researching Audio Description, 11–34, London: Palgrave Macmillan UK.
Greco, G. M. (2018), ‘The Nature of Accessibility Studies’, Journal of Audiovisual Translation, 1 (1): 205–32.
HBB4ALL (2016), ‘Sign Language Interpretation in HBBTV’. Available online: http://pagines.uab.cat/hbb4all/sites/pagines.uab.cat.hbb4all/files/sign_language_interpreting_in_hbbtv.pdf (accessed 20 September 2018).
HBB4ALL (2016a), ‘Audio Description in HBBTV’. Available online: http://www.hbb4all.eu/wp-content/uploads/2016/12/AUDIO-DESCRIPTION-IN-HBBTV.pdf (accessed 20 September 2018).
HBB4ALL (2016b), ‘Audio Subtitles in HBB TV’. Available online: http://pagines.uab.cat/hbb4all/sites/pagines.uab.cat.hbb4all/files/audio_subtitles_in_hbbtv_2017.pdf (accessed 20 September 2018).
HBB4ALL (2016c), ‘Clean Audio for Improved Speech Intelligibility’. Available online: http://pagines.uab.cat/hbb4all/sites/pagines.uab.cat.hbb4all/files/clean_audio_for_improved_speech_intelligibility.pdf (accessed 20 September 2018).
HBB4ALL (2016d), ‘Accessibility Guidelines’. Available online: http://pagines.uab.cat/hbb4all/content/accessibility-guidelines (accessed 20 September 2018).
HBB4ALL (2016e), ‘Interlingual Subtitles and SDH in HBBTV’. Available online: http://pagines.uab.cat/hbb4all/sites/pagines.uab.cat.hbb4all/files/interlingual_subtitles_and_sdh_in_hbbtv.pdf (accessed 20 September 2018).
Holsanova, J. (2015), ‘A Cognitive Approach to Audio Description’, in A. Matamala and P. Orero (eds), Researching Audio Description, 49–73, London: Palgrave Macmillan UK.
Hurtado, C. J. and S. S. Gallego (2013), ‘Multimodality, Translation and Accessibility: A Corpus-based Study of Audio Description’, Perspectives. Studies in Translation Theory and Practice, 21 (4): 577–94.
Ibáñez Moreno, A. and A. Vermeulen (2013), ‘Audio Description as a Tool to Improve Lexical and Phraseological Competence in Foreign Language Learning’, in D. Tsagari and G. Floros (eds), Translation in Language Teaching and Assessment, 45–61, Newcastle: Cambridge Scholars Press.
Igareda, P. (2011), ‘The Audio Description of Emotions and Gestures in Spanish-spoken Films’, in A. Şerban, A. Matamala and J.-M. Lavaur (eds), Audiovisual Translation in Close-up. Practical and Theoretical Approaches, 223–38, Bern: Peter Lang.
Igareda, P. (2012), ‘Lyrics against Images: Music and Audio Description’, MonTI. Monografías de Traducción e Interpretación, 4: 233–54.


International Organization for Standardization (n.d.), ‘ISO/IEC FDIS 20071-23 Information Technology – User Interface Component Accessibility – Part 23: Guidance on the Visual Presentation of Audio Information (including Captions and Subtitles)’. Available online: https://www.iso.org/standard/70722.html (accessed 20 September 2018).
International Organization for Standardization (2015), ‘ISO/IEC TS 20071-21 Information Technology – User Interface Component Accessibility – Part 21: Guidance on Audio Descriptions’. Available online: https://www.iso.org/standard/63061.html (accessed 20 September 2018).
International Organization for Standardization (2017), ‘ISO/IEC TS 20071-25:2017 Information Technology – User Interface Component Accessibility – Part 25: Guidance on the Audio Presentation of Text in Videos, including Captions, Subtitles and Other On-screen Text’. Available online: https://www.iso.org/standard/69060.html (accessed 20 September 2018).
Iturregui-Gallardo, G., A. Serrano, J. L. Méndez-Ulrich, A. Jankowska and O. Soler-Vilageliu (2017), ‘Audio Subtitling: Voicing Strategies and Their Effect on Film Enjoyment’, in Accessibility in Film, Television and Interactive Media, York: University of York. Available online: https://ddd.uab.cat/pub/worpap/2017/181908/Paper_conference_Accessibility_in_film_tv_and_interactive_media_Iturreguigallardo_20092017.pdf (accessed 20 September 2018).
Jankowska, A. (forthcoming), ‘Mainstreaming Accessibility’.
Jankowska, A. (2008), ‘Audiodeskrypcja – Wzniosły cel w tłumaczeniu’, Między Oryginałem a Przekładem, 14: 225–48.
Jankowska, A. (2015), Translating Audio Description Scripts: Translation as a New Strategy of Creating Audio Description, Frankfurt am Main: Peter Lang.
Jankowska, A. and A. Walczak (forthcoming), ‘Filmic Audio Description in Poland – Present State and Future Challenges’, Journal of Specialised Translation.
Jankowska, A. and M. Zabrocka (2016), ‘How Co-Speech Gestures are Rendered in Audio Description: A Case Study’, in A. Matamala and P. Orero (eds), Researching Audio Description, 169–86, London: Palgrave Macmillan UK.
Jankowska, A., M. Milc and L. Fryer (2017), ‘Translating Audio Description Scripts … into English’, Skase. Journal of Translation and Interpretation, 10 (2): 2–16. Available online: http://www.skase.sk/Volumes/JTI13/pdf_doc/01.pdf (accessed 20 September 2018).
Jankowska, A., A. Szarkowska and M. Mentel (2015), ‘Why Big Fish Isn’t a Fat Cat? Adapting Voice-over and Subtitles for Audio Description in Foreign Films’, in Ł. Bogucki and M. Deckert (eds), Accessing Audiovisual Translation, 137–48, Frankfurt am Main: Peter Lang.
Jankowska, A., B. Ziółko, M. Igras-Cybulska and A. Psiuk (2017), ‘Reading Rate in Filmic Audio Description’, Rivista Internazionale di Tecnica della Traduzione, 19: 75–97.
Jekat, S. J. (2014), ‘Evaluation of Live-subtitles’, in B. Garzelli and M. Baldo (eds), Subtitling and Intercultural Communication. European Languages and Beyond, 329–39, Pisa: ETS.


Jenesma, C. J. and S. Ramsey (1996), ‘Closed-Captioned Television Presentation Speed and Vocabulary’, American Annals of the Deaf, 141 (4): 284–92.
Krejtz, I., A. Szarkowska, K. Krejtz, A. Walczak and A. Duchowski (2012), ‘Audio Description as an Aural Guide of Children’s Visual Attention’, Proceedings of the Symposium on Eye Tracking Research and Applications – ETRA ’12, 99–106, New York: ACM Press.
Kruger, J.-L. (2010), ‘Audio Narration: Re-narrativising Film’, Perspectives. Studies in Translation Theory and Practice, 18 (3): 231–49.
Lambourne, A. (2006), ‘Subtitle Respeaking. A New Skill for a New Age’, InTRAlinea. Online Translation Journal. Special Issue: Respeaking. Available online: http://www.intralinea.org/specials/article/Subtitle_respeaking (accessed 20 September 2018).
Lambourne, A. (2017), ‘Ameliorating the Quality Issues in Live Subtitling’, in Accessibility in Film, Television and Interactive Media, York: University of York. Available online: https://www.dropbox.com/sh/08009l5g2g88i97/AAAI3b_yRkrrljNuyiPeRvP7a?dl=0&preview=Lambourne+-+Ameliorating+the+quality+issues+in+live+subtitling.pdf (accessed 20 September 2018).
Lopez, M., G. Kearney and K. Hofstädter (2016), ‘Enhancing Audio Description: Sound Design, Spatialisation and Accessibility in Film and Television’, Proceedings of the Institute of Acoustics, 38. Available online: http://enhancingaudiodescription.com/wp-content/uploads/2016/11/RS2016-paper-Lopez-et-al.pdf (accessed 20 September 2018).
Mälzer-Semlinger, N. (2012), ‘Narration or Description: What Should Audio Description “Look” Like?’, in E. Perego (ed.), Emerging Topics in Translation: Audio Description, 29–36, Trieste: EUT Edizioni Università di Trieste. Available online: https://www.openstarts.units.it/handle/10077/6359 (accessed 20 September 2018).
Masłowska, K. (2014), ‘Audiowstęp jako sposób na uzupełnienie audiodeskrypcji’, Przekładaniec, 28: 39–47.
Maszerowska, A. (2013), ‘Language Without Words: Light and Contrast in Audio Description’, Journal of Specialised Translation, 20: 165–80. Available online: http://www.jostrans.org/issue20/art_maszerowska.pdf (accessed 20 September 2018).
Maszerowska, A., A. Matamala and P. Orero, eds (2014), Audio Description. New Perspectives Illustrated, Amsterdam: John Benjamins.
Matamala, A. (2014), ‘Audio Describing Text on Screen’, in A. Maszerowska, A. Matamala and P. Orero (eds), Audio Description: New Perspectives Illustrated, 103–20, Amsterdam/Philadelphia: John Benjamins Publishing Company.
Matamala, A. and P. Orero (2011), ‘Opening Credit Sequences: Audio Describing Films within Films’, International Journal of Translation, 23 (2): 35–58.
Matamala, A. and C. Ortiz-Boix (2016), ‘Accessibility and Multilingualism: An Exploratory Study on the Machine Translation of Audio Descriptions’, TRANS. Revista de Traductología, 20: 11–24. Available online: http://www.trans.uma.es/Trans_20/Trans_20_A1.pdf (accessed 20 September 2018).

254

The Bloomsbury Companion to Language Industry Studies

Matamala, A. and N. Rami (2009), ‘Análisis comparativo de la audiodescripción española y alemana de “Goodbye, Lenin”‘, Hermeneus. Revista de Traducción e Interpretación, 11: 249–66. Mazur, I. and A. Chmiel (2012), ‘Towards Common European Audio Description Guidelines: Results of the Pear Tree Project’, Perspectives. Studies in Translation Theory and Practice, 20 (1): 5–23. McIntyre, D. and J. Lugea (2015), ‘The Effects of Deaf and Hard-of-hearing Subtitles on the Characterisation Process: A Cognitive Stylistic Study of The Wire’, Perspectives. Studies in Translation Theory and Practice, 23 (1): 62–88. Mliczak, R. (2015), ‘Signing and Subtitling on Polish Television: A Case of (In) accessibility’, in R. Baños-Piñero and J. Díaz-Cintas (eds), Audiovisual Translation in a Global Context: Mapping an Ever-changing Landscape, 203–24, Basingstoke: Palgrave MacMillan. Morettini, A. (2012), ‘Profiling Deaf and Hard-of-Hearing Users of SDH in Italy: A Questionnaire-based Study’, MonTI. Monografías de Traducción e Interpretación, 4: 321–48. Morettini, A. (2014), ‘Legislation on Audiovisual and Media Accessibility in Italy and Beyond: Spotlight on SDH’, InTRAlinea. Online Translation Journal. Special Issue: Across Screens Across Boundaries, Available online: http://www.intralinea.org/ specials/article/legislation_on_audiovisual_and_media_accessibility_in_italy_and_ beyond (accessed 20 September 2018). Muller, T. (2012), ‘Subtitles for Deaf and Hard-of-hearing People on French Television’, in E. Di Giovanni, P. Orero and S. Bruti (eds), Audiovisual Translation across Europe. An Ever-changing Landscape, 257–74, Bern: Peter Lang. Muller, T. (2015), ‘Long Questionnaire in France: The Viewers’ Opinion’, in P. RomeroFresco (ed.), The Reception of Subtitles for the Deaf and Hard of Hearing in Europe, 163–88, Bern: Peter Lang. Neves, J. 
(2007), ‘Of Pride and Prejudice: The Divide between Subtitling and Sign Language Interpreting on Television’, The Sign Language Translator and Interpreter, 1 (2): 251–74. Neves, J. (2008), ‘10 Fallacies about Subtitling for the d/Deaf and the Hard of Hearing’, Journal of Specialised Translation, 10: 128–43. Available online: https://www.jostrans. org/issue10/art_neves.pdf (accessed 20 September 2018). Neves, J. and L. Lorenzo (2007), ‘La subtitulación para s/Sordos, panorama global y prenormativo en el marco ibérico’, TRANS. Revista de Traductología, 11 (2): 95–113. Orero, P. (2007), ‘Pioneering Audio Description: An Interview with Jorge Arandes’, Journal of Specialised Translation, 7: 179–89. Available online: http://www.jostrans. org/issue07/art_arandes.php (accessed 20 September 2018). Państwowy Fundusz Rehabilitacji Osób Niepełnosprawnych [State Fund for the Rehabilitation of the Disabled] (2018), ‘Kino w warunkach przyjaznych sensorycznie − dzieci, młodzież i dorośli z autyzmem i zespołem Aspergera w

Audiovisual Media Accessibility

255

Cinema City’ [Cinema in sensory-friendly conditions - children, adolescents and adults with autism and Asperger’s team in Cinema City]. Available online: https:// www.pfron.org.pl/komunikaty-z-regionu/szczegoly-komunikatu/news/kino-wwarunkach-przyjaznych-sensorycznie-dzieci-mlodziez-i-dorosli-z-autyzmem-izespolem/ (accessed 20 September 2018). Pereira Rodríguez, A. (2005), ‘El subtitulado para sordos: estado de la cuestión en España’, Quaderns. Revista de Traducció, 12: 161–72. Ramos, M. (2015), ‘The Emotional Experience of Films: Does Audio Description Make a Difference?’, The Translator, 21 (1): 68–94. Remael, A. (2012), ‘Audio Description with Audio Subtitling for Dutch Multilingual Films: Manipulating Textual Cohesion on Different Levels’, Meta: Translators’ Journal, 57 (2): 385–407. Remael, A. (2014), ‘Combining Audio Description with Audio Subtitling’, in A. Remael, N. Reviers and G. Vercauteren (eds), Picture Painted in Words ADLAB Audio Description guideline. Available online: http://www.adlabproject.eu/Docs/ adlab book/index.html#combining-ad (accessed 19 September 2018). Remael, A. and B. van der Veer (2006), ‘Real-Time Subtitling in Flanders: Needs and Teaching’, InTRAlinea. Online Translation Journal. Special Issue: Respeaking. Available online: http://www.intralinea.org/specials/article/Real-Time_Subtitling_ in_Flanders_Needs_and_Teaching (accessed 20 September 2018). Remael, A. and G. Vercauteren (2010), ‘The Translation of Recorded Audio Description from English into Dutch’, Perspectives. Studies in Translation Theory and Practice, 18 (3): 155–71. Remael, A., N. Reviers and R. Vandekerckhove (2016), ‘From Translation Studies and Audiovisual Translation to Media Accessibility’, Target, 28 (2): 248–60. Remael, A., N. Reviers and G. Vercauteren (eds) (2014), Pictures Painted in Words. Available online: http://www.adlabproject.eu/Docs/adlab book/index. html#combining-ad (accessed 19 September 2018). Reviers, N. 
(2016), ‘Audio Description Services in Europe: An Update’, Journal of Specialised Translation, 26: 232–47. Available online: https://www.jostrans.org/ issue26/art_reviers.pdf (accessed 20 September 2018). Reviers, N. (2018), ‘Studying the Language of Dutch Audio Description’, Translation and Translanguaging in Multilingual Contexts, 4 (1): 178–202. Reviers, N. and A. Remael (2015), ‘Recreating Multimodal Cohesion in Audio Description: A Case Study of Audio Subtitling in Dutch Multilingual Films’, New Voices in Translation Studies, 13 (1): 50–78. Robert, I. S. and A. Remael (2017), ‘Assessing Quality in Live Interlingual Subtitling: A New Challenge’, Linguistica Antverpiensia, New Series: Themes in Translation Studies, 16: 168–195. Available online: https://lans-tts.uantwerpen.be/index.php/ LANS-TTS/article/view/454/393 (accessed 20 September 2018). Rojo, A., M. Ramos Caro and J. Valenzuela (2014), ‘The Emotional Impact of Translation: A Heart Rate Study’, Journal of Pragmatics, 71: 31–44.

256

The Bloomsbury Companion to Language Industry Studies

Romero-Fresco, P. (2009), ‘More Haste Less Speed: Edited versus Verbatim Respoken Subtitles’, VIAL. Vigo International Journal of Applied Linguistics, 6: 109–33. Available online: http://vialjournal.webs.uvigo.es/pdf/Vial-2009-Article6.pdf (accessed 20 September 2018). Romero-Fresco, P. (2011), Subtitling through Speech Recognition: Respeaking, London: Routledge. Romero-Fresco, P. (2012), ‘Quality in Live Subtitling: The Reception of Respoken Subtitles in the UK’, in A. Remael, P. Orero, and M. Caroll (eds), Audiovisual Translation at the Crossroads: Media for All 3, 111–31, Amsterdam/New York: Rodopi. Romero-Fresco, P. (2015), ‘Viewing Speed in Subtitling’, in P. Romero-Fresco (ed.), The Reception of Subtitles for the Deaf and Hard of Hearing in Europe, 335–43, Bern: Peter Lang. Romero-Fresco, P. (2016), ‘Accessing Communication: The Quality of Live Subtitles in the UK’, Language & Communication, 49: 56–69. Romero-Fresco, P. (2018), ‘In Support of a Wide Notion of Media Accessibility: Access to Content and Access to Creation’, JAT Journal of Audiovisual Translation, 1 (1): 187–204. Romero-Fresco, P. and L. Fryer (2013), ‘Could Audio-Described Films Benefit from Audio Introductions? An Audience Response Study’, Journal of Visual Impairment & Blindness, 107 (4): 287–95. Romero-Fresco, P. and F. Pöchhacker (2017), ‘Quality Assessment in Interlingual Live Subtitling: The NTR Model’. Linguistica Antverpiensia, New Series – Themes in Translation Studies, 16: 149–67. Available online: https://lans-tts.uantwerpen.be/ index.php/LANS-TTS/article/view/438/402 (accessed 20 September 2018). Roxby, P. and S. van Brugen (2011), ‘Autism-friendly Film Gets People Relaxed about Cinema’. Available online: http://www.bbc.com/news/health-14494676 (accessed 20 September 2018). Sadowska, A. (2015), ‘Learning English Vocabulary from Film Audio Description: A Case of Polish Sighted Students’, Roczniki Humanistyczne, 63 (11): 101–23. Salway, A. 
(2007), ‘A Corpus-based Analysis of Audio Description’, in J. Díaz-Cintas, P. Orero and A. Remael (eds), Media for All: Subtitling for the Deaf, Audio Description and Sign Language, 151–74, Amsterdam: Rodopi. Schmeidler, E. and C. Kirchner (2001), ‘Adding Audio Description: Does It Make a Difference?’, Journal of Visual Impairment and Blindness, 95 (4): 197–212. Shirley, B. G. (2013), ‘Improving Television Sound for People with Hearing Impairments’, PhD diss., University of Salford, Manchester. Shirley, B. G. and P. Kendrick (2006), ‘The Clean Audio Project: Digital TV as Assistive Technology’. Available online: http://usir.salford.ac.uk/34322/ (accessed 20 September 2018). Signing Books (1999), ‘Deliverable 3.1’. Available online: http://www.sign-lang.unihamburg.de/signingbooks/deliver/d31/deliv_31_part3-2.html#3.2.2.6 (accessed 20 September 2018).

Audiovisual Media Accessibility

257

Stember, M. (1991), ‘Advancing the Social Sciences through the Interdisciplinary Enterprise’, The Social Science Journal, 28 (1): 1–14. Szarkowska, A. (2010), ‘Accessibility to the Media by Hearing Impaired Audiences in Poland: Problems, Paradoxes, Perspectives’, in J. Díaz-Cintas, A. Matamala and J. Neves (eds), New Insights into Audiovisual Translation and Media Accessibility: Media for All 2, 139–58, Amsterdam/New York: Rodopi. Szarkowska, A. (2011), ‘Text-to-speech Audio Description: Towards Wider Availability of AD’, Journal of Specialised Translation, 15: 142–62. Available online: http://www. jostrans.org/issue15/art_szarkowska.pdf (accessed 20 September 2018). Szarkowska, A. and A. Jankowska (2012), ‘Text-to-speech Audio Description of Voiced-over Films. A Case Study of Audio Described Volver in Polish’, in E. Perego (ed.), Emerging Topics in Translation: Audio Description, 81–98, Trieste: EUT Edizioni Università di Trieste. Available online: https://www.openstarts.units.it/ handle/10077/6362 (accessed 20 September 2018). Szarkowska, A. and A. Jankowska (2015), ‘Audio Describing Foreign Films’, Journal of Specialised Translation, 23: 243–69. Available online: http://www.jostrans.org/ issue23/art_szarkowska.pdf (accessed 20 September 2018). Szarkowska, A. and P. Wasylczyk (2014), ‘Audiodeskrypcja Autorska’, Przekładaniec. A Journal of Translation Studies, 28: 48–62. Szarkowska, A., J. Pietrulewicz and A. Jankowska (2015), ‘Long Questionnaire in Poland’, in P. Romero-Fresco (ed.), The Reception of Subtitles for the Deaf and Hard of Hearing in Europe, 45–74, Bern: Peter Lang. Szarkowska, A., K. Krejtz, Ł. Dutka and O. Pilipczuk (2016), ‘Cognitive Load in Intralingual and Interlingual Respeaking – a Preliminary Study’, Poznan Studies in Contemporary Linguistics, 52 (2): 209–33. Szarkowska, A., I. Krejtz, Z. Klyszejko and A. 
Wieczorek (2011), ‘Verbatim, Standard, or Edited?: Reading Patterns of Different Captioning Styles Among Deaf, Hard of Hearing, and Hearing Viewers’, American Annals of the Deaf, 156 (4): 363–78. Szarkowska, A., I. Krejtz, O. Pilipczuk, Ł. Dutka and J.-L. Kruger (2016), ‘The Effects of Text Editing and Subtitle Presentation Rate on the Comprehension and Reading Patterns of Interlingual and Intralingual Subtitles among Deaf, Hard of Hearing and Hearing Viewers’, Across Languages and Cultures, 17 (2): 183–204. Takagi, T. (2010), ‘Speech Rate Conversion Technology as Convenient and Understandable to All the Listeners’, Broadcast Technology, 42: 11. Available online: http://www.nhk.or.jp/strl/publica/bt/bt42/pdf/ch0042.pdf (accessed 20 September 2018). Tercedor Sánchez, M., P. Burgos Lara, D. Herrador Molina, I. Márquez Linares and L. Márquez Alhambra (2007), ‘Parámetros de análisis en la subtitulación accesible’, in C. Jiménez Hurtado (ed.), Traducción y accesibilidad. Subtitulación para sordos y audiodescripción para ciegos: nuevas modalidades de Traducción Audiovisual, 41–51, Frankfurt am Main: Peter Lang.

258

The Bloomsbury Companion to Language Industry Studies

United Nations (2006), ‘United Nations Convention and Optional Protocol on the Rights of Persons with Disabilities’. Available online: https://doi.org/UN Doc. A/61/611 (2006) (accessed 20 September 2018). University of Washington (n.d.), ‘Creating Accessible Videos | Accessible Technology’, Available online: https://www.washington.edu/accessibility/videos/ (accessed 20 September 2018). Utray Delgado, F., A. Pereira Rodríguez and P. Orero (2009), ‘The Present and Future of Audio Description and Subtitling for the Deaf and Hard of Hearing in Spain’, Meta: Translators’ Journal, 54 (2): 248–63. Vercauteren, G. (2007), ‘Towards a European Guideline for Audio Description’, in J. Díaz-Cintas, P. Orero and A. Remael (eds), Media for All: Subtitling for the Deaf, Audio Description and Sign Language, 139–50, Amsterdam: Rodopi. Vercauteren, G. and P. Orero (2013), ‘Describing Facial Expressions: Much More than Meets the Eye’, Quaderns. Revista de Traducció, 20: 187–99. Vilaro, A. and P. Orero (2013), ‘The Audio Description of Leitmotifs’, International Journal of Humanities and Social Science, 3 (5): 56–64. Waes, L., M. Leijten and A. Remael (2013), ‘Live Subtitling with Speech Recognition. Causes and Consequences of Text Reduction’, Across Languages and Cultures, 14 (1): 15–46. Walczak, A. (2016), ‘Foreign Language Class with Audio Description: A Case Study’, in A. Matamala and P. Orero (eds), Researching Audio Description, 187–204, London: Palgrave Macmillan UK. Walczak, A. (2017a), ‘Audio Description on Smartphones: Making Cinema Accessible for Visually Impaired Audiences’, Universal Access in the Information Society, 1–8. Walczak, A. (2017b), ‘Immersion in Audio Description. The Impact of Style and Vocal Delivery on Users’ Experience’, PhD diss., Universitat Autònoma de Barcelona, Barcelona. Walczak, A. and L. 
Fryer (2017), ‘Creative Description: The Impact of Audio Description Style on Presence in Visually Impaired Audiences’, British Journal of Visual Impairment, 35 (1): 6–17. Wasoku, E. (n.d.), ‘User-friendly Broadcasting Speech. Speech Rate Conversion’. Available online: https://www.nhk.or.jp/strl/onepoint/data/wasoku_e.pdf (accessed 20 September 2018). Wasserman, D., A. Asch, J. Blustein and D. Putnam (2016), ‘Disability: Definitions, Models, Experience’, in E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. Available online: https://plato.stanford.edu/archives/sum2016/entries/disability/ (accessed 20 September 2018). Watanabe, K. (1996), ‘A Study on the Effect of Slower Speech Rate Produced by the Speech Rate Converter’, Nihon Jibiinkoka Gakkai Kaiho, 99 (3): 445–53. Zabrocka, M. (2018), ‘Rhymed and Traditional Audio Description According to the Blind and Partially Sighted Audience: Results of a Pilot Study on Creative Audio

Audiovisual Media Accessibility

259

Description’, Journal of Specialised Translation, 29: 212–36. Available online: http:// www.jostrans.org/issue29/art_zabrocka.pdf (accessed 20 September 2018). Zárate, S. (2010), ‘Bridging the Gap between Deaf Studies and AVT for Deaf children’, in J. Díaz-Cintas, A. Matamala and J. Neves (eds), New Insights into Audiovisual Translation and Media Accessibility. Media for All 2, 159–73, Amsterdam: Rodopi. Zárate, S. (2008), ‘Subtitling for Deaf Children on British Television’, The Sign Language Translator & Interpreter, 2 (1): 15–34.

260

12

Terminology management

Lynne Bowker

1. Introduction

Terminology is the discipline concerned with the collection, processing, description and presentation of terms, which are lexical items that belong to a specialized subject field (e.g. economics, law and medicine). As described by Popiolek (2015: 341), the aim of any terminology management effort is to ensure that the key terms (for a project, organization, product, brand, etc.) are maintained in some accessible system, regularly updated and consistently used by all those involved in the process.

Terminology work may be carried out in a systematic or thematic fashion, where the goal is to identify all the concepts in a particular field or subfield, map out the relations between them, define them and determine the linguistic designation for each concept (in one or more languages). This type of terminology work is typically carried out by a terminologist, and the results are compiled into a glossary or a terminology database. However, terminology work can also be carried out in a more punctual or ad hoc fashion, with the goal of finding an immediate solution to a particular terminological problem. This is the type of terminology work that is often carried out by translators, for example, who are trying to find an equivalent for a term that appears in the text that they are translating. Independent or freelance translators will often record their findings in a personal termbase, and translators working for an organization may contribute the results of their research to a larger corporate terminology bank.

Thus, while terminology work can certainly be carried out in a monolingual context, it is often closely associated with translation. Evidence of this close relationship can be seen in the fact that professional translators’ associations, such as the International Federation of Translators,1 also represent terminologists.
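The concept-oriented entry at the heart of a glossary or termbase (one concept, with a definition and designations in one or more languages) can be sketched as a minimal data structure. This is a hypothetical illustration whose class and field names are invented for the purpose; production termbases follow richer exchange standards such as TBX.

```python
from dataclasses import dataclass, field

@dataclass
class TermEntry:
    """One concept-oriented termbase entry: a single concept with its
    designations recorded per language."""
    concept_id: str
    domain: str        # e.g. 'economics', 'law', 'medicine'
    definition: str
    terms: dict = field(default_factory=dict)  # language code -> designations

    def add_term(self, lang, designation):
        self.terms.setdefault(lang, []).append(designation)

# A freelance translator recording the result of ad hoc term research:
entry = TermEntry(
    concept_id="C001",
    domain="translation technology",
    definition="A database that stores previously translated text segments.",
)
entry.add_term("en", "translation memory")
entry.add_term("fr", "mémoire de traduction")
print(entry.terms["fr"])  # -> ['mémoire de traduction']
```

Storing designations under a concept, rather than listing words alphabetically as a dictionary does, is what allows the same record to serve several languages and to hold synonyms or variants side by side.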

Moreover, while there are relatively few standalone university programmes in terminology,2 terminology training is frequently incorporated into translator education programmes (Delisle 2008: 273). For instance, the 2017 competence framework developed for the European Master’s in Translation indicates that, as part of translation competence, students know how to ‘acquire, develop and use thematic and domain-specific knowledge relevant to translation needs (mastering systems of concepts, methods of reasoning, presentation standards, terminology and phraseology, specialised sources, etc.)’ (EMT 2017: 8). Similarly, as reported in Hu (2018: 124), The Official Guiding Plan that sets out the programme objectives for the nearly 300 master of translation and interpreting (MTI) programmes in China emphasizes that each MTI must be designed ‘to develop students’ translation skills and competences, subject area knowledge and intellectual capacity to understand issues, ideas and values of humanity’. Therefore, in addition to building students’ language and translation competences, Hu (2018: 125) explains that these MTI programmes in China must also enable students to develop their understanding of knowledge in highly specialized domains.

Given that the practices of terminology and translation are highly interwoven in many instances, it can be difficult to tease out precise information such as who does terminology work, how much time is spent specifically on terminology-related tasks or how much revenue is generated from terminology work proper. As an example, consider the recent profile of the Canadian language industry that was developed by collecting data from 628 language service providers (LSPs) across Canada (Malatest 2017: 7). Only 1.2 per cent of the LSPs surveyed indicated that terminology work constituted their main language service offering, and just 0.8 per cent of employees at these LSPs hold the job title terminologist (Malatest 2017: 40).
In terms of revenue, terminology work is reported to have generated only 0.4 per cent of the Canadian language industry’s revenue in 2015 (Malatest 2017: 26). On the surface, it would appear that terminology occupies a rather small space within Canada’s language industry. However, a closer look reveals that terminology work is definitely being carried out, but that it is often hidden or blended with other tasks: it might fall under another heading, it might be carried out by someone with a different job title, and it might not be billed separately but instead be included in the overall price of translation or revision. As an example, we can refer to a job advertisement published in July 2017 for a translator to work for the Ministry of Education and Early Childhood Development (EECD) in Nova Scotia, Canada. Under responsibilities,
the advertisement indicates, ‘The translator is responsible for conducting terminological and documentary research for the purposes of maintaining a linguistic database for reference purposes at the EECD.’3 Such requirements are not rare: in a study of over 300 job ads in the Canadian language industry, Bowker (2004: 963) observed that more than 15 per cent were for language professionals who were explicitly expected to carry out hybrid duties (e.g. translator–terminologist).

Referring once again to the recent profile of the Canadian language industry, we learn that the majority (68.8 per cent) of LSPs reported that translation was their main area of business, but of these, 15.1 per cent specified that they also offered terminology services (Malatest 2017: 15). In addition, 37.9 per cent of respondents indicated that a terminology check is included in their quality-control process (Malatest 2017: 29). Meanwhile, 25.3 per cent specified that they maintain a specific termbase for each client, and 22.0 per cent reported maintaining a general terminology bank (Malatest 2017: 29). It is clear, therefore, that terminology work is taking place, although it may be referred to as quality control or be carried out by translators.

With regard to tools, 13.1 per cent of the LSPs who responded stated that they use terminology management systems, compared with 37.7 per cent who reported using translation memory systems (Malatest 2017: 31). However, once again, it is possible that the actual use of terminology tools is being masked to some degree. Typically, translation memory systems and terminology management systems are tightly integrated and used together; however, since the translation memory system is often viewed as the core component of an integrated tool suite, there is a tendency towards metonymic reference in which the term ‘translation memory’ is often used to refer to the whole integrated collection of tools (Allard 2012: 10).
If the situation in Canada is somewhat unclear, the specifics relating to terminology work in the European Union (EU) are also fuzzy. In 2009, a study was commissioned by the European Commission’s Directorate-General for Translation (DGT) to determine the size of the language industry in the EU (Rinsche and Portera-Zanotti 2009). Terminology work was not broken out independently in this study, which instead considered only the categories of translation, interpretation, localization, dubbing and subtitling. However, since the DGT includes a substantial Terminology Coordination unit and manages the large and well-known term bank Interactive Terminology for Europe (IATE),4 we can be sure that terminology work is carried out in the EU, although here again, it appears to be tightly bound to translation. For instance,
Stefaniak (2017: 110) observes that ‘terminology work in the EU is carried out in tight connection with the texts being translated’, and she goes on to note that ‘almost every EU institution has its own translation service and almost all of these services also do terminology work, alongside translation, organized both on the central level, in terminology coordination units, and on the local level, in language departments’.

If we accept that terminology work is undeniably part of the translation process, we are still faced with the challenge of determining the precise percentage of their working time that translators devote to terminology tasks. A Canadian study on the economic value of terminology estimated that experienced translators invest 20 to 25 per cent of their working time in terminology tasks, while inexperienced translators can invest between 40 and 60 per cent (Champagne 2004: 30). Meanwhile, Fändrich (2005: 239) offers a similar perspective from Europe, reporting that ‘translators spend 20–30% of their working time researching terminology … but some assume it can be as much as 50%’.

From a translator’s point of view, it makes sense to invest at least some proportion of time in managing terminology. The improved consistency that results from properly managing terminology is a key benefit because it increases the quality of the final translation. Moreover, clients often request that a translator use their preferred equivalents or their proprietary terminology. Therefore, being able to systematically manage terminology – both in general and for individual clients – is a necessity. Terminological consistency also makes it easier to implement changes or corrections (e.g. replacing one term with another) because it reduces the time needed to locate all occurrences of a term in a text. Corrections may be requested by the client or may be implemented by the translator as part of the translation or revision process.
If multiple terms or variants have been used to translate a concept, implementing these corrections will be much more labour-intensive and more expensive. This type of expense is typically borne by the translator, rather than being charged to the client, because translations are usually quoted at a fixed fee per word. Finally, the other key benefit of terminology management for translators is that it saves them from repeating this research. Translators must carry out terminological investigations, but these can be time-consuming. The amount of time invested in terminological research will affect a translator’s productivity and, in the end, this will be reflected in the bottom line. Properly documenting previous searches will speed up a translator’s work if the term appears again in a text, thus saving time and money.
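The occurrence-locating work described above is easy to picture in code. The sketch below simply counts competing variants of a term in a text; the function name and sample sentence are invented, and real quality-assurance tools use far more sophisticated matching (tokenization, inflection, fuzzy matching).

```python
import re

def count_term_variants(text, variants):
    """Count occurrences of each competing variant of a term so that
    inconsistent usage can be located and corrected."""
    counts = {}
    for v in variants:
        # Whole-word, case-insensitive match on the literal variant.
        pattern = re.compile(r"\b" + re.escape(v) + r"\b", re.IGNORECASE)
        counts[v] = len(pattern.findall(text))
    return counts

doc = ("Open the dialog box. The dialogue box lists the options. "
       "Close the dialog box when you are finished.")
print(count_term_variants(doc, ["dialog box", "dialogue box"]))
# -> {'dialog box': 2, 'dialogue box': 1}
```

A report like this makes it immediately visible that two variants are competing for the same concept, so a correction can be applied everywhere at once.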

While acknowledging the close ties that exist between terminology and translation, Warburton (2015: 366) also identifies a long list of applications beyond translation that can benefit from properly managed terminology, including controlled authoring, information retrieval, text summarization, content management, indexing, search engine optimization, text mining and natural language processing, among others. As was the case for translators, managing terminology presents a number of benefits for other types of companies or organizations, too. Maintaining an organized and easily retrievable record of term research helps to ensure that the same term will be used consistently whenever it appears in a similar context. As summarized by Allard (2012: 39), maintaining terminological consistency in documents has multiple positive effects for a company or organization:

1. It promotes correct use of terms.
2. It improves the quality of the final text.
3. It reduces time and effort invested in corrections.
4. It strengthens a company’s brand and credibility.
5. It facilitates quality control of both products and processes.
6. It paves the way for clearer communication both within the company and with customers.
7. It contributes to reducing customer service calls and enhancing common understanding during those calls.
8. It reduces the risk of product failure due to incorrect, ambiguous or inconsistent terminology and, in turn, reduces potential liabilities.

If managing terminology has clear benefits for an organization, then not managing it can result in potentially negative effects, such as lower productivity caused by the need to repeat research and deal with increased customer service queries, lower-quality texts owing to inconsistent term use, difficulty using products that have poor-quality documentation, confusing internal and external communications and potential loss of sales owing to a poor company image. While it therefore seems obvious that terminology management is a good idea, it is nevertheless challenging to establish a precise return on investment (ROI) for managing terminology. One reason this is not straightforward is that the terminology work may not be broken out as a separate cost but may be calculated as part of the overall translation or documentation process (Champagne 2004: 9). As a result, terminology work is often hidden, and clients or employers may not be fully aware of either the benefits to be gained by implementing a terminology management strategy (e.g. enhancement of
language quality, accuracy and consistency) or of the potential financial losses that can be incurred without such a strategy (e.g. lost business, wasted time, miscommunications) (Childress 2007). One way of making the value of terminology management clearer to clients or employers is to strengthen the ties between academic research and industry by undertaking applied research projects that address industry needs. It goes without saying that basic research will always be important; however, applied research can help to make the relevance and potential of research clearer to those outside academia and can lead to more productive conversations between the two groups.

2. Research focal points

While precise figures pertaining to terminology activities may be elusive, there can be no doubt that a great deal of research is going on in the field. Regular conferences, such as Terminology and Knowledge Engineering (TKE),5 Terminology and Ontology (TOTh)6 and Terminologie et intelligence artificielle (TIA),7 among others, offer venues for researchers to present new ideas. Meanwhile, the contents of the scientific journal Terminology,8 along with the Terminology and Lexicography Research and Practice9 book series, provide further evidence of dynamic efforts to advance knowledge in this field. In this short chapter, our intent is not to be exhaustive, but rather to provide some examples of active research topics in terminology and to introduce briefly some emerging areas of interest.

In particular, one element that cuts across most recent areas of terminological research is the use of corpus-based techniques. Computer technology has been a driving force behind what can be considered the most significant theoretical and methodological revolution witnessed to date in terminology – the adoption of a corpus-based approach:

Thanks to advances in computing and the availability of large-scale corpora, new theories have emerged: the Communicative Theory, the Socio-Cognitive Theory, the Lexico-Semantic Theory, and the Textual Theory, all of which place greater emphasis on authentic communication and less on conceptual universals. (Warburton 2017)

Of course, we must acknowledge that terminology has always been corpus-based to some degree if we consider a corpus to be simply a body of text to be consulted as a source of linguistic (and extra-linguistic) information.

However, computer technology has greatly expanded the possibilities associated with corpus construction and analysis. For example, the EUR-Lex10 database contains documents that are freely available in twenty-four languages, including the authentic Official Journal of the European Union; EU treaties, directives, regulations and decisions; European Free Trade Agreement documents; EU case law; international agreements; summaries of EU legislation; and more. Terminologists and translators can search the broad collection using the DocHound11 tool, download the documents of interest, and then perform more focused investigations of the terminological content using tools such as Sketch Engine.12 Meanwhile, translation memory (TM) databases can also be viewed as a type of corpus, and terminologists and translators can extract information from such resources using term extractors, for example.

The trend of using corpus-based techniques and tools for terminology can also be seen informally in the discussions and links found on translators’ blogs. For instance, the Wordlo13 blog maintained by the Luxembourg-based translator and terminologist Maria Pia Montoro and the In my own terms14 blog maintained by the US-based translator and terminologist Patricia Brenes are just two examples of blogs aimed at language professionals that provide information about a variety of terminology tools and resources, including corpora and concordancers, among others. In addition, the practice of using electronic corpora and associated corpus analysis tools in university translator training courses, which was first introduced in the 1990s (e.g. Zanettin 1998), continues to grow (e.g. Arhire 2014; Frankenberg-Garcia 2015; Loock 2016; Corpas Pastor and Seghiri 2016).
Likewise, professional associations, such as the association of Mediterranean Editors and Translators (MET),15 also offer their members professional development workshops focused on corpus use, including workshops entitled ‘Corpus-based Decision-making for Editors and Translators’, ‘A Keyword Corpus to Go: Exploring the Potential of WebBootCat’, and ‘Corpus-guided Editing and Translation (Parts 1 and 2)’. Indeed, as pointed out by Gallego-Hernández (2015: 386), who recently surveyed 526 translators in Spain to explore their use of corpora in translation practice, ‘The overall results of our survey suggest that the inclusion of skills related to the use of corpora in the design of translation courses, as well as the increasingly abundant literature on building and exploiting corpora in translation practice, are beginning to bear fruit, as almost 50% of respondents stated that they use corpora “sometimes/often/very often” in their work.’ Therefore, as the next generation of corpus-trained terminologists and translators enter the workforce, it is likely to become increasingly common to find terminological projects that have a strong corpus-based core.

2.1. Automatic term extraction

An area that has attracted significant attention is automatic term extraction. The general goal of term extraction is to identify the core vocabulary of a specialized domain, but when carried out manually, this is a time-consuming and labour-intensive process. As summarized by Heylen and De Hertog (2015), automatic term extraction uses computerized techniques, such as pattern matching and probabilistic analyses, to speed up the process of identifying potential terms in a large corpus of electronic texts. Early efforts used only linguistic information (e.g. part-of-speech patterns) to identify terms, but gradually, increasingly sophisticated statistical methods have been developed to extract terms from large corpora. Today, most state-of-the-art systems are hybrids, combining both types of information. Currently, automatic term extractors focus on automating the preliminary identification of candidate terms, which must be validated by language professionals. However, Melby (2012: 17) predicts that ‘the generation of preliminary termbases from multilingual corpora using sophisticated software tools will become more and more automated’, while Heylen and De Hertog (2015: 203) suggest that eventually, these tools might replace manual term extraction altogether.
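By way of illustration, the hybrid approach described above can be sketched in a few lines of Python. This is a toy, not a production extractor: the stopword filter is a deliberately crude stand-in for part-of-speech patterns, the ‘weirdness’-style frequency ratio stands in for more sophisticated statistical measures, and all data are hypothetical.

```python
import re
from collections import Counter

# Crude linguistic filter: a real system would use part-of-speech patterns
# (e.g. ADJ+NOUN); here we simply reject n-grams that begin or end with a
# function word.
STOPWORDS = {"the", "a", "an", "of", "in", "is", "are", "to", "and", "for", "that"}

def candidate_terms(text, max_len=3):
    """Collect uni- to tri-gram candidates that pass the linguistic filter."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for n in range(1, max_len + 1):
        for i in range(len(tokens) - n + 1):
            gram = tokens[i:i + n]
            if gram[0] in STOPWORDS or gram[-1] in STOPWORDS:
                continue
            counts[" ".join(gram)] += 1
    return counts

def rank_candidates(domain_counts, reference_counts):
    """Statistical filter: score each candidate by its relative frequency in
    the domain corpus versus a general reference corpus, so that
    domain-specific vocabulary floats to the top of the list."""
    dom_total = sum(domain_counts.values()) or 1
    ref_total = sum(reference_counts.values()) or 1
    def score(t):
        return (domain_counts[t] / dom_total) / ((reference_counts.get(t, 0) + 1) / ref_total)
    return sorted(domain_counts, key=score, reverse=True)
```

As the chapter notes, the output of such a system is only a ranked list of candidates; validation by a language professional remains essential.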

2.2. Terminological variation

Another effect of the move towards corpus-based terminology is that the emphasis has shifted away from the conventional onomasiological approach, whereby researchers begin by delimiting the concept and then establish its designation, to a more semasiological approach, where they begin with the designator and then seek to define it. Because the entry point into a corpus is via a lexical item or designator, and because automatic term extractors extract lists of candidate terms, rather than concepts, terminologists are now increasingly incorporating semasiological techniques into their working methods. One result of this fundamental shift is that the notion of terminological variation has gained prominence on the research agenda. Historically, a principal goal of terminology was to achieve univocity – a situation where there was a one-to-one correspondence between a concept and a term. Synonymy and polysemy were considered to be impediments to specialized communication. However, as pointed out by Warburton (2015: 376), companies sometimes intentionally introduce and use their own terms for a concept to support their unique brand identity, and acronyms may be used to achieve linguistic economy. Rather than seeking to eliminate variation, Warburton (2015: 377) advocates for sensible regulation. Increasingly, variation and synonymy are being recognized as necessary, inevitable and functional aspects of terminology, and knowledge about variants is becoming important for a wide range of applications, such as controlled authoring, computer-aided translation and other forms of natural language processing. The terminology research community is responding to this call to action, and there has been a recent surge of investigations into various aspects of terminological variation, such as determining whether concepts have a terminological saturation point (Freixa and Fernández-Silva 2017), the role of metaphor in terminological variation (Rossi 2017) and techniques for automating the discovery of terminological variants (Daille 2017).

2.3. Knowledge patterns and knowledge-rich contexts

An additional line of research that has been prompted by the advent of large electronic corpora is research into knowledge patterns (e.g. Condamines et al. 2013; Marshman 2014; Schumann 2014). The fact that corpora provide access to a vast wealth of examples is both an advantage and a disadvantage. On the positive side, more information about a term and how it is used can lead to a deeper understanding and more effective communication. On the downside, language professionals may find it overwhelming to have to sift through vast volumes of text in the hopes of identifying relevant information. Not all contexts provide equally useful information, so researchers have taken up the challenge of trying to find ways to automatically identify knowledge-rich contexts. One way of doing this has been to try to identify lexical patterns that represent underlying knowledge patterns. For instance, one type of context that might be very helpful to a language professional is one that defines or explains a specialized concept. Definitions often take a form such as ‘An X is a Y that has characteristics A, B and C’. Therefore, instead of searching only for a context that contains a given term, it could be more helpful to search for a context that contains the term along with a knowledge pattern (‘is a’, ‘kind of’, ‘sort of’, ‘type of’, etc.) that signals a defining context. Indeed, as researchers began to investigate knowledge patterns, it became clear that this is an exceedingly complex subject. There are different patterns that signal different types of conceptual relations (e.g. generic-specific relations, part-whole relations, causal relations and functional relations), and there are different patterns present in different subject fields, different genres of text and different languages. In addition, there are challenges to contend with, such as interrupted or non-contiguous patterns, hedges and noisy patterns (e.g. ‘A kitten is an animal’ vs. ‘A kitten is a good birthday present’). Moreover, while contexts that contain definitions are one type of useful context, they are not the only type. Researchers have invested considerable energy into creating inventories of knowledge patterns and encoding this information into systems that will allow users to create and consult corpora with contexts that are knowledge-rich, rather than laboriously sifting through large quantities of less relevant data (e.g. see the collection of papers presented in Auger and Barrière 2010). While early efforts focused on finding knowledge-rich contexts specifically for terminologists (e.g. Meyer 2001), more recent efforts address the type of knowledge-rich contexts required by translators and the tools needed to identify them automatically. Indeed, as corpora become larger and larger, the need to find ways of filtering and harnessing their content becomes increasingly important. As described by Heylen et al. (2014: 175), discussions on the opportunities that big data offer for translation usually focus on using corpus data to improve statistical machine translation systems or term extraction systems. However, professional translators could exploit large corpora effectively if they had support to find translation examples that are informative and relevant to their current translation assignment. Heylen et al. (2014: 170) suggest that document metadata can be used to rank examples in order to allow users to focus their attention on the information likely to be the most pertinent. More recently, Picton, Planas and Josselin-Leray (2018) present a prototype interface for a translation environment that includes a window for displaying knowledge-rich contexts.
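The pattern-matching idea at the heart of this line of research can be illustrated with a minimal and deliberately naive filter. The pattern inventory below is a tiny hypothetical sample covering only the generic-specific relation; and, as the kitten example shows, such surface patterns over-generate, which is precisely why research into hedges and noisy patterns is needed.

```python
import re

# Tiny illustrative inventory of patterns signalling a generic-specific
# ('is a') relation; real inventories cover many relation types and vary
# by language, genre and subject field.
KNOWLEDGE_PATTERNS = [r"\bis an?\b", r"\bkinds? of\b", r"\bsorts? of\b", r"\btypes? of\b"]

def knowledge_rich_contexts(term, sentences):
    """Keep only sentences containing the term together with a knowledge
    pattern, instead of every sentence that merely mentions the term."""
    pat = re.compile("|".join(KNOWLEDGE_PATTERNS), re.IGNORECASE)
    term_pat = re.compile(r"\b" + re.escape(term) + r"\b", re.IGNORECASE)
    return [s for s in sentences if term_pat.search(s) and pat.search(s)]
```

For example, filtering [‘A concordancer is a type of corpus-analysis tool.’, ‘We installed the concordancer yesterday.’] for the term ‘concordancer’ keeps only the defining sentence; filtering for ‘kitten’ would, wrongly, keep ‘A kitten is a good birthday present’ – the noisy-pattern problem discussed above.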

3. Informing research through the industry

On the whole, terminology is an applied discipline, and many terminology researchers seek to carry out research that addresses real-world problems. Therefore, it makes sense to begin by gaining a deeper understanding of key terminology management challenges in industry as this will serve to better inform applied terminology research. Broadly speaking, two issues that have emerged as being central in recent years are the production of terminology resources on the one hand and the use of these products on the other. Although these issues may appear simple on the surface, they are actually complex and currently in a state of flux within industry. Let us look at some specific issues relating to the production and use of terminology resources by way of example, and consider how these might inform research in the field.

3.1. Expanding the community of practice

With regard to production, it would be easy to assume that terminology resources are produced by terminologists. However, as we saw above, the reality is much more multifaceted. While language experts – who may include not only terminologists, but also translators, localization specialists, technical writers or linguists – frequently play a leading role, terminology work requires that linguistic skills be combined with subject-matter expertise. Although this combination of linguistic and subject-matter knowledge is crucial, language experts sometimes find it challenging to gain access to subject-matter experts. Recently, crowdsourcing has begun to emerge as a model for developing some types of terminology products. Crowdsourcing is typically understood to refer to the practice of obtaining needed services, ideas or content by soliciting contributions from a large group of people – and especially from the online community – rather than from traditional employees or suppliers. Adopting a crowdsourcing model that uses a ‘closed crowd’ (i.e. a ‘crowd’ whose membership is limited to vetted members) can facilitate access to subject-matter experts and result in higher-quality terminology products. Karsch (2015: 298) suggests that a selection of terminological activities can be usefully outsourced to a crowd, such as recommending or validating candidate terms, or voting on a preferred term from among a group of synonyms. One such example of crowdsourcing in a terminology context is the Microsoft Terminology Community Forum (MTCF), which is described by DePalma and Kelly (2011: 389–90) as an initiative that benefits end users and helps to improve the company’s products. The MTCF enables community members to develop, discuss and approve new terminology for Microsoft products. It also allows product groups and individual localization teams to obtain user feedback on legacy and new terminology.
The forum organizers can open the MTCF portal to the general public or limit it to a subject-matter expert community on an invitation-only basis. Participants can provide feedback on a range of terminology-related issues, such as outdated terms, unclear definitions or preferred foreign-language equivalents. By leveraging the expertise of subject-matter experts and customers and giving them a voice, product uptake can be increased and user satisfaction improved.


While recognizing the potential of crowdsourcing in some situations, Karsch (2015: 299) nonetheless cautions that some terminology tasks might not be best suited to crowdsourcing, including collecting information, drafting term records and releasing entries. Overall, crowdsourcing is a relatively new approach in terminology, which means that there is both scope and need for more research and development in this area. There are many examples of questions that require more investigation. What role should a language professional play in a crowdsourced terminology project (e.g. observer, moderator or manager)? What criteria should be used to select crowd members? Is there a place for an open crowd or a hybrid crowd in terminology research? What is the optimal size for a crowd? Who is ultimately responsible for making a decision using crowd-based evidence? Under what circumstances could a language professional overrule the opinion of a crowd? What types of tools are needed to manage a crowdsourced project? A deeper understanding of the players involved, as well as of their roles and responsibilities in producing terminology resources, will help to inform a research agenda that could include, for example, the design and development of tools and methods to facilitate and better support the collaborative production of useful terminology products.
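One of the crowd tasks mentioned above – voting on a preferred term from among a group of synonyms within a closed, vetted crowd – can be sketched as follows. The member names, vetting rule and tie-breaking policy are all hypothetical choices for illustration, not features of any existing platform.

```python
from collections import Counter

def crowd_preferred_term(votes, vetted_members):
    """Tally synonym votes from a closed crowd: ballots from members who
    have not been vetted are discarded, and the term with the most valid
    votes wins (ties are resolved alphabetically for determinism)."""
    valid = Counter(term for member, term in votes if member in vetted_members)
    if not valid:
        return None
    return sorted(valid.items(), key=lambda kv: (-kv[1], kv[0]))[0][0]
```

Even a sketch this small surfaces the open questions listed above: who decides the vetting criteria, and under what circumstances a language professional may overrule the crowd’s result.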

3.2. Expanding the notion of ‘useful’ content in term records

Another way that terminology research can be informed by practice is by considering what type of information is being stored in terminology resources. When working language professionals build termbases, what type of information do they include and what do they leave out? Terminology textbooks are filled with descriptions of what an ideal term record should look like and how information should be selected for inclusion in the record. These textbooks are then used to train the next generation of terminologists and translators. However, evidence is emerging which suggests that some of the well-intentioned teachings acquired in the classroom are being set aside once language professionals enter the workplace and are confronted with situations that are broader than, different from or simply messier and more complex than neat and tidy textbook examples. In particular, new technologies have contributed significantly to changes in working methods, but awareness of these changes has not necessarily filtered back to the classroom. As summarized by Warburton (2015: 363), the literature on terminology has a largely academic focus that alienates it somewhat from industrial applications, with the result that terminology continues to be perceived as an academic pursuit:

Having developed over the past half-century under the influence of the traditional theory of terminology, recognized methods for managing terminology are largely normative, prescriptive and onomasiological. Granted, when terminology management was carried out for language planning or academic purposes, such methods were appropriate. However, changes in information technologies, coupled with the expansion of global, linguistically-diversified markets, are creating conditions where commercial enterprises are needing to manage terminology. […] But are the conventional methods and theoretical principles for managing terminology suitable for the intense production-oriented climate in companies? (Warburton 2015: 361)

Warburton (2015: 361) goes on to claim that the motivation for managing terminology in a commercial environment differs significantly from the one that applies in academic environments, and she argues that more investigations are needed to understand terminology management from a commercial perspective. One key issue that is subject to ongoing debate is what type of information should be recorded in a termbase. Traditionally, the principal item recorded on a term record is the term itself, which is understood to be a lexical item that designates a concept in a specialized subject field. However, there is evidence that, in practice, language professionals are not limiting themselves to recording items that correspond to this conventional notion of termhood. Rather, other criteria are being used as well, such as frequency of occurrence, translation difficulty, visibility (i.e. how prominently a linguistic expression appears in company materials) and embeddedness (i.e. a lexical item’s productivity in forming longer multiword units). Moreover, in cases where the termbase is integrated with other tools, such as a word processor or TM system, language professionals may even include items simply because it will be easier to have them inserted into the text automatically, rather than having to type out or edit this information. Similarly, language professionals are increasingly turning to automatic term extraction systems to identify content for inclusion in termbases (Heylen and De Hertog 2015). For instance, when translators receive new texts to translate, they can begin by running these texts through an automatic term extraction system, which will attempt to identify term candidates that can be included in a termbase. However, at present, term extractors focus on identifying terms. If, as we have seen above, language professionals find it useful to include other types of lexical units in a termbase, then it would be helpful if researchers and developers could adapt term extractors or build complementary tools that could be used to identify these other types of information that language professionals find useful. To do this, however, researchers and developers would first need to gain a better understanding of what these language professionals are actually storing in their termbases, and why they consider this information to be useful.
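To make criteria such as these concrete, the following small sketch profiles a candidate on two of the practice-driven criteria just mentioned: frequency of occurrence and embeddedness. The adjacency heuristic used to approximate embeddedness is an assumption made for illustration, not an established metric.

```python
import re

# Function words used to decide whether a neighbouring token is 'content';
# a deliberately minimal list for illustration.
STOP = {"the", "a", "an", "of", "in", "is", "are", "and", "to", "for"}

def inclusion_profile(candidate, text):
    """Profile a termbase candidate on two practice-driven criteria:
    frequency (raw occurrences) and 'embeddedness' (how often the match
    sits inside a longer multiword unit, approximated here by checking
    for an adjacent content word)."""
    tokens = re.findall(r"[a-z]+", text.lower())
    cand = candidate.lower().split()
    n = len(cand)
    freq = embedded = 0
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == cand:
            freq += 1
            before = tokens[i - 1] if i > 0 else None
            after = tokens[i + n] if i + n < len(tokens) else None
            if (before and before not in STOP) or (after and after not in STOP):
                embedded += 1
    return {"frequency": freq, "embeddedness": embedded}
```

A tool along these lines could supplement a conventional term extractor by flagging, say, highly embedded or highly visible items that fall outside the traditional notion of termhood.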

3.3. Considering the User eXperience

If researchers currently have a limited understanding of the way that terminology resources are produced in industrial settings, it is also fair to say that the way that terminology products are used in industry is not yet fully understood, either. User studies are gaining importance in industries around the globe. The concept of User eXperience, which has its origins in the field of human–computer interaction (HCI), has spread to other industries and is helping to shine a spotlight on the importance of ensuring that a product, system or service meets its intended users’ expectations with regard to functionality, ease of use and overall experience. Serious user studies of lexicographical resources first began to appear in the 1980s, and they have become increasingly common in recent years (e.g. Lew 2011; Nesi 2013) as new tools and techniques for conducting user studies have been developed (e.g. ethnographic studies, process-oriented studies). However, user studies in terminology are relatively rare and those that exist (e.g. Duran-Muñoz 2012) have uncovered various types of dissatisfaction with existing terminology resources (e.g. limited search options, lack of ability to limit searches to certain documents and lack of a feedback feature). Therefore, a deeper understanding of questions such as the following would be welcome: Who uses terminology products (and who does not)? When, why, how and what for? Do these products meet their needs, and if not, what are the gaps or shortcomings? Research questions should address both the content of the terminology resources and the tools used to access and present the content. A related notion that is not yet well understood is that of workflow. At what point(s) in the overall document production process is terminology work carried out? How well are the tools used to support terminology work integrated with other tools?

With regard to both production and use of terminographic resources, it is important to recognize that terminology work and products do not stand in isolation. The development, management and use of these resources take place in an environment where they are intrinsically linked to other processes (e.g. product development, quality assurance, marketing, translation and localization), as well as to other tools and resources (e.g. controlled authoring, QA checking, TM and machine translation systems). A better understanding of where terminology work fits in relation to these other processes and tools will help to guide researchers and developers in deciding whether targeted tools and products are more useful, or whether repurposing, multipurposing or ‘one-stop-shopping’ solutions are feasible.

Finally, it is worth emphasizing that a deeper awareness of the terminology-related challenges faced by companies, the tools and working methods used in a commercial environment, and the contents of the terminology resources that are created would provide valuable knowledge that could be used to inform research into terminology and translation pedagogy, for example, in order to better prepare the next generation of language professionals for the realities of the twenty-first-century language industry (e.g. see the contents of Alcina 2011). In summary, Warburton (2015: 389) notes that ‘terminology as a discipline and as a vocation is slowly but surely making its entry into the world of business’. Though by no means an exhaustive list, the examples discussed in this section paint a picture of an evolving situation with regard to both the production and the use of terminology resources in commercial environments. Empirical research into terminology management that is based on industry needs and challenges can feed back into the development of a theoretical and methodological framework to meet these challenges. In this way, it becomes clear that industry can inform research in valuable ways.

4. Informing the industry through research

While the previous section outlined some ideas for how terminology research can be informed by industrial practice, this section will focus on how the results of research can in turn feed back into the language industry. For instance, while companies may have a reasonable idea of their own processes, they may not be aware of other ways of approaching terminology management. Moreover, it may not be feasible for individual companies to conduct broader or comparative investigations of tools or methods, since they need to focus on their primary business. However, these types of large-scale or comparative studies can be tackled by researchers and could potentially lead to the identification of best practices or could inform the development of language-related policies. For instance, Allard (2012) conducted an extensive survey-based analysis of how terminology tools are implemented in the translation industry. She identified best practices and proposed a set of guidelines on how best to optimize the design and use of integrated termbases in a translation environment.


Crowdsourcing was mentioned above as a type of industry practice that can inform research, but this research can also feed back usefully into practice. For instance, one researcher who is actively investigating crowdsourcing in the context of a doctoral thesis is Elizabeth Saint of the University of Ottawa in Canada. As part of her research project En bons termes,16 Saint (2018) uses crowdsourcing to identify preferred French-language equivalents for English terms and provides the results to professional terminologists working for Radio-Canada, the Office québécois de la langue française (OQLF) and the Canadian federal government’s Translation Bureau for their consideration. Meanwhile, Warburton (2014a) examined the significant gaps that exist between the content of termbases and corresponding corpora in four information technology companies. She determined that such gaps reduce the effectiveness of the termbases in commercial environments and that more attention should be paid to corpus-based techniques. Accordingly, she formulated a number of guiding principles for selecting terms that will increase the value of commercial termbases. Terminometrics is another active research area that has the potential to result in guidelines that will help language professionals when they are faced with the choice of selecting a preferred term from among a group of competing terms. Terminometric research first emerged in response to the need to measure term implantation in order to assess the results of language planning initiatives and to be able to improve terminological suggestions and guidelines according to factors that appear to affect term implantation (Quirion 2003). The use of terminometrics in this type of formal language planning setting is now established. However, language planning institutions (e.g. the OQLF) no longer have the same degree of influence over the production and dissemination of linguistic information as they did in the past.
For example, widespread internet use, the digitization of information and the technical possibilities offered by the internet that reduce the need for major publishing and distribution mechanisms have all contributed to changing the way that content is produced and disseminated. Individuals or groups now have greater potential to influence language use, and the previously accepted role of language planning institutions as the principal determinant of language use is being called into question. Accordingly, a new avenue of terminometric research is beginning to emerge, which shifts the focus from the institution to the community, aligning with the global shift in content production and distribution. For instance, Bilgen (2016) carried out a terminometric analysis on the terminology used in the discussions on online forums in the open-source software community. The study provides insight into online collaboration in the context of localization and points out correlations between term formation patterns and term implantation. Such observations can mark a starting point for terminological decision-making that is informed by user behaviour and may thus improve the reception of localized content by adapting to users’ terminological expectations.

It was mentioned above that while corpus-based approaches offer many advantages, the volume of texts available has grown so large that language professionals risk being overwhelmed with information. Research into tools and techniques that can be used to harness or filter information for terminological purposes stands to contribute to the development of increasingly useful terminographic tools and resources. For instance, Picton, Planas and Josselin-Leray (2018) have developed a prototype interface that would highlight knowledge-rich contexts, while Heylen et al. (2014) have integrated document metadata features in a tool that will make it easier to ensure that terminology is drawn from the most relevant sources available. The addition of such features to terminology tools will make it easier for practising language professionals to work efficiently and effectively. Meanwhile, various types of research results have contributed to a richer and broader spectrum of information being made available in a variety of terminology resources. For instance, the JuriDico legal termbase uses the principles of frame semantics in order to present users with groups of related legal terms and also places a strong emphasis on verbs, which have traditionally been under-represented in terminology resources (Pimentel 2015).
Similarly, Marshman (2014) introduces a resource called CREATerminal that presents terminological relations in the form of knowledge-rich contexts – information that has traditionally been employed by language professionals in the terminological analysis stage but less often displayed for the benefit of the end users of terminology resources. In addition to being directly useful in the language industries (e.g. translation, localization and technical writing), terminology-oriented research is also attracting attention for use in other types of applications. For instance, Bowker and Delsey (2016) describe how bi- or multilingual terminology resources are being used in information science to improve cross-language information retrieval tools. Similarly, automatic term extraction techniques are being incorporated into automatic indexing tools. These types of advances could eventually feed back into the language industries. To give a final example, enhanced indexing and search techniques might lead to improvements in language industry tools such as automatic term extractors, TM systems or machine translation systems.
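The measurement underlying the terminometric studies discussed in this section can be sketched as a simple implantation ratio: for a set of competing terms denoting the same concept, each term’s share of the set’s total occurrences in a corpus. This is a toy illustration of the general idea, with hypothetical data; it does not reproduce any published coefficient.

```python
import re
from collections import Counter

def implantation(corpus, competing_terms):
    """For competing terms denoting one concept, return each term's share
    of the group's total occurrences in the corpus (0.0 for all terms if
    none of them occurs)."""
    text = corpus.lower()
    counts = Counter({t: len(re.findall(r"\b" + re.escape(t.lower()) + r"\b", text))
                      for t in competing_terms})
    total = sum(counts.values())
    return {t: counts[t] / total if total else 0.0 for t in competing_terms}
```

Run over the variant pair (‘email’, ‘e-mail’) in a text sample, for instance, the function reports which variant is winning – exactly the kind of usage evidence that community-focused terminometric work draws on when informing terminological decision-making.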

5. Concluding remarks

In summary, while terminology was previously considered to be largely an academic domain, it has recently begun to occupy a more prominent place in commercial environments, too. Nonetheless, some aspects of terminology work still remain hidden or hard to access, which means that members of these two communities could benefit from increased dialogue and a better understanding of each other’s activities. While this short chapter cannot provide a comprehensive description of all types of terminological activities, we hope that it has demonstrated some of the ways in which terminology research can be informed by industrial practice, and how these research results can in turn feed back into the language industry. In this way, we can work towards the goal of developing a virtuous cycle that benefits both researchers and practitioners. Additional topics that we did not have the space to cover here, but which interested readers might like to explore, include designing terminology resources (e.g. Warburton 2017), terminology exchange (e.g. Melby 2015), terminology and ontologies (e.g. Duran-Muñoz and Bautista-Zambrana 2013) and terminology and controlled authoring (e.g. Warburton 2014b), among others. Clearly, terminology management is a dynamic and evolving area of the language industries, with exciting possibilities for even more collaborative work to be done by researchers and practitioners.

Notes

1 The website of the International Federation of Translators states: ‘FIT (Fédération internationale des traducteurs / International Federation of Translators) is an international grouping of associations of translators, interpreters and terminologists’ (http://www.fit-ift.org/ accessed 20 July 2017).
2 Some notable exceptions include the online master’s programme in terminology offered by the Universitat Pompeu Fabra in Spain (https://www.upf.edu/en/web/terminologiaonline accessed 26 May 2018), and the master’s programme in terminology and the management of specialized information at the Universidade Nova de Lisboa in Portugal (http://www.unl.pt/guia/2014/fcsh/UNLGI_getCurso?curso=834 accessed 26 May 2018).
3 Example of a job advertisement for a translator expected to carry out terminology-related duties (https://jobs.novascotia.ca/job/HALIFAX-Agentd'administration-du-programme-1-2-%28Traducteur%29-ProgramAdmin-Officer-1-2-%28Translator%29-NS-B3J-2S9/354596217/ accessed 20 July 2017).
4 The website of the Terminology Coordination unit (http://termcoord.eu/ accessed 19 August 2017).
5 Information on the series of Terminology and Knowledge Engineering (TKE) conferences can be found here: https://sf.cbs.dk/gtw/conferences_terminology_and_knowledge_engineering (accessed 21 July 2017).
6 Information on the series of Terminology and Ontology: Theory and applications (TOTh) conferences can be found here: http://toth.condillac.org/ (accessed 6 July 2018).
7 Information on the series of Terminologie et intelligence artificielle (TIA) conferences can be found here: http://www.ltt.auf.org/rubrique.php3?id_rubrique=151 (accessed 21 July 2017).
8 Information on the international scientific journal Terminology (John Benjamins Publishing Company) can be found here: https://benjamins.com/#catalog/journals/term/main (accessed 21 July 2017).
9 Information on the Terminology and Lexicography Research and Practice book series (John Benjamins Publishing Company) can be found here: https://benjamins.com/#catalog/books/tlrp/main (accessed 21 July 2017).
10 Information on EUR-Lex can be found here: http://eur-lex.europa.eu/content/welcome/about.html (accessed 26 May 2018).
11 Information on DocHound can be found here: http://termcoord.eu/dochound/ (accessed 26 May 2018).
12 Information on Sketch Engine can be found here: https://www.sketchengine.eu/ (accessed 26 May 2018). In addition to allowing users to search existing corpora, Sketch Engine can also be used to help create corpora (e.g. from the Web or from translation memory databases).
13 The Wordlo blog can be found here: http://recremisi.blogspot.com/p/online-terminology-tools.html (accessed 6 July 2018).
14 The In my own terms blog can be found here: http://inmyownterms.com/readings-tools-and-useful-links-for-corpus-analysis/ (accessed 6 July 2018).
15 Information on the corpus-based workshops organized by the association of Mediterranean Editors and Translators (MET) can be found here: https://www.metmeetings.org/en/all-workshops:54 (accessed 6 July 2018).
16 More information on the En bons termes project can be found at: http://enbonstermes.com/ (accessed 6 July 2018).

The Bloomsbury Companion to Language Industry Studies



13

Translation technology – past, present and future

Jaap van der Meer

1. Introduction

Translation technologies are getting better and better at helping people lower language barriers, a development that has not gone unnoticed. These technologies have drawn the attention of innovators from both inside and outside the industry. Almost every week we hear of new solutions that promise to improve translations. And this trend does not appear likely to fade any time soon. On the contrary, the development of translation technology is on the rise. So how did we get to this point? And where will we go from here? This chapter explores these questions and provides an overview of current translation technology.

The idea of having computers translate human language is as old as the information technology (IT) sector itself. But it was not until January 1954, when IBM announced it had developed a computer that translated a Russian text into English, that this new subsector of IT was born. The label ‘translation technology’, however, was not used until more recent years. In the early days, it was more common to speak of human language technology (HLT), which comprised natural language processing (NLP), computational linguistics (CL) and speech technology, rather than to see translation technology as a sector in its own right.

The hype about machine translation (MT) after the first public demonstration of an MT engine in New York in January 1954 triggered research programmes around the world that lasted from the 1960s well into the 1980s. However, even though many companies tried to find ways to work with MT, little from that time has survived except SYSTRAN, the only company that has navigated all the ups and downs of MT in the turbulent decades since its introduction (see A. Way, this volume, for an overview of MT).


The first signs of more organization in translation services, and of pressure for technological development, became evident in the 1970s, with the rising need among larger Western corporations for the translation of user instructions, patents and legal documents. Translation services had traditionally been provided by freelance professionals, but in the 1980s small agencies were becoming established and started to advertise their services in the Yellow Pages of telephone books. They translated documents into and out of a handful of European languages and delivered the translations on paper. The technology used in these first translation ‘shops’ consisted of electric typewriters (later replaced by the IBM Magnetic Card Selectric typewriter) and telex machines (later replaced by fax machines).

The age of personal computing began with the launch of the IBM Personal Computer in 1981. With it came the first excitement of real productivity gains for translation practitioners, starting with word processors to help them produce, edit and store their texts electronically. The 1980s also brought the first big promises of HLT in the form of automatic spell and grammar checkers and computer-based glossaries.

The rapid growth of personal computing also provided a tremendous boost to the translation industry because of an increase in demand. The IT sector shifted its focus towards developing software for a wide range of applications, and that software required ‘localization’. As a result, new service companies specialized in localization were formed in Europe and the United States. Software was ‘king’, so it was soon time to start developing software for the translation business itself. INK in Amsterdam, one of the first specialized localization companies, released INK Text Tools (cf. Stoll 1988) and, in 1989, TermTracer.
An internal team at IBM developed IBM TM/2 to translate and localize the company’s software and content in various language combinations.1 In 1994, TRADOS in Stuttgart introduced the Translator’s Workbench. These first dedicated translation tools were designed to take the repetitiveness out of the translator’s job – specialized terminology and identical sentences only needed to be translated once.

In the world of academia, MT had largely gone undercover, but there was a revival in the 1990s. Inspired by the tremendous growth opportunities in translation and localization services, several new MT outfits with names like Tovna, Logos, Weidner and ALPS were presented at industry conferences. However, they were relatively short-lived. The quality of their systems was disappointing, and the public (i.e. the language service providers, or LSPs) preferred translation technology in which translators remained in control of the output, an approach that became known as computer-assisted translation (CAT).
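The core idea behind these early TM tools – never translate the same sentence twice – can be sketched as a lookup over stored source–target pairs. The snippet below is a minimal illustration (class name and fuzzy-match threshold are our own, not any vendor’s actual implementation), using a simple string-similarity score to approximate the ‘fuzzy matching’ that commercial tools perform:

```python
from difflib import SequenceMatcher

class TranslationMemory:
    """Minimal translation-memory sketch: stores source/target pairs
    and returns exact or fuzzy matches for new source segments."""

    def __init__(self, fuzzy_threshold=0.75):
        self.entries = {}                  # source segment -> target segment
        self.fuzzy_threshold = fuzzy_threshold

    def add(self, source, target):
        self.entries[source] = target

    def lookup(self, segment):
        # Exact match: this segment has been translated before.
        if segment in self.entries:
            return ("exact", 1.0, self.entries[segment])
        # Fuzzy match: find the most similar stored segment.
        best_score, best_source = 0.0, None
        for source in self.entries:
            score = SequenceMatcher(None, segment, source).ratio()
            if score > best_score:
                best_score, best_source = score, source
        if best_source and best_score >= self.fuzzy_threshold:
            return ("fuzzy", round(best_score, 2), self.entries[best_source])
        return ("none", 0.0, None)

tm = TranslationMemory()
tm.add("Press the red button.", "Appuyez sur le bouton rouge.")
print(tm.lookup("Press the red button."))    # exact match
print(tm.lookup("Press the green button."))  # fuzzy match above threshold
```

A fuzzy hit gives the translator a near-identical previous translation to adapt rather than a sentence to retranslate from scratch, which is where the productivity gain comes from.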


The 1990s were a decade of hype around language and translation technology. Besides the rise and fall (again) of several MT technologies, we witnessed the sudden success of speech technology, championed by a Flanders-based company called Lernout & Hauspie. Jo Lernout and Pol Hauspie sparked everyone’s imagination with visions of small devices that would translate the spoken word fully automatically from one language into another. It should be noted that this was a good fifteen years before the arrival of speech translation devices such as the Wearable Translator, ili and the Megahonyaku (all from Japan).

Around the turn of the century, we saw a wave of consolidation in what is now called the localization industry. Rather than working with a hundred or more small in-country single-language LSPs, large localization buyers like Microsoft increasingly preferred working with three or four large vendors that could simultaneously handle large numbers of language pairs and complete the localization of new software releases (e.g. SimShip). A spree of acquisitions started by Lernout & Hauspie was continued by large companies like Lionbridge and SDL after the former’s bankruptcy in 2001.

The overall growth in localization services and the trend to consolidate with a smaller number of vendors drove the need for more efficiency in the management of translation. The first challenge was that of bringing together translation memories (TMs) that were scattered in various places into one big database. A number of companies (e.g. Idiom, Uniscape, GlobalSight and eTranslate) jumped at the opportunity, investing millions of dollars in the development of globalization management systems (GMSs). In addition to the consolidation of all translation assets, the GMSs promised to connect all functions – customers, project managers and translators – across multiple vendors, in a single highly automated workflow.
For the GMSs to deliver on their promises, though, connectivity was crucial. Unfortunately, that was exactly where they often failed. In the mostly pre-web days, IT infrastructures depended on servers that had to connect seamlessly with other servers and with clients’ workstations. However, guaranteeing full compatibility between content management systems (CMSs), GMSs and the various TM tools already being used by a couple of hundred thousand translators turned out to be extremely challenging. This was an opportunity waiting to be seized by Mark Lancaster, founder of SDL, who acquired TRADOS, Idiom WorldServer and a whole suite of CMSs. Within a few years, he had repositioned SDL as a technology-driven service company and a ‘consolidator’ of translation technology.


No matter how great the benefits of globalization (and localization) were, by the end of the ‘connectivity era’ translation in most large Western enterprises was still not an integral part of a business process but a function that existed in isolation. This changed around 2010, at the start of the ‘age of web services’, when top-down globalization made room for a democratized form of globalization. Under the old export mentality, a company would ‘conquer’ locales one by one with a static, publisher-driven localization process. In the new ‘push and pull’ model, users play a big role in deciding which languages, content types and quality levels should be given priority.

The internet has given us access to a wealth of knowledge. Even information written in a language that is not our own is now just a click away. These translations might not be perfect, but in most cases a bad translation is better than no translation at all. We also see that MT is making its strongest comeback ever. In 2016, machines translated more than 250 billion words each day – roughly 100 times the total production of the global human translation workforce.

The era of web services is breaking with the connection complexity of the previous globalization decade by bringing everything down to a simple application programming interface (API). Integrating MT, for instance, is just a matter of using an API from Google Translate, Microsoft Translator Hub, Yandex or another provider on a pay-as-you-go pricing model. This relative ease of translating (online) content has led to a big increase in demand. As a result, some localization departments that would previously only have translated product documentation are now also being asked to work on customer support databases, training and human resource portals, social media, search engine optimization and digital marketing content.
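To give a concrete sense of how thin such an API integration can be, the sketch below assembles a request for a hypothetical REST MT endpoint. The URL, field names and response shape are invented for illustration only and do not match any specific provider’s API; real services each define their own authentication and payload formats.

```python
import json
import urllib.request

# Hypothetical endpoint -- real providers (Google, Microsoft, Yandex,
# etc.) publish their own URLs, credentials and request schemas.
MT_ENDPOINT = "https://api.example-mt.com/v1/translate"

def build_request(text, source_lang, target_lang, api_key):
    """Assemble an HTTP POST request for the hypothetical MT service."""
    body = json.dumps({
        "q": text,
        "source": source_lang,
        "target": target_lang,
    }).encode("utf-8")
    return urllib.request.Request(
        MT_ENDPOINT,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

def parse_response(raw_json):
    """Extract the translated text from the (hypothetical) response body."""
    return json.loads(raw_json)["data"]["translations"][0]["translatedText"]

req = build_request("Hello, world", "en", "nl", api_key="dummy-key")
print(req.get_method(), req.full_url)
# urllib.request.urlopen(req) would perform the actual call,
# billed per character or word on a pay-as-you-go plan.
```

A few dozen lines like these replace what, in the GMS era, required server-to-server integration projects.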
What has become clear is that enterprises that hold on to their old localization models are at risk of being outcompeted by companies that effectively embrace and adopt innovations offered by both start-ups and established translation providers. For the first time in history, translation is being regarded as a strategic function that can help companies become truly global. Translation has also become something that people expect to have available. What can be referred to as the ‘convergence era’ really started on 28 April 2006, with the official launch of Google Translate. It was never Google’s intention to disrupt the translation sector, but it did. Until Google Translate, translation had always been a professional service, affordable to business clients only. This new free online service suddenly made real-time translation available for everyone.


Despite ten years of criticism for poor quality, Google Translate has managed to develop an audience of hundreds of millions of users and to inspire many start-ups. Convergence will continue to manifest itself in different ways. The combinations of business and consumer markets, publisher-driven and user-driven, or push and pull will be drivers of change. The combination of technologies may also lead to interesting new offerings – speech technology plus MT, for example, leading to Babel Fish-like applications. Convergence will affect business models too: freemium services will convert to subscription plans and hook users onto platforms. In the convergence era we can expect to see a rapid expansion of translation technology categories, with, for instance, proxy-based platforms for easy translation of websites, dedicated tools for the localization of mobile apps, tools for the translation of subtitles, and community-translation platforms for the translation of user-generated content. Choudhury and McConnell (2013) predict that the current spree of start-ups in translation technology is only the beginning of what we can expect when convergence comes to full maturity: translation available anytime, anyplace and everywhere.

The next era – that of ‘singularity’, when advances in technology will lead to machines that are smarter than human beings – has been prefaced by an explosion of so-called intelligent translation solutions. The association of translation with artificial intelligence is obvious in the name of the latest generation of MT: neural MT (NMT). Everyone seems to want to jump on this newest bandwagon. Translation technology companies like SYSTRAN, SDL and Lilt are advertising self-learning and adaptive MT, whereas Microsoft and Facebook are already deploying NMT engines for some of their translation activities. For many others, though, it may not be until the early 2020s that NMT is fully implemented.
Apart from the futurologist Ray Kurzweil, who predicted that in 2029 MT will reach human translation (HT) quality levels,2 none of the academics I asked wanted to be tied to a prediction of when singularity will apply in the realm of translation technology. But they have no doubt that MT is continuing to improve. What TAUS predicts (van der Meer and Joscelyne 2017) is that the world will soon accept what we call fully automatic useful translation (FAUT) as the norm for translation. The explosion of intelligence inherent in singularity may, however, be hindered by restrictions on access to data or by the fact that some languages are under-resourced with respect to data. The future of AI in translation depends on access to data.


2.  Focal points in research and development

Because of the creative busyness and momentum of the translation technology industry, new and innovative technologies and tools are being introduced almost every day. Not every new breakthrough is here to stay, though, and only time will tell which innovations really matter. This section gives an overview of the main categories of translation technology, such as translation memory (TM) tools and translation management systems (TMSs), that are currently on the market and that have been, or still are, the focus of the translation industry’s research and development efforts. They range from controlled authoring tools for source texts to quality assurance tools for target texts, and all of the process management in between. Since these technologies are interconnected in many different ways, the categories are listed in alphabetical order for simplicity’s sake.

2.1.  App localization systems

App localization systems are usually localization proxies with limited TMS capabilities. Their main feature consists of dedicated plugins (connectors) to connect to the original integrated development environments (IDE) or the concurrent versions system (CVS; that is, the software development system that keeps track of all work and changes, and allows development teams to collaborate). Once integrated, the connectors automatically identify the translatable content within an app, upload it into the system and constantly synchronize it with the app itself. Some app localization systems also prepare a localized app for release and submit it to the distribution store. The most intriguing components in any app localization system are emulators, which provide pseudo-localization features to allow developers and localizers to deal with the intricacies of the limited space available on mobile device screens. Many app localization systems work in combination with external localization management platforms in order to provide functionalities like TM leveraging, progress monitoring and workflow management.
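The pseudo-localization just mentioned can be illustrated with a short sketch: the idea is to replace ASCII letters with accented look-alikes and pad each string, so that truncation, hard-coded text and encoding bugs surface before any real translation exists. The character map and the expansion factor below are illustrative choices, not a standard.

```python
# Pseudo-localization: make a string look 'foreign' and longer while
# keeping it readable, so layout and encoding problems surface early.
ACCENT_MAP = str.maketrans({
    "a": "à", "e": "é", "i": "ï", "o": "ö", "u": "ü",
    "A": "Å", "E": "É", "I": "Ï", "O": "Ö", "U": "Ü",
})

def pseudolocalize(text, expansion=0.4):
    """Return an accented, padded, bracketed version of `text`.
    `expansion` roughly mimics the length growth of real translations;
    the brackets reveal truncation at either end of the string."""
    accented = text.translate(ACCENT_MAP)
    padding = "·" * max(1, int(len(text) * expansion))
    return f"[{accented}{padding}]"

print(pseudolocalize("Save changes"))
```

Running an app with every UI string passed through such a function is exactly the kind of check an emulator-based pseudo-localization feature automates.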

2.2.  Audio–video captioning

Captioning tools are specialized software programs that digitally manipulate videos by making each frame accessible. Text is usually added to a sequence by marking the two ends between which a subtitle should appear. These markers are usually based on a timecode, the standard numeric sequence used as a time reference for editing, synchronization and identification. There is a large number of readily available applications for video subtitling that help subtitlers perform all pertinent tasks in front of a single screen. In addition to deciding the exact in- and out-time of each subtitle, they can monitor the reading speed and length of subtitles, and choose the positioning and colour of the subtitle text. Although some of it is still rather expensive, professional subtitling software has come within reach of many translators, thanks also to the emergence of many free and equally reliable tools. The spread of open-source and free subtitling software, together with the growth of the pervasive and unstoppable web, has given rise to fans subtitling foreign films and television programmes before they become commercially available in subtitled versions (see Orrego-Carmona and Lee 2017). This phenomenon of ‘fansubbing’ has further spurred the development of applications and platforms built for this specific purpose. These subtitling platforms are very easy to use and allow participants to concentrate solely on the linguistic task. An additional thrust has come from internet giants exploiting speech recognition technology for the automatic captioning of user-generated videos. In this instance, however, quality and accuracy vary according to language, voice clarity and the amount of background noise.
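The timecoded in- and out-times described above can be made concrete with a small sketch that renders a cue in the widely used SubRip (SRT) subtitle format, whose timestamps take the form `HH:MM:SS,mmm`. The helper names are ours; the format itself is real.

```python
def srt_timestamp(ms):
    """Convert a millisecond offset into SRT's HH:MM:SS,mmm notation."""
    hours, ms = divmod(ms, 3_600_000)
    minutes, ms = divmod(ms, 60_000)
    seconds, ms = divmod(ms, 1_000)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d},{ms:03d}"

def srt_cue(index, start_ms, end_ms, text):
    """Render one numbered SRT cue with its in- and out-times."""
    return f"{index}\n{srt_timestamp(start_ms)} --> {srt_timestamp(end_ms)}\n{text}\n"

print(srt_cue(1, 83_000, 86_500, "So how did we get to this point?"))
```

Subtitling tools layer reading-speed and line-length checks on top of exactly this kind of cue structure.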

2.3.  Community-translation platforms

User communities can represent an important factor in a translation or localization effort because of their valuable understanding of specific languages and cultures. Companies have learnt that engaging user communities and exploiting the huge amounts of information, experience and energy they bring to the task is essential for keeping their products relevant and, at the same time, for winning customer loyalty. Most translators active in user communities are non-professionals. This does not mean they cannot produce excellent translations. Using a platform that enables them to run a collaborative effort is essential, though. A community-translation platform is a way to integrate open knowledge into a collaborative network following a social media model (cf. Desjardins 2017). The fundamental ingredients of a community-translation platform are a role-based organizational model, a technological infrastructure for project management and translation, communication functionalities and, possibly, a voting mechanism. From this perspective, community-translation platforms are basically TMSs with an online-friendly editor and collaboration features (chat, file sharing, forums, etc.) for inviting and hiring translators or delegating work to contractors. Community translation does not necessarily mean crowdsourced translation, but a crowdsourced translation or localization project definitely requires a community-translation platform.
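The voting mechanism mentioned among the ingredients above can be as simple as tallying up- and down-votes per candidate translation and promoting the winner. The sketch below assumes an invented data shape (segment → candidate → score); real platforms add reviewer roles, tie-breaking and audit trails on top of this core.

```python
from collections import defaultdict

class CandidateVoting:
    """Minimal voting sketch: community members submit candidate
    translations for a source segment and vote them up or down."""

    def __init__(self):
        # segment -> candidate translation -> net vote score
        self.votes = defaultdict(lambda: defaultdict(int))

    def submit(self, segment, candidate):
        self.votes[segment][candidate] += 0   # register with zero votes

    def vote(self, segment, candidate, up=True):
        self.votes[segment][candidate] += 1 if up else -1

    def winner(self, segment):
        candidates = self.votes[segment]
        return max(candidates, key=candidates.get) if candidates else None

cv = CandidateVoting()
cv.submit("Sign in", "Se connecter")
cv.submit("Sign in", "Connexion")
cv.vote("Sign in", "Se connecter")
cv.vote("Sign in", "Se connecter")
cv.vote("Sign in", "Connexion")
print(cv.winner("Sign in"))  # Se connecter
```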

2.4.  Controlled authoring tools

In recent years, the increasing demand for automation and, more specifically, for MT has led to increased use of controlled language and controlled authoring as a means of improving content and managing translation costs (see Sin-wai 2017 for an overview). Controlled authoring tools help writers check the compliance of documents with predefined writing rules (style, grammar, punctuation, etc.) and approved terminology. The tools parse texts and bring deviations in style, grammar and terminology to the writer’s attention. Some tools allow for customization, that is, users can specify which writing rules and terminology restrictions should be applied. The beneficial effects of a standardized way of writing supported by controlled authoring tools include improved safety and customer service thanks to clearer communication, fewer recalls and liability claims, enhanced content management and considerable cost savings related to reduced time-to-market, lower word counts, shorter translation turnaround times and, finally, reduced translation costs (cf. Muegge 2009; Ó Broin 2009).
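The rule-checking step these tools perform can be sketched in a few lines: scan each sentence against a terminology blacklist and a style limit, and report the deviations. The rule set below is a toy illustration in the spirit of controlled-language checkers, not any real product’s rules.

```python
import re

# Illustrative rules: preferred terminology and a sentence-length limit.
BANNED_TERMS = {"utilize": "use", "prior to": "before"}
MAX_SENTENCE_WORDS = 25

def check_sentence(sentence):
    """Return a list of human-readable rule violations for one sentence."""
    issues = []
    for banned, preferred in BANNED_TERMS.items():
        if re.search(rf"\b{re.escape(banned)}\b", sentence, re.IGNORECASE):
            issues.append(f"terminology: replace '{banned}' with '{preferred}'")
    if len(sentence.split()) > MAX_SENTENCE_WORDS:
        issues.append(f"style: sentence exceeds {MAX_SENTENCE_WORDS} words")
    return issues

print(check_sentence("Utilize the wrench prior to assembly."))
```

Production tools go further – full grammatical parsing, customizable rule sets, in-editor highlighting – but the parse-and-flag loop is the same.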

2.5.  Globalization management systems

A globalization management system (GMS) is typically provided through an API that allows subscribers to customize the system’s appearance (the user interface) and manage the workflow. This includes tasks such as assigning roles and deadlines, monitoring costs, workload and quality parameters within a project, and managing the routing and distribution of content, while maintaining full control over every single step of the translation automation process, from file analysis to billing. Some systems also have version control functions to detect changes in the content to be translated and to trigger dispatching.

2.6.  Localization project management

Localization project management systems are translation industry verticals (i.e. industry-specific variants) of business-management software. They usually consist of a suite of integrated applications to collect, store, manage and interpret data coming from different business activities (from planning and purchasing to service delivery). With the help of a localization management system, users can plan projects, develop resource estimates, monitor scheduling, allocate resources, control costs and, in some cases, manage quality and run administrative tasks. Localization management systems were initially developed for software localization following the client–server model, but nowadays they are extending to web-based services that enable customers to upload their assets (i.e. digital files) and monitor localization progress (see Section 2.9).

2.7.  Machine translation platforms

MT may with good reason be considered the grandfather of all translation automation technologies. From its early days, MT has been thoroughly investigated and subject to intensive research and development efforts. With the introduction of statistical engines, MT made dramatic progress in terms of speed and quality, and it is now used in the context of professional translation. A good example is that of MT engines integrated within TM tools, which offer ‘suggestions’ for what is now called ‘interactive post-editing’ (also see Carl and Planas, this volume). The boom in free online services has turned MT into a viable mainstream technology. It has also triggered a paradigm shift in the demand for translation: today, MT is widely leveraged for content that was once doomed to oblivion because of the high cost of human translation. The so-called zero-translation option (i.e. not providing translated content at all) is now simply a thing of the past for many companies (for an overview of MT, refer to A. Way, this volume).

2.8.  Post-editing tools

The improvements in MT quality and the wide availability of platforms have led to an unprecedented demand for translation, making post-editing a common practice that saves both time and money. Post-editing is also an important technique for tracking and understanding issues in the performance of MT engines and for making the adjustments needed to improve them. TM tools can be used for post-editing and at the moment represent the best solution for integrating MT into the typical professional translator’s workflow. They do have three main limitations, though: cost (most are proprietary tools only available as part of a major product distribution), limited flexibility and a lack of detailed statistics about post-editing jobs.


Dedicated post-editing tools have therefore been developed over the past few years that follow a user-centric design to flatten the learning curve and increase user efficiency. These tools follow the same approach as professional translation environment tools: they segment texts into distinct chunks and display the source text beside or above the corresponding MT output, which can be edited in a separate window. Unlike professional TM tools, some post-editing tools can handle only XML-based formats and typically a bilingual format such as TMX. Others, though, such as MateCat and Lilt, can handle up to seventy different file formats (for an overview of post-editing, refer to Guerberof Arenas, this volume).

2.9.  Proxy-based localization management

Proxy-based localization management platforms exploit the relay principle of computer networks known as proxying. They are built around proxy servers that receive traffic from global audiences, who are routed through these servers to source websites. As visitors browse a proxied site, requests go back to the source site, where resources are rendered. The original-language content in the response is replaced by translated content as it passes back through the proxy. Localization proxy servers are typically reverse proxies that appear to clients as ordinary servers: responses are returned as if they came directly from the originating servers.

Building a global website and having it properly localized in all the desired locales takes planning, skill and investment that many organizations can hardly afford, even when reaching a global audience is a must. Proxy-based localization technology offers a way around internationalization issues that would otherwise require heavy re-coding to support multiple languages, especially for ecommerce websites.

The management component is an integrated or built-in translation management system: all textual content of a website is extracted by the localization proxy servers and channelled through it for translation. This component is meant to put project management functionality into the hands of localization buyers. When users request the content, localization proxy servers provide the translation, saving the owner of the originating platform the hassle of localizing the content and storing all the localized versions – not to mention the nuisance of adding alternate languages down the road.

When properly configured as reverse proxy servers, localization proxy servers can hide the existence and characteristics of the origin servers, thus offering an additional security layer.


They can also distribute the load, cache and optimize content, and rewrite URLs to match geographical or localization criteria.
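The content-substitution step at the heart of a localization proxy can be sketched as follows. This is a deliberately naive illustration: real platforms operate on extracted text segments and TM lookups rather than raw substring replacement, and the translation table here is invented:

```python
# Illustrative store of extracted source strings and their translations,
# standing in for the proxy's built-in translation management system.
TRANSLATIONS = {
    "Welcome": "Bienvenue",
    "Add to cart": "Ajouter au panier",
}


def proxy_response(source_html: str, translations: dict = TRANSLATIONS) -> str:
    """As the response passes back through the proxy, original-language
    content is swapped for stored translations before it reaches the
    visitor. The origin server is never modified."""
    for src, tgt in translations.items():
        source_html = source_html.replace(src, tgt)
    return source_html
```

From the visitor’s point of view the response appears to come directly from an ordinary server, which is exactly the reverse-proxy behaviour described above.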

2.10.  Quality assurance tools

Quality assurance (QA) tools are software tools that compare the source and target segments of bilingual texts in order to identify and resolve formal errors in bilingual translation files and TMs. They can also detect formatting, consistency, grammar, punctuation, capitalization and spelling errors in the target texts; target segments that are identical to source segments; untranslated segments; and omissions and incorrect non-translatables in the target text. Moreover, these tools can check compliance with project glossaries. Other typical checks cover segment length, repeated words, double spaces, unit conversions, quotation marks, brackets and tags. In most tools, checking routines can be created using ‘regular expressions’ (i.e. a standard syntax for describing the patterns of text to be checked).

QA tools have a number of intrinsic limitations, the most important being their high vulnerability to false positives. This is generally due to the differing grammatical rules of the source and target languages. In addition, QA tools work on the assumption that the source text is correct, whereas a mistake in the source text may have been rectified in the translation. Although they cannot replace human editors and proofreaders – and despite their limitations – QA tools can help save time, especially with large volumes, as the typical checks they perform are too time-consuming, too tedious – or simply impossible – to carry out manually. QA tools are an irreplaceable aid for translators, no matter how experienced they might be. All modern translation environment tools include a variety of QA functions. They are also invaluable in preparing files for post-editing or data for statistical MT engines. In some cases, QA tools can even be used to assess the quality of a translation sample.
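A few of the segment-level checks described above can be sketched with regular expressions. The checks and their wording are illustrative, not taken from any particular QA tool:

```python
import re


def qa_check(source: str, target: str) -> list:
    """Run a handful of illustrative QA checks on one bilingual
    segment pair and return the issues found."""
    issues = []
    # Target identical to source usually means an untranslated segment.
    if source == target:
        issues.append("untranslated segment")
    # Two or more consecutive spaces in the target.
    if re.search(r"  +", target):
        issues.append("double space")
    # Unbalanced parentheses in the target.
    if target.count("(") != target.count(")"):
        issues.append("unbalanced parentheses")
    # Numbers in the source should reappear in the target. Note this is
    # a classic source of false positives (e.g. localized number formats).
    if sorted(re.findall(r"\d+", source)) != sorted(re.findall(r"\d+", target)):
        issues.append("number mismatch")
    return issues
```

Running such routines over thousands of segments is exactly the kind of tedious, mechanical verification that is impractical to do manually.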

2.11.  Speech-to-speech translation

The goal of speech-to-speech (S2S) translation is to enable instant oral cross-lingual communication between people who do not share a common language. S2S translation is the nearest thing to the universal translator devices depicted in many science fiction books and movies. Since 2013, technology giants have brought S2S translation to the real world with popular applications, mainly intended for mobile devices, like Google’s Voice Translator and Microsoft’s Skype Translator. This has been possible thanks to the combination of existing technologies that have become more accurate (i.e. speech recognition, MT and speech synthesis).

A typical S2S translation system operates via a three-stage process: an automatic speech recognition (ASR) unit converts spoken language into text; then an MT engine produces a translation of this text in the target language; finally, a text-to-speech (TTS) unit converts the resulting text into speech. The speaker of a source language speaks into a microphone, and the ASR unit recognizes the utterance, compares the input with a phonological model – consisting of a corpus of speech data – and converts it into a string of words, using a dictionary and grammar of the source language based on a corpus of text in that language. Synthesized speech is created either by concatenating pieces of recorded speech stored in a database or by modelling the vocal tract and other human voice characteristics to produce a completely synthetic voice output. In adverse environments, noise and distortion are problematic both for speech recognition and, consequently, for MT.
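The three-stage pipeline above can be sketched as a composition of the ASR, MT and TTS units. The stages here are toy stand-ins (lookup tables) for the trained models a real system would use; only the pipeline structure is the point:

```python
def recognize(audio: bytes) -> str:
    """ASR stand-in: a real unit compares the input with a phonological
    model and emits a string of words; here a lookup table suffices."""
    return {b"\x01": "where is the station"}.get(audio, "")


def translate(text: str) -> str:
    """MT stand-in for a trained engine."""
    return {"where is the station": "wo ist der bahnhof"}.get(text, "")


def synthesize(text: str) -> bytes:
    """TTS stand-in: a real unit concatenates recorded speech units or
    models the vocal tract; here the text is simply encoded."""
    return text.encode("utf-8")


def speech_to_speech(audio: bytes) -> bytes:
    """The three-stage S2S pipeline: ASR, then MT, then TTS."""
    return synthesize(translate(recognize(audio)))
```

Because each stage feeds the next, errors compound: noise that degrades the ASR output degrades the MT input in turn, which is why adverse environments are doubly problematic.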

2.12.  Terminology management tools and repositories

Terminology tools are used to compile, mine, register, store, organize, search, translate and share terms and terminological databases (termbases) at a customer, project or asset level in order to ensure that the correct terms are used consistently according to a set of rules. A basic terminology tool is a searchable database that contains a list of approved terms, with rules and annotations regarding their usage. Most terminology tools are integrated into TM tools and typically used in conjunction with TMs to pre-translate recurring words and phrases, and to avoid translating items that should be left as is (e.g. non-translatables such as brand names). Terminology tools usually also include a mining component to extract relevant terms from a large, structured set of texts (a corpus) using a statistical (frequency-based) or linguistic (morpho-syntactic) approach, sometimes in conjunction with an existing termbase to detect new occurrences. These tools prompt the user with a list of plausible terms (candidates) to choose from. Term extraction can, for example, be applied to parallel corpora to create bilingual glossaries (see Bowker, this volume, for an in-depth treatment of terminology extraction and management).
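The statistical (frequency-based) mining approach can be sketched as follows: recurring content words and adjacent-word pairs are proposed as candidates for a human to confirm. The stopword list and thresholds are invented for the example:

```python
import re
from collections import Counter

# Tiny illustrative stopword list; real tools use much larger ones.
STOPWORDS = {"the", "a", "of", "is", "to", "and"}


def candidate_terms(corpus: str, min_freq: int = 2) -> list:
    """Frequency-based term mining: return recurring unigrams and
    bigrams as term candidates, sorted alphabetically."""
    tokens = [t for t in re.findall(r"[a-z]+", corpus.lower())
              if t not in STOPWORDS]
    unigrams = Counter(tokens)
    # Bigrams are built on the stopword-filtered stream: a known
    # simplification, since it can join words across removed stopwords.
    bigrams = Counter(zip(tokens, tokens[1:]))
    cands = [w for w, n in unigrams.items() if n >= min_freq]
    cands += [" ".join(b) for b, n in bigrams.items() if n >= min_freq]
    return sorted(cands)
```

The output is exactly a list of plausible candidates: the tool proposes, the terminologist disposes.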


Termbases, which are essentially databases of terminological data, encourage consistent vocabulary and style and help prevent inconsistent translations of frequently recurring phrases. With the convenience offered by the internet and related technologies, larger organizations are increasingly likely to consolidate their termbases and migrate them to central storage locations from which users may search the databases and retrieve any terminological information they need. The publicly accessible termbases generally belong to international organizations, contain multilingual terminology data and are free of charge, requiring only registration. The organizations running these databases typically provide a simplified search interface for the public and a staff-only interface for manipulating the data stored in the database.

2.13.  Translation apps

As of June 2018, there were roughly 4.6 billion mobile phone users in the world3 (61 per cent of the global population), and an estimated 55 per cent of the world’s population had access to the internet. These figures point to a global population on the move, with an ever-growing need to communicate across languages. This obviously entails an increasing demand for translations to help mobile users around the world find their way through the maze of messages and landmarks in unfamiliar languages. Since its launch in 2007, Google has tried to meet this demand by bringing MT to the masses, making it free, quick and easy, and thereby contributing significantly to breaking down language barriers. A decade later, Google Translate counted 500 million daily users, and other providers such as Microsoft, Baidu and Yandex are available, as well as numerous corporate and governmental MT services.

App stores currently offer hundreds of translation-related apps to translate signs and print-outs using barcodes or QR codes, spoken text transmitted through megaphones or a mobile device microphone, or messages within the most common messaging apps. Most of them rely on Google’s services, thus requiring an internet connection, but they do strive to distinguish themselves with some useful (or just cute) features. There are also wearable translation devices shaped as pendants or small earpieces, but it is still too early to report any empirical evaluations of their effectiveness.

2.14.  Translation management systems

Translation management systems (TMSs) are the natural evolution of translation servers, as they enable translation businesses to monitor projects and control every single task according to a predefined workflow. The implementation of a TMS helps eliminate manual tasks, thus cutting overhead costs and increasing project execution speed. As outlined below, many state-of-the-art desktop TM tools now provide some basic translation management functions, but most TMSs are web-based and delivered under the SaaS (software as a service) licensing model. The SaaS model helps avoid the wasted time, cost and product limitations of traditional business applications, achieving a more efficient, centrally controlled business and allowing access to all applications from anywhere (as long as an internet connection is available) without having to worry about servers, office space, power and so on.

Capital expenditure is reduced because a translation business does not have to purchase servers or full copies of software. Because no local installation is required, software can be deployed more quickly while human resources are free to focus on core business activities.

Captive translation management systems can be either implementations of commercial SaaS systems for the exclusive use of a company’s clients (as part of a service agreement) or proprietary platforms specifically developed by a translation business to enhance process automation. In both cases, upgrades and improvements are made by the technology owner and can be limited in scope. Customers looking for greater flexibility can pursue customized solutions that are not tied to a single vendor, where upgrades and improvements are made separately and in different stages by the various providers. These solutions require a much higher degree of integration, a longer time for design and implementation, and more highly specialized dedicated staff than non-customized solutions.
There are drawbacks, however, ranging from security and data integrity issues to the recurring cost of paying for the services, and from dependence on internet connectivity to lower flexibility and higher customization costs. On the plus side, the SaaS model offers a real-time collaborative translation platform, with parallelized processes instead of the typical serial approach. Licensing is generally done via subscription, with different payment plans for linguists and for companies of different sizes.

2.15.  Translation memory tools

A TM is a repository of previously created human translations that are meant to be reused. Typically, the system searches for matches between source–target language pairs stored in the TM and presents them as translation candidates. In this respect, a TM is expected to boost productivity by increasing efficiency, because translators may – or even must – leverage, partially or totally, previously translated texts. TMs have been around since the late 1980s, but they became commercially viable only in the late 1990s, when the ideas behind them – reusability and leveraging – gave rise to a new business logic based on discounts and higher productivity. Reusing and leveraging existing translations for documents and projects with a lot of repetition can yield significant savings in terms of costs and time.

Nowadays, most TM tools can support and facilitate the main steps of the translation process: from terminology mining and management (for consistency and accuracy) to multiple-format conversion (for code, formatting and layout protection); from word counts to job statistics; and from QA to MT integration. The majority of TM tools currently available provide project management functions, varying from basic to sophisticated, that help translation providers control overhead costs and plan budgets.

TM tools can be dedicated desktop systems or follow the client–server model, an application structure that partitions workloads between a service provider (server) and clients over a communication network. The server hosts and runs one or more applications that share resources with the clients. This model has been quite popular among translation businesses and departments looking for a way to centralize their resources. The typical implementation consists of a server – usually hosted on the customer’s premises – which, in turn, interacts with client software installed on employees’ and translators’ machines. Despite the many advantages offered by a centralized architecture, there has always been a major issue: interoperability (i.e. the capability of a product or system to work with other products or systems).
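The matching step that produces translation candidates can be sketched with the standard library’s `difflib.SequenceMatcher`. Commercial TM tools use their own (often proprietary) fuzzy-match algorithms and percentage bands; the TM entries and the 70 per cent threshold below are invented for the example:

```python
from difflib import SequenceMatcher

# Toy translation memory: source segment -> stored human translation.
TM = {
    "Click the Save button.": "Cliquez sur le bouton Enregistrer.",
    "Click the Cancel button.": "Cliquez sur le bouton Annuler.",
}


def best_match(segment: str, tm: dict = TM, threshold: float = 0.7):
    """Return (match percentage, stored source, stored translation) for
    the closest TM entry, or None if nothing clears the fuzzy-match
    threshold. This is how a TM tool decides which candidate to offer."""
    scored = [(SequenceMatcher(None, segment, src).ratio(), src, tgt)
              for src, tgt in tm.items()]
    score, src, tgt = max(scored)
    return (round(score * 100), src, tgt) if score >= threshold else None
```

An identical segment yields a 100 per cent match; a near-identical one yields a fuzzy match that the translator edits, which is the basis of the discount logic mentioned above.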
The lack of fully applied standards in translation is the main obstacle to interoperability. Translation software generally uses exchange formats (TMX, CSV, etc.) to try to ensure interoperability. Unfortunately, this often leads to a loss of information, reducing interoperability to mere compatibility. Users are increasingly hostile to being locked in with any software vendor and are constantly searching for ways to reduce their total cost of ownership (TCO) while coping with demands for ever-higher productivity. As a solution to the high IT TCO associated with a client–server solution, some translation technology providers have developed cloud-based interfaces and/or on-demand licensing systems; others have retooled their client–server solutions as cloud-based products. The popularity of cloud-based translation is due to the need on the translation vendors’ side to maintain full control over their TMs, terminology databases, vendor bases and project data, as well as the stringent security requirements sometimes imposed by clients. The most interesting features of web- or cloud-based TM tools are the inherent support for cross-platform compatibility, the ability to support users on mobile devices, and the licensing model (per user or volume-based), which enables customers to upgrade and downgrade as needed. Quick updates of web applications are also possible without the need to distribute and install new software.
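TMX, the XML exchange format mentioned above, stores translation units (`tu`) with one language version (`tuv`) per language. A minimal reader can be sketched with the standard library; the sample file content is invented, but the element structure and the `xml:lang` attribute follow the TMX format:

```python
import xml.etree.ElementTree as ET

# ElementTree exposes xml:lang under the predeclared XML namespace.
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

SAMPLE_TMX = """<tmx version="1.4"><header/><body>
  <tu>
    <tuv xml:lang="en"><seg>Save your work.</seg></tuv>
    <tuv xml:lang="de"><seg>Speichern Sie Ihre Arbeit.</seg></tuv>
  </tu>
</body></tmx>"""


def read_tmx(tmx_text: str, src: str = "en", tgt: str = "de") -> dict:
    """Read source-target pairs from a TMX document, the interoperable
    way to move a TM between tools."""
    pairs = {}
    for tu in ET.fromstring(tmx_text).iter("tu"):
        segs = {tuv.get(XML_LANG): tuv.findtext("seg") for tuv in tu.iter("tuv")}
        if src in segs and tgt in segs:
            pairs[segs[src]] = segs[tgt]
    return pairs
```

Note what such an exchange typically drops: tool-specific metadata, formatting tags and match statistics, which is precisely how interoperability gets reduced to mere compatibility.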

3.  Informing research through industry

Only ten years ago, few LSPs and translators would even have considered using MT in a translation job. Today, more than 99 per cent of all the translations produced in the world each day may well be produced by machines. Does this mean the end of the translation industry and the translation profession? Not yet. Despite the amazing pace of technological development, artificial intelligence (AI) scientists unanimously agree that true mastery of language is going to remain out of reach for software, at least in the near future.4 The trends discussed below have been underway for some time now and have reached the plateau of productivity. However, these advances in technology remain to be critically evaluated with respect to their potential effect on translation as a profession and on the language services industry in general.

3.1.  The cloud

Today, speed is more vital than ever. Companies are churning out their products at an ever-increasing pace in an effort to achieve a competitive advantage. Translation is crucial to reaching new markets and to meeting the expectations of customers. Scalability is key, and that is exactly what cloud-based applications and cloud services offer. Cloud-based solutions offer on-demand access to shared data and other resources. Applications and services can be deployed and provisioned rapidly and conveniently, with a minimum of management effort. Cloud computing allows companies to avoid upfront infrastructure costs while getting their applications up and running faster. Companies can also adjust resources to meet business demand, priced on a typical pay-as-you-go basis – similar to prepaid mobile phone tariff plans.

Because of these benefits, an increasing number of businesses are migrating to the cloud. To meet their needs, new cloud platforms are being built, producing high-value and vertically focused solutions. It has been estimated that almost 90 per cent of new software applications are going to be deployed in the cloud.5 The low cost of services, scalability, accessibility and availability make this technology interesting for clients accustomed to SaaS, and even for the very traditional and conservative users in the translation industry. Still, many translation software providers have been reluctant to migrate their applications to the cloud, leaving room for newcomers and start-ups from outside the boundaries of the traditional translation software markets. This resistance to cloud translation platforms, or at least the sluggishness of their spread, is presumably due to their inherent raison d’être: remote administration and resource control. However, there are issues with respect to delays in responsiveness (i.e. in retrieving segments for translation or the corresponding suggestions) and the need to be constantly connected to the internet.

Industry players are paying more attention to security and privacy issues than to efficiency, convenience and cost-effectiveness. Data ownership and integrity have always been a major concern in the industry. In that regard, cloud computing raises serious issues because the service provider can access customer data and, whether accidentally or deliberately, can alter or even delete it. In addition, many governments require cloud technology providers to share customer data, even without a warrant – a clause which is often included in agreements.

Nevertheless, the benefits of cloud solutions seem obvious. Sharing TMs means that all members of a team can almost instantly use them and reuse updates. Maintenance is centralized and managed. Many tasks can be simplified and automated from a central point. This is probably why TMS providers were the first to migrate their products to the cloud.
Unfortunately, only a few of these products are mobile-ready, so TMS providers will probably soon be releasing mobile apps and mobile-ready versions of their web applications to meet current trends in procurement and sales. The most important applications of cloud computing on the translation side can be found in the MT sector. MT was traditionally a professional service rendered with high-tech systems sold by sales engineers. These were very complex systems requiring expensive infrastructural investments and long lead times as well as highly skilled staff to implement, configure and run them. As such, MT has long been out of reach for most organizations. But the rise of cloud computing has made even state-of-the-art and constantly improving MT services accessible to the general public, providing high-quality, user-specific translations at a very low cost.


This trend could change with the rise of so-called adaptive MT (cf. Carl and Planas, this volume). If pre-translation could be run through a combination of TMs and suggestions from MT engines, then the need for post-editing could rapidly become a thing of the past. MT could return to being a back-office task for highly specialized people, because engines will need to be tuned and adjusted dynamically in real time, making all subsequent MT predictions more intelligent, informed and accurate. The computing power available in the cloud also allows for powerful analytic tools that can help anticipate evidence-based decisions on various items, like language pairs, domains, engines and jobs for which MT makes true business sense. The need for research is clear.

3.2.  Datafication of translation

Data entered the field of MT in the late 1980s and early 1990s, when researchers at IBM’s Thomas J. Watson Research Center reported successes with their statistical approach to MT. Until that time, MT had operated more or less the same way as generative linguists had assumed human translators worked, with grammars, dictionaries and transfer rules as the main tools. The syntactic and rule-based MT engines appealed more to the imagination of linguistically trained translators, who recognized those engines’ limitations, while the purely data-driven MT engines with probabilistic models turned translation technology into more of a threat to many translators. This might have been not only because the quality of the output improved as more data were fed into the engines but also because so many people had trouble conceiving what was really happening inside those machines. The Google researchers who also adopted the statistical approach to MT published an article under the telling title ‘The Unreasonable Effectiveness of Data’ (Halevy, Norvig and Pereira 2009). Sometimes even the statisticians themselves wondered why metrics went up or down, but one thing seemed to be consistently true: the more data the better.

Around that same time, in 2008, TAUS and its members founded the Data Cloud. The objective of this data-sharing platform was to give more companies and organizations access to the good-quality translation data needed to train and improve their MT engines. The TAUS Data Cloud is probably now the world’s largest repository of translation data.

As in other businesses and industries, data are marching into the translation sector to teach machines to take decisions and gradually take over human tasks, even on the management side. A good example of this is choosing the best translator for a job. In a classic translation agency this is typically done by a project manager. In more modern companies, however, algorithms have taken over this task. Pressing questions that need to be addressed now are: how do we aggregate these data, and how do we make sense of them in order to optimize and automate processes?

The trend towards datafication of translation is leading to an increased focus on visualization, benchmarking and machine learning. More and more providers and buyers of translation will visualize data on dashboards to report on projects, benchmark their translation resources and technologies and decipher trends (cf. Lommel 2018). The more advanced translation platforms use data to develop algorithms and code them into their products to automate tasks, such as finding resources, matching content types with the right tools and processes and predicting quality. Translation companies will be looking out for data specialists who can help mine data and develop the algorithms that automate and optimize management processes.

But what if different data tell us different things? Language service and technology providers typically differentiate themselves in many ways to highlight their value to their customers. They use different terminology, different metrics and different definitions, as well as different quality levels, matching algorithms and segmentations. For data to make sense and for machines to learn and be useful across the industry, it is important for the stakeholders to work together towards harmonization. An interesting initiative in this respect has already started with the creation of the TAUS DQF Enterprise User Group, in which companies like Microsoft, Lionbridge, eBay, Cisco, Welocalize, Intel, Oracle and Alpha CRC work together to agree on metrics for quality evaluation, quality levels, nomenclature, categories and counting.
The objective of this collaboration is to get comparable data sets to enable industry benchmarking, which should be subjected to evaluation by independent researchers. This will lead to business intelligence, which in turn will help us develop metrics and algorithms that can be used to teach machines to work for us across platforms and vendors.
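The shift from a project manager’s judgement to algorithmic job assignment can be sketched as a simple scoring function over recorded performance data. The fields and weights below are invented for illustration and are not taken from any real platform:

```python
def rank_translators(job: dict, translators: list) -> list:
    """Data-driven job assignment: rank translators by a weighted score
    of recorded metrics instead of a project manager's gut feeling.
    Weights (0.5 / 0.3 / 0.2) are arbitrary, illustrative choices."""
    def score(t: dict) -> float:
        domain_fit = 1.0 if job["domain"] in t["domains"] else 0.0
        return (0.5 * t["quality"]        # historical quality score, 0-1
                + 0.3 * domain_fit        # subject-matter match
                + 0.2 * t["on_time_rate"])  # delivery reliability, 0-1
    return sorted(translators, key=score, reverse=True)
```

The point is not the particular formula but that every term in it is learned from aggregated project data, which is why harmonized metrics across vendors matter.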

4.  Informing industry through research

Some of the trends discussed below have just left the restricted circles of academic research and are entering the commercial arena (e.g. neural machine translation, or NMT), while those discussed in the previous sections have been around for a while but have not yet necessarily been critically evaluated. Either way, they are all expected to play a major role in the evolution of the translation industry. Datafication and quality might seem strange bedfellows; both are long-debated topics in the industry, but advances in technology are opening up new possibilities for their exploitation and assessment.

4.1.  Neural MT and machine learning

There is an ongoing revolution in the machine learning world that is taking the performance of many (if not all) machine learning applications to a new level. The core element of this change is deep learning (DL) technology, inspired by human knowledge about the biological brain (cf. Nielsen 2015). DL has already led to great improvements in several data-centred fields, such as speech recognition, computer vision, user behaviour prediction and nonlinear classification.

The last significant breakthrough in the technology of SMT came in 2005, when Chiang published a paper on hierarchical translation models that allowed significant improvements in the quality of MT between distant languages. Now we are standing on the verge of an even more exciting moment in MT history: DL is taking MT towards much higher accuracy and is finally bringing human-like fluency to the translation process (cf. Forcada 2017). High-performing DL has now reached the world of commercial translation automation. At the end of 2014, Google presented an elegant and simple approach based on the use of recurrent artificial neural networks (ANNs) with long short-term memory. The results were more than encouraging: the NMT system not only outperformed state-of-the-art SMT systems but also translated longer sentences with higher accuracy. Facebook, Microsoft and Bing are in the process of implementing or have already implemented NMT. Traditional MT providers, like SYSTRAN, are launching NMT systems to boost the quality of their MT services; DL models bring a new perspective to MT and open the way to new applications of indirect NMT integration.

At the end of the day, DL has the potential to become a technology that brings human-like intelligence to MT. It is currently on the verge of breaking the quality barrier, making MT smarter. The improvements offered by DL-based MT mean that automatic translation systems will commit fewer errors and will generate higher-quality output, which is the decisive factor for both professional translation services users and end users. The positive impact of the application of deep networks can already be seen in some general-purpose and customized MT systems, although its potential is still far from being fully realized.

4.2.  Quality

The topic of quality has been debated ever since translation took its place in the world. Some people believe that quality depends on processes and must be moulded into project deliverables; others believe that quality can only be evaluated downstream; and still others believe that quality is what happens when requirements are met and customers keep buying or using a product (cf. Koby et al. 2014). Standards and technology play a major role in meeting quality requirements. So far, translation quality standards have not improved the image projected by the translation industry and its players, while automatic evaluation of translation quality is yet to become a reality despite the fast-paced technology developments currently taking place.

With the exponential escalation in the number of language pairs and volumes needed to meet the booming demand for multilingual content supporting businesses’ international growth, quality has become a major concern for customers. Because of the acknowledged information asymmetry, however, they do not have the skills or tools to rate the translation quality they are buying. Many large LSPs are investing heavily in the development of quality-related decision-support tools to provide translation project insights – from document classifiers to style scorers – most of which will greatly benefit from better and cheaper machine learning platforms.

On a different and larger scale, a wide consensus seems to have been reached on a few topics that have been debated for quite some time in the translation community. These include:

1. quality requirements vary with different content types;
2. processes should be automated and streamlined as much as possible; and
3. cognitive load on translators should be optimized to increase productivity.
The Bloomsbury Companion to Language Industry Studies

In the last few years, the one-size-fits-all approach to quality has shown its limits and, with the widespread availability of many free business tools, our industry has been trying to use technology to shift gradually to a more mature stage. Translation companies have started running surveys and doing research to monitor customer experience. And although most of this research is only locally relevant (e.g. van der Meer et al. 2017), it still contributes to making the localization process more efficient. With the extended application of TMs and MT, in addition to new analysis tools, translation quality can be achieved at scale, especially within those companies reaching out to international markets.

However, customers in all industries know by now that translating content is not enough to win international markets and build global brands. They expect their language partners to be able to get the most from the latest digital marketing techniques for a better understanding of locales and demographics. The objective is to deliver tailored experiences that drive increased usage through increased local relevance.

The most commonly asked question about translation quality is how it can be measured. Because translation is human work, the intrinsic quality of a translation is often settled by personal taste. Both theorists and professionals agree that there is no single objective way to measure quality and that different models assess different elements (cf. Moorkens et al. 2018). Up to now, the most common way to assess the translation quality of various MT systems has been to measure the number and magnitude of errors through a parallel analysis of input and output. Things are gradually changing, though. Efforts are being made to establish a common model or definition of quality and to translate this into a set of parameters for measuring each element. Buyers want to know what they buy and what it is worth. Measurement should be used to reduce buyers' uncertainty by providing them with factual data, helping them assess the translation effort, budget for it and evaluate the product they will eventually receive.
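The error-counting approach to quality evaluation described above can be sketched in a few lines. The categories, severity weights and per-1,000-words normalization below are illustrative assumptions in the general spirit of weighted error metrics, not the official DQF-MQM parameters:

```python
# Sketch of a weighted error score for translation quality evaluation.
# The severity weights and per-1,000-words normalization are illustrative
# assumptions, not the official DQF-MQM parameters.

SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def error_score(errors, word_count):
    """Weighted error points per 1,000 translated words (lower is better)."""
    penalty = sum(SEVERITY_WEIGHTS[severity] for _, severity in errors)
    return 1000 * penalty / word_count

# Example: two minor terminology errors and one major accuracy error
# found in a 500-word translation.
errors = [("terminology", "minor"), ("terminology", "minor"), ("accuracy", "major")]
print(error_score(errors, 500))  # 14.0
```

A score like this reduces the information asymmetry mentioned above: it gives buyers a single comparable number rather than a subjective impression.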
The TAUS Quality Dashboard is a perfect example of such a model, based on the harmonized DQF-MQM metrics (TAUS 2016). For years, buyers have been presented with different measurements, the first of which is still price. Recently, other items have been added to the list: productivity, customer satisfaction and vendor capability, as well as other service-level indicators. Efforts have also been made to build quality indexes. In the last two years we have seen growing discussion of key performance indicators (KPIs) that could be leveraged to change the perception of translation from cost centre to revenue enabler. The next stage of this modernization would be to measure content for assessment purposes, allowing buyers to form reasonable expectations of outcomes (i.e. to predict quality) and further reducing uncertainty.

Translation Technology – Past, Present and Future


4.3.  New developments

On 29 January 2016, in an article for the Wall Street Journal, Alec Ross, former senior adviser for innovation to US Secretary of State Hillary Clinton, wrote that 'the language barrier is about to fall' and that 'within 10 years, earpieces will whisper nearly simultaneous translations – and help knit the world closer together'.6 Alec Ross's article came seven years after President Barack Obama's commitment to 'automatic, highly accurate and real-time translation' as expressed in the first 'Strategy for American Innovation' paper. A more recent version of this paper – released in 2015 after the State of the Union address – contains another important statement on America's innovators, who are 'eliminating barriers to global commerce and collaboration with real-time language translation'.7

Such tools would be heirs of publicly funded – mainly defence – projects and would be based on fairly mature technologies such as automatic speech recognition (ASR) and speech synthesis from text (TTS). However, the core of any speech-to-speech translation (S2ST) tool is the MT engine, and many current S2ST tools still use general-purpose engines, which have become their Achilles heel because of their intrinsic limitations. Major improvements are expected to come from NMT: S2ST technology has very high potential, and S2ST systems are going to be among the most sought-after solutions for crashing through the language barrier. They will soon be in demand globally, especially in sectors like tourism, education and healthcare, mostly to increase productivity. Companies such as Google and Microsoft are investing heavily in research and development8 and have already released the first applications (e.g. Hassan et al. 2018). These are still in the early stages, though, and no major breakthrough is expected immediately. However, contrary to what happened with MT, it is doubtful that S2ST will ever be an integral part of the translation industry.
S2ST will probably remain a conversation technology, a sort of natural extension of digital assistants. Linguistic research might be involved in supporting these developments, providing training data for machine learning to solve issues for smaller languages.
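The S2ST architecture described above chains three components: ASR, MT and TTS. The sketch below shows that wiring; all three component functions are hypothetical stand-ins for real engines (a cloud ASR service, an NMT system and a TTS voice), since the point here is only the pipeline structure:

```python
# Minimal sketch of a speech-to-speech translation (S2ST) pipeline:
# ASR -> MT -> TTS. The three component functions are hypothetical
# placeholders for real engines; as noted above, the MT engine in the
# middle is the core (and weakest) link of the chain.

def recognize_speech(audio: bytes, lang: str) -> str:
    """ASR stand-in: would return a source-language transcript."""
    return "where is the train station"  # placeholder output

def translate(text: str, src: str, tgt: str) -> str:
    """MT stand-in: in a real system, an NMT engine sits here."""
    return "où est la gare"  # placeholder output

def synthesize(text: str, lang: str) -> bytes:
    """TTS stand-in: would return target-language audio."""
    return text.encode("utf-8")  # placeholder 'audio'

def speech_to_speech(audio: bytes, src: str, tgt: str) -> bytes:
    transcript = recognize_speech(audio, src)       # speech -> source text
    translation = translate(transcript, src, tgt)   # source -> target text
    return synthesize(translation, tgt)             # target text -> speech

print(speech_to_speech(b"...", "en", "fr"))
```

A general-purpose engine dropped into `translate` is exactly the Achilles heel noted above: errors made there are passed on verbatim to synthesis.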

5.  Concluding remarks

According to futurologist Ray Kurzweil, the era of singularity in the translation industry is more than a decade away, so it will be a long time before we have a device at our disposal that allows instantaneous communication with people in the farthest corners of the world.


There is still much room for improvement in MT as AI techniques are applied. We are eagerly awaiting the singularity era, when translation will become inexpensive, prompt and easy. The role of professional translators will not vanish, but it will evolve – again – through technology.

Notes

1 The core of IBM TM/2 was made available to the open-source community in 2010 (see http://www.opentm2.org/translators/).
2 https://futurism.com/kurzweil-claims-that-the-singularity-will-happen-by-2045
3 https://www.statista.com/statistics/274774/forecast-of-mobile-phone-users-worldwide/
4 https://www.technologyreview.com/s/602094/ais-language-problem/
5 https://www.idc.com/getdoc.jsp?containerId=prUS43185317
6 https://www.wsj.com/articles/the-language-barrier-is-about-to-fall-1454077968
7 https://obamawhitehouse.archives.gov/sites/default/files/strategy_for_american_innovation_october_2015.pdf
8 https://slator.com/technology/google-microsoft-baidu-step-up-nmt-research-in-record-august-as-bleu-takes-heat/

References

Chiang, D. (2005), 'A Hierarchical Phrase-based Model for Statistical Machine Translation', in ACL-2005: 43rd Annual Meeting of the Association for Computational Linguistics, 263–70, Ann Arbor.
Choudhury, R. and B. McConnell (2013), Translation Technology Landscape Report, De Rijp: TAUS BV.
Desjardins, R. (2017), Translation and Social Media: In Theory, in Training and in Professional Practice, London: Palgrave Macmillan.
Forcada, M. L. (2017), 'Making Sense of Neural Machine Translation', Translation Spaces, 6 (2): 291–309.
Halevy, A., P. Norvig and F. Pereira (2009), 'The Unreasonable Effectiveness of Data', IEEE Intelligent Systems, 24 (2): 8–12.
Hassan, H., A. Aue, C. Chen, V. Chowdhary, J. Clark, C. Federmann, X. Huang, M. Junczys-Dowmunt, W. Lewis, M. Li, S. Liu, T. Y. Liu, R. Luo, A. Menezes, T. Qin, F. Seide, X. Tan, F. Tian, L. Wu, S. Wu, Y. Xia, D. Zhang, Z. Zhang and M. Zhou (2018), 'Achieving Human Parity on Automatic Chinese to English News Translation', Computing Research Repository arXiv:1803.05567.


Koby, G. S., P. Fields, D. Hague, A. Lommel and A. Melby (2014), 'Defining Translation Quality', Revista Tradumàtica: tecnologies de la traducció, 12: 413–20.
Lommel, A. (2018), 'Metrics for Translation Quality Assessment: A Case for Standardising Error Typologies', in J. Moorkens, S. Castilho, F. Gaspari and S. Doherty (eds), Translation Quality Assessment: From Principles to Practice, 109–27, Cham: Springer.
Moorkens, J., S. Castilho, F. Gaspari and S. Doherty, eds (2018), Translation Quality Assessment: From Principles to Practice, Cham: Springer.
Muegge, U. (2009), 'Controlled Language: Does My Company Need It?', tcworld, 2: 16–19.
Nielsen, M. A. (2015), Neural Networks and Deep Learning, Determination Press.
Ó Broin, U. (2009), 'Controlled Authoring to Improve Localization', MultiLingual, October/November 2009: 12–14.
Orrego-Carmona, D. and Y. Lee (2017), Non-professional Subtitling, Newcastle upon Tyne: Cambridge Scholars Publishing.
Sin-wai, C. (2017), The Future of Translation Technology: Towards a World without Babel, London: Routledge.
Stoll, C.-H. (1988), 'Translation Tools on PC', in C. Picken (ed.), Translating and the Computer 9: Potential and Practice, Proceedings of a conference jointly sponsored by Aslib, the Association for Information Management, the Aslib Technical Translation Group and the Institute of Translation and Interpreting, 12–13 November 1987, 11–26, London: Aslib.
TAUS (2016), TAUS Quality Dashboard: From Quality Evaluation to Business Intelligence, De Rijp: TAUS.
van der Meer, J., A. Görög, D. Dzeguze and D. Koot (2017), Measuring Translation Quality: From Translation Quality Evaluation to Business Intelligence, De Rijp: TAUS.
van der Meer, J. and A. Joscelyne (2017), Nunc est Tempus: Redesign Your Translation Business, Now!, De Rijp: TAUS BV.


14

Machine translation: Where are we at today?

Andy Way

1.  Introduction

Machine translation (MT) usage today is staggering. Consider Google Translate,1 which as of May 2016 was translating an average of 143 billion words a day – 20 words a day for every person on the planet, just for a single (albeit the largest) MT service provider – across 100 language combinations, a doubling in translation volume in just four years. This number alone means that MT quality is already 'good enough' for a range of use cases, so continuing to question the utility of MT is moot.

The aim of this chapter is to explain to translation/interpreting students and academics, professional translators and other industry stakeholders how MT works today, and how the field has altered in the last thirty years. I describe the underlying reasons why MT engine-building changed from being underpinned by grammatical rules to the situation today where it is almost entirely data-driven. While for some time most of the research in academia was corpus-based, the leading MT engines in industry remained almost wholly rule-based; this dichotomy has now largely disappeared, principally due to the introduction of the Moses statistical MT (SMT) toolkit (Koehn et al. 2007) and the subsequent rise of neural MT (NMT).

While SMT was for some time the dominant paradigm, a performance ceiling was reached relatively quickly, such that for the past ten years or so, MT system developers have been 'smuggling in' linguistic information in order to improve performance, as demonstrated by both automatic and human evaluation. Until just three or four years ago, SMT was undoubtedly state-of-the-art, but NMT has recently emerged and, in academic circles at least, appears to be so promising that many protagonists are already claiming it to have surpassed the performance of SMT. In this chapter, I will consider the extent to which it is appropriate at this juncture to make this call; SMT remains dominant among many translation providers, but the big players like Google Translate and Bing Translator2 have already launched NMT systems for many of their language pairs.

When SMT was launched, many practitioners advocated a 'pure' approach, where the strategy taken was 'let the data decide'; no data cleaning or annotation was countenanced, at least initially, so that whatever quality was obtained was due entirely to the intrinsic characteristics of the approach rather than to any pre-processing techniques. Nonetheless, as mentioned above, SMT system developers observed improvements in performance, as measured by automatic evaluation metrics, when introducing linguistic information into the engine-building process. With the advent of NMT, similarly 'pure' approaches are in vogue, but I question whether here, too, quality will improve only once syntactic, semantic and discourse features are integrated.

It is clear that human translators have for some time now been using translation memory (TM) systems (Heyn 1998) to good effect. Many researchers have demonstrated that SMT and TM can be integrated to improve translator productivity (e.g. Ma et al. 2011; Bulté et al. 2018), and these benefits now appear fairly regularly in today's industry-leading CAT tools. TM integration has yet to be done for NMT, and given that I expect TM technology to remain an essential tool in the translator's armoury for some time to come, I will consider how such integration might be brought about.
I will also discuss how MT quality is measured, the extent to which ‘traditional’ MT evaluation is equipped to demonstrate improvements delivered by NMT, what human evaluations are currently adding to the mix, and how emerging use cases where there is no place for human translators cause us to fundamentally question the notion of quality.
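The headline usage figure quoted at the start of this introduction can be sanity-checked with back-of-envelope arithmetic. The 7.4 billion world-population figure for 2016 is my assumption, used only to reproduce the 'words per person per day' claim:

```python
# Back-of-envelope check of the Google Translate usage figure quoted in
# the introduction: 143 billion words/day spread over roughly 7.4 billion
# people (an assumed 2016 world-population estimate).
words_per_day = 143e9
world_population = 7.4e9
print(round(words_per_day / world_population, 1))  # ~19.3 words/person/day
```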

2.  The rise and fall of different MT paradigms

In this section, I provide a brief history of the major paradigms that have been put forward in MT. As Figure 14.1 illustrates schematically, different MT models have been in vogue at different times. Early MT systems were entirely rule-based, but in the 1980s corpus-based models came along and became the state-of-the-art by the mid-1990s. In just the last few years, the advent of NMT has really shaken up both academia and the wider MT and translation industry, and now appears to have taken over the mantle from SMT as the dominant paradigm today. In the next sections, I describe briefly the mechanics of each of these system types, as well as how the field reacted when they were suggested as competing paradigms to the dominant approaches of the day.

Figure 14.1  Progress in machine translation over the years (Luong et al. 2016). [Figure: a timeline of MT quality gains, from the 1954 Georgetown-IBM demonstration and the 1966 ALPAC report, through METEO (1982), IBM statistical MT (Brown/Mercer, 1993), phrase-based SMT (Och, 2003) and Hiero (Chiang, 2005), to neural MT (2016), with many problems still remaining.]

2.1.  From rule-based MT to SMT

As I describe in Way (2009), when Peter Brown of IBM (at the time) stood up at TMI in Pittsburgh and again at COLING in Budapest in 1988 and presented SMT as an alternative to rule-based translation, significant players in traditional approaches to MT were astonished. Pierre Isabelle's reaction was, 'We were all flabbergasted. All throughout Peter's presentation, people were shaking their heads and spurting grunts of disbelief or even of hostility.' Harold Somers noted, 'The audience reaction was either incredulous, dismissive or hostile', while Walter Daelemans observed that 'the Leuven Eurotra people weren't very impressed by the talk and laughed it away as a rehash of "direct" (word-by-word) translation'.

Prior to Brown et al. (1988a,b), rule-based MT (RBMT) was divided into two camps: transfer-based MT and interlingual MT. The Vauquois Pyramid (see Figure 14.2) visualizes quite succinctly what was involved in building such systems, with the length of each arrow corresponding roughly to the amount of work done by each component.

Figure 14.2  The Vauquois Pyramid depicting the three main approaches to RBMT: direct translation, transfer-based and interlingual MT (Vauquois 1968).

Transfer and interlingual systems were both known as indirect, second-generation approaches to MT, and were compared to direct, first-generation MT systems. As can be seen in Figure 14.2, these latter did very little analysis ('parsing') of the source language or structural generation of the target language; that they worked at all was attributable to their very large bilingual dictionaries. Somewhat surprisingly, these systems enjoyed a relatively long shelf life, partly because they were very robust and, compared to indirect systems, always produced some output, which between 'similar' languages (e.g. Portuguese and Spanish) could often be very reasonable indeed.

In contrast, given that indirect systems depended on parsing (to different depths) the source-language input, they were explicitly designed to rule out ill-formed input. When I worked on Eurotra (King and Perschke 1984) from 1988 to 1991, we wrote explicit test suites (Arnold et al. 1993) containing well-formed sentences that the analysis component ought to parse correctly and pass on as an appropriate representation to the transfer stage, as well as ill-formed strings that the analysis stage should decree ungrammatical, causing further processing to cease. There were two main problems with such an approach: (i) there was a general assumption that people would always try to input well-formed sentences into an MT system; and (ii) given that the parser was based on a set of rules handcrafted by an expert linguist which, given the complexity of natural language, would inevitably be incomplete, the system could not tell the difference between a truly ill-formed string and well-formed input that simply was not covered by the set of linguistic rules in the grammar.

Note that in transfer-based systems, the three processes do 'about the same' amount of work: the source string is parsed into a syntactic (constituency or dependency) tree indicating the main actors in the sentence as well as any modifiers; this source-language representation is then passed to the transfer component per se, where appropriate lexical, syntactic and semantic rules generate a 'meaning-equivalent' target-language structure. This target

dependency tree is then input into the generation (or 'synthesis') phase, where a set of target-language rules tries to produce an appropriate translation. In contrast, in interlingual systems there was no explicit transfer phase, so the output from the deep analysis phase was exactly the same as the input to the deep synthesis phase. While this was very attractive in theory, it proved impossible to bring about in practice. Languages simply do not act the same way, with different languages having different ways of representing similar concepts. For example, English has to use the periphrastic expression 'to bake with cheese on top', while French has a single lexical item – gratiner – for the same concept. Assuming an interlingual system involving French, English and Japanese, the amount of work that would need to be done in (say) a French-to-English engine just because Japanese has different words for 'my mother', 'your mother' and 'mothers in general' would be wasteful, given that neither French nor English has different lexical entries for these concepts (Hutchins and Somers 1992).

At the onset of SMT in the late 1980s, it was clear what camp you were in: either the transfer camp or the interlingual camp. However, in Way (2009), I noted that these two camps quickly merged to form a de facto alliance against this arrogant statistical newcomer, which was set to undo all that they stood for. Despite this resistance, the language used early on by the new statistical practitioners was conciliatory, indicating a hope that the two communities would work together for the betterment of the discipline. I lamented then that this did not happen, and that this impaired the creation and adoption of the syntax-based systems that came onstream in the late 2000s.

Nonetheless, certainly by the mid-1990s, SMT had come to be dominant, largely due to the very influential IBM models laid out in Brown et al. (1993), one of the seminal papers in the field. At this time, however, most SMT was word-based, which was odd when one considers that example-based MT (Nagao 1984; Carl and Way 2003) had from its very inception considered the phrase – not the word – as the primary linguistic construct to be used as the unit of translation. Koehn, Och and Marcu (2003) demonstrated how SMT might work in a phrase-based manner, and with the advent of tools like Giza++ (Och and Ney 2003, for word alignment) and the Moses toolkit (including phrase alignment) in 2007, phrase-based SMT (PBMT) became the dominant paradigm for the next ten years.

Larger and larger amounts of SMT training data came onstream (e.g. Europarl; see Koehn 2005), which allowed better and better PBMT models to be built, but only for those languages and genres where sufficiently large sets of aligned source–target sentences existed. The licence issued with Moses allowed


it to be used commercially, so SMT systems were quickly deployed to good effect in real industrial scenarios.3 SMT was robust and capable of very good translation output, but suffered from problems such as the omission of target-language equivalents to parts of the source sentence (including, on occasion, really important words like not) and wrong target-language word order. As with RBMT, PBMT worked especially well between closely related languages, and much less well when translating into morphologically complex languages (like German).

It was certainly the case when SMT first came along that most system developers relied solely on larger and larger amounts of training data to deliver improvements in translation quality, as measured by automatic metrics like BLEU (Papineni et al. 2002), METEOR (Banerjee and Lavie 2005) and translation edit rate (TER) (Snover et al. 2006). However, little by little, SMT engine-builders began to realize that the only way to break through the performance ceiling – often a pretty good level of quality, however – was to integrate additional syntactic and semantic information (e.g. Chiang 2005). By 2015 or so, such linguistically informed PBMT systems were acknowledged to be the state-of-the-art in the field.
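The core idea of phrase-based SMT, assembling translations from phrasal chunks rather than single words, can be illustrated with a toy example. The tiny phrase table and the greedy longest-match segmentation below are invented for illustration; real systems learn millions of scored phrase pairs from aligned corpora and search over competing segmentations and reorderings with translation and language models:

```python
# Toy illustration of the phrase-table idea behind PBMT: translation is
# assembled from phrasal chunks rather than word by word. The phrase
# table and greedy segmentation are illustrative only; real PBMT decoders
# score and reorder competing hypotheses rather than matching greedily.

PHRASE_TABLE = {
    ("where", "is"): "où est",
    ("the", "train", "station"): "la gare",
    ("the",): "le",
}

def translate(words):
    output, i = [], 0
    while i < len(words):
        # Greedily match the longest known source phrase starting at i.
        for n in range(len(words) - i, 0, -1):
            phrase = tuple(words[i:i + n])
            if phrase in PHRASE_TABLE:
                output.append(PHRASE_TABLE[phrase])
                i += n
                break
        else:
            output.append(words[i])  # pass unknown words through
            i += 1
    return " ".join(output)

print(translate("where is the train station".split()))  # où est la gare
```

Note how the three-word chunk ("the", "train", "station") wins over the one-word entry ("the",): capturing multi-word units is exactly what made phrase-based models stronger than the earlier word-based IBM models.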

2.2.  From SMT to NMT

However, around this time, researchers (including many newcomers to the field of MT) started to demonstrate that NMT systems could be built with good performance. While the preferred system set-ups were not so different from those that had been conceived some time before (e.g. Forcada and Ñeco 1997), the hardware that facilitated the huge explosion in computation required was now sufficiently powerful to allow these systems to be built in practice. The first NMT systems started off using convolutional neural nets (Kalchbrenner and Blunsom 2013), but could not beat a PBMT baseline (named 'cdec'; see Dyer et al. 2010). Improvements were seen quite quickly with the first encoder-decoder frameworks (Sutskever, Vinyals and Le 2014), which were subsequently extended with a source-language attention model (Bahdanau, Cho and Bengio 2014). While further improvements have been seen in the interim, this set-up – an encoder-decoder model with attention – remains pretty much the state-of-the-art today.

What really disrupted the field were the results achieved by NMT systems at the International Workshop on Spoken Language Translation in 2015.4 Luong and Manning (2015) demonstrated clear wins over a range of different SMT systems for English-to-German, a significantly difficult language pair, in terms of automatic evaluation scores. Bentivogli et al. (2016) performed an in-depth human evaluation of exactly how the NMT model of Luong and Manning (2015) improved in terms of quality, noting that significantly fewer morphological, lexical and word-order errors were made compared to SMT. They also demonstrated that NMT lowered overall post-editing effort by about 25 per cent.

One of the main reasons why NMT improves on SMT across a range of use cases is that once the source sentence has been processed by the encoder, the full context of the sentence is available to the decoder for consideration as to which target-language words and phrases should be suggested as part of the translation. That is, all source words and their context – known as word embeddings (cf. Mikolov et al. 2013), i.e. how each word relates to the others in the particular sentence at hand – are encoded in a single numerical representation (a vector of numbers indicating the final state of the encoder), which is sent to the decoder to generate a target-language string. In SMT, a source sentence is only ever translated using lexical and phrasal chunks; unless it is very short, it is never translated en bloc. Clearly, having a window on the full source sentence is advantageous compared to a restriction of just a few words at a time, but managing all that information is not trivial.

The encoder-decoder architecture works well, but significant improvement came about when the source-language attention model was added. Rather than accepting that all source words are equally important in suggesting all target-language words, the attention model (similar to word and phrase alignments in SMT) demonstrates which source words are most relevant when it comes to hypothesizing target-language equivalents.
In practice, this means that each translation is generated from specific encoder states, with information from other words – perhaps some distance away from the current word of focus and of little or no relevance to its translation – being largely ignored.
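The attention computation described above can be sketched numerically: score the decoder state against every encoder state, softmax the scores into weights, and take the weighted sum of encoder states as the context for the next target word. The random vectors below are stand-ins for learned states, and the dot-product scoring is one simple variant (Bahdanau et al.'s original model uses a small learned network instead):

```python
# Numerical sketch of (dot-product) attention: the decoder state is
# compared with every encoder state, and a softmax over the scores tells
# the decoder which source positions matter most right now. Vectors here
# are random stand-ins for the learned states of a real NMT system.
import math
import random

random.seed(0)
dim, src_len = 4, 5
encoder_states = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(src_len)]
decoder_state = [random.gauss(0, 1) for _ in range(dim)]

# Dot-product score between the decoder state and each encoder state.
scores = [sum(d * e for d, e in zip(decoder_state, enc)) for enc in encoder_states]

# Softmax turns the scores into attention weights that sum to 1.
exps = [math.exp(s) for s in scores]
weights = [e / sum(exps) for e in exps]

# The context vector is the attention-weighted sum of encoder states:
# irrelevant source positions contribute almost nothing to it.
context = [sum(w * enc[k] for w, enc in zip(weights, encoder_states))
           for k in range(dim)]

print([round(w, 3) for w in weights])
```

The 'overly attentive' failure mode mentioned later in this chapter corresponds to one of these weights growing far too large, so that a single source word dominates the context vector.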

3.  Is NMT the new state-of-the-art?

While the study by Bentivogli et al. (2016) was significant and far-reaching, it has to be noted that it only examined one language pair (English-to-German) and one use case (TED talks). Further studies (e.g. Castilho et al. 2017) have shown that there are situations where PBMT can still beat NMT in terms of both human and automatic evaluation. It is widely recognized that much larger amounts of training data are needed for good NMT performance compared to SMT (cf. Koehn and Knowles 2017), and training and translation times remain much slower than for SMT. Nonetheless, many MT practitioners believe that NMT is – or at least will very soon be – the new state-of-the-art, to the extent that all MT papers in the very top academic conferences feature only NMT models, with Moses scores given only as comparative baseline levels of quality.

3.1.  How is MT quality measured?

In Way (2018), I note that there are three ways in which MT quality is typically measured: human evaluation, automatic evaluation and task-based evaluation. In the first, human raters are asked to rate MT output on (i) a (more or less) fine-grained numerical scale for fidelity (or accuracy, or adequacy) – the extent to which a translated text contains the same information as the source text – and (ii) intelligibility (or fluency) – the extent to which the output sentence is a well-formed example of the target language.5 While such evaluations are (usually) very informative, they are subjective, often inconsistent and take a long time to carry out.

Accordingly, as is often the case in MT, insights from speech recognition were brought to bear in this field too, in particular word error rate (WER; Levenshtein 1966) and position-independent word error rate (PER; see Tillmann et al. 1997). However, it wasn't until the BLEU metric came in that MT evaluation per se took off. BLEU (and NIST, which came along around the same time; see Doddington 2002) used different (but related) ways to compute the similarity between one or more human-supplied 'gold standard' references and the MT output string, based (largely) on n-gram co-occurrence.

In Way (2018), I describe a number of problems with such metrics, as well as others arising from their (mis)use in the field. I will not rehash those here, but ultimately an MT system needs to be used for a particular use case, which is where task-based evaluation comes in: who is the translation actually for? As I point out in that paper:

WMT evaluations regularly include specific tasks nowadays, including medical translation (e.g. Zhang et al. 2014), automatic post-editing (e.g. Chatterjee et al. 2015) and MT for the IT domain (e.g. Cuong et al. 2016).
We take this as evidence that the community as a whole is well aware of the fact that when evaluating MT quality, the actual use-case and utility of the translations therein need to be borne in mind. (Way 2018: 170)
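Of the automatic metrics mentioned above, WER is the simplest to make concrete: the word-level Levenshtein edit distance between MT output and a reference, divided by the reference length. The sketch below is a bare-bones version; real implementations also report substitution, insertion and deletion counts separately:

```python
# Minimal word error rate (WER): word-level Levenshtein edit distance
# between the MT hypothesis and a reference, normalized by reference
# length. A bare-bones sketch of the metric discussed above.

def wer(hypothesis: str, reference: str) -> float:
    hyp, ref = hypothesis.split(), reference.split()
    # d[i][j] = edits to turn the first i hyp words into the first j ref words
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(hyp)][len(ref)] / len(ref)

# One missing word ('the') against a six-word reference: WER = 1/6.
print(wer("the cat sat on mat", "the cat sat on the mat"))
```

PER, by contrast, ignores word order entirely, which is why the two metrics can disagree sharply on reordering errors of the kind SMT was prone to.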


3.2.  Does MT evaluation need to change with NMT coming onstream?

It has to be acknowledged that the problem of MT quality assessment is unsolved, and research efforts are ongoing to improve on the metrics commonly used today. One question worth asking is the extent to which such metrics are sufficiently discriminative to demonstrate accurately the real improvement that NMT offers over SMT. The translational improvements discovered by Bentivogli et al. (2016) are astonishing, especially bearing in mind that PBMT had been the dominant paradigm for twenty-five years or so, and that NMT has only come in as a realistic alternative in the past four years. In my opinion, n-gram-based metrics such as BLEU significantly underplay the real benefit to be seen when NMT output is evaluated. As I note in Way (2018), it simply cannot be the case that a 2-point improvement in BLEU score – almost an irrelevance in a real industrial translation use case – which was typically seen at WMT-2016, where NMT systems swept the board on all tasks and language pairs (Sennrich, Haddow and Birch 2016), can be reflective of the improvements in word order and lexical selection noted by Bentivogli et al. (2016). Note that Shterionov et al. (2018) actually computed the degree to which three popular automatic evaluation metrics – BLEU, METEOR and TER – underestimate quality, showing that for NMT this may be up to 50 per cent. Metrics such as ChrF (Popović 2015), which operate at the character level – or combinations of word- and character-based models (e.g. Chung, Cho and Bengio 2016; Luong and Manning 2016)6 – may be a move in the right direction, but the field will doubtless see new metrics tuned particularly to NMT in the very near future.
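The character-level idea behind ChrF can be sketched as precision and recall over character n-grams. This toy version uses a single n and a balanced F1 purely to show the mechanism; the real metric averages over several n-gram orders and weights recall more heavily than precision:

```python
# Sketch of a character-level metric in the spirit of ChrF (Popović 2015):
# precision/recall over character n-grams. A toy version with a single
# n-gram order and balanced F1; the published metric averages over
# multiple orders and uses a recall-weighted F-score.
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def char_fscore(hypothesis: str, reference: str, n: int = 3) -> float:
    hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
    overlap = sum((hyp & ref).values())  # clipped n-gram matches
    if not overlap:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(char_fscore("ein Haus", "ein Haus"), 2))   # 1.0
print(round(char_fscore("ein Hause", "ein Haus"), 2))  # partial credit
```

Because it matches substrings rather than whole words, a metric like this gives partial credit for near-miss inflections ('Hause' vs 'Haus'), which is precisely why character-level scoring suits morphologically rich languages.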

3.3.  Is the translation industry ready to provide NMT?

Let us suppose that NMT either already is the new state-of-the-art in terms of MT quality or very soon will be. The big free online players – Google Translate and Bing Translator – have both switched over at least some of their language pairs to NMT models. Note that Amazon AWS only very recently announced its own NMT service,7 so there is no doubt that where the largest multinational companies are concerned, the decision has been made to throw in their lot with NMT. Accordingly, those language service providers (LSPs) who rely on online MT provided by third parties such as these will already have benefitted from the improvements in quality afforded by NMT.

But what about those MT providers who have developed services in-house around the Moses platform? I have already noted that neural MT engine training times are much slower – typically of the order of several weeks – than their SMT counterparts, so much so that people are now claiming PBMT training times to be fast, although of course nothing has changed in that regard. It is simply the case that, in comparison, NMT model training is incredibly slow, with billions of mathematical optimizations needed until the neural net converges to its optimal set-up. I have also noted that training data typically an order of magnitude larger are needed to train a good NMT model compared to PBMT, and it is a fact that these datasets do not exist for almost all industry clients. In addition, the hardware needed to train an NMT system is expensive; we do this using GPU chips, each of which contains thousands of cores that can perform calculations in parallel. Assuming most suppliers of customized MT engines do not have such hardware in-house, but rather rely on cloud-based services, the cost of additional MT engine training will have to be passed on to clients, although the latter should see most if not all of this returned by the huge improvements in MT quality and the resultant decrease in post-editing effort required.

Those forward-thinking translators who have already integrated MT into their pipeline should benefit immediately from the improvements in MT quality to be seen. As I noted in the previous section, current MT quality assessment metrics are insufficiently discriminative to provide a realistic representation of the absolute improvement in quality seen with NMT, so it is open to doubt whether LSPs will be able to reflect this better quality in terms of higher levels of TM fuzzy matches (Sikes 2007), with the concomitant reductions in pay to translators who are post-editing MT output.
It seems to me that this is a good time for translators who have yet to use MT in their translation workflow to consider doing so without delay, as their productivity should rise pretty quickly, while LSPs are still tied in to post-editing rates of pay related to SMT. MT has been integrated very well now with existing TM tools, with TM matches above well-defined thresholds being suggested to translators for post-editing, and MT used for all segments below such thresholds.8 NMT should not make too much of a difference here, except that, even more so than SMT, NMT output can be deceptively fluent. Sometimes perfect target-language sentences are output, and less thorough translators and proofreaders may be seduced into accepting such translations, despite the fact that they may not be related to the source sentence at hand at all. In contrast, when the attention model places too much focus on particular source-language words, errors such as those in Figure 14.3 can be seen. Fortunately, these are easy to spot.
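The TM/MT integration just described can be sketched in a few lines: segments whose best fuzzy-match score clears an agreed threshold are offered to the translator as TM matches, and everything below it is routed to MT for post-editing. This is an illustrative Python sketch only; the function name, the threshold value and the example scores are invented for the example, not drawn from any particular tool.

```python
# Route each segment to TM or MT depending on its best fuzzy-match score,
# as described in the text. Threshold and scores are hypothetical.

def route_segment(fuzzy_score, threshold=75):
    """Return which proposal the translator should post-edit."""
    return "TM" if fuzzy_score >= threshold else "MT"

segments = [("Press the red button.", 92),
            ("The warranty is void if opened.", 74),
            ("Neural nets need lots of data.", 10)]

routing = [(text, route_segment(score)) for text, score in segments]
# The first segment is offered as a TM fuzzy match; the other two go to MT.
```

In practice the threshold itself is a commercial decision, which is why the text notes that post-editing rates of pay are tied to where it is set.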

Machine Translation: Where Are We at Today?


Figure 14.3  Google NMT error due to an overly attentive Attention Model (18 July 2017).

Finally, on the subject of quality, for a long time SMT models failed to deliver good enough quality for English-to-Japanese for Japanese translators to even consider post-editing MT output. Mike Dillinger (personal communication) now informs me that the quality seen from NMT is leading them to reconsider, and the sorts of questions being asked are exactly the same as those raised years ago in the context of European languages.

4. Informing research through the industry

As I stated at the outset, MT has never been as popular, and pretty much everyone in the industry knows that they have to embrace it as an enabling technology. Rather than outsource their MT requirements, companies like Google, Microsoft, Facebook, eBay and Amazon have (understandably) been recruiting leading academics to build their own internal MT offerings for some time now. However, the result is that the relatively few MT centres of excellence that existed five years ago have become even rarer. I also took three years' leave of absence to build industry-leading customized MT engines for two translation companies in the UK, but decided to return to my academic position to keep the MT team at my university together. One knock-on benefit to academic MT teams like my own is that excellent staff can be recruited from disbanded academic MT teams. However, at the same time, professionals with artificial intelligence (AI) and machine learning (ML) skills are highly prized, and the discrepancy in rates of pay between academia and industry, which has always existed, is widening at a rate of knots.9 Despite taking on leaders of academic teams into their companies, industry leaders are in the same breath bemoaning the fact that they are unable to recruit MT developers, as there are not enough trained experts coming from academic programmes to fill all the vacancies currently available. But they cannot have it both ways: if they recruit the leaders of large, renowned academic groups, who used to train the MT developers of tomorrow, they should not be surprised when the number of such potential recruits falls away. What MT academics want, therefore, is for industry to petition government to obtain more support for MT, AI and ML in academia, so that the industrial community can be served to our mutual advantage.10 Note, however, that this cannot all be centrally funded by government. While there is no doubt that attracting research hubs of multinational companies pays off considerably – not just in terms of direct employment and return to the exchequer but also as it pertains to ancillary services – if those companies want a steady stream of suitably equipped new staff with up-to-date skill sets trained by the best available lecturing staff, then they too will need to (at least in part) pay towards the tools and services required for their education. As this problem resolves itself, further and deeper collaboration between industry and research is likely to be seen. More and more researchers are interested not just in academic publications, but in solving real problems that are of benefit both to industry and to their fellow academics.
While many authors of papers at leading conferences in the field seldom consider potential end-users, it was recently announced that, from 2018, the North American branch of the Association for Computational Linguistics (NAACL) will feature an industry track, focusing on disseminating results which apply cutting-edge research to real-world problems. While plenty of such work exists already (e.g. Wang et al. 2016; Calixto, Liu and Campbell 2017), anything which explicitly encourages more researchers to focus on truly impactful endeavours, as opposed to work of strictly academic value, is to be welcomed.

5. Informing the industry through research

There are differing views on whether users of a technology need to know the principles on which it is founded in order to (i) understand how the outputs are formed, and (ii) try to improve the underlying technology. Assuming that knowing how an MT system is built is useful, there is no doubt that non-experts found the principles of SMT hard to understand. In two companion papers (Hearne and Way 2011; Way and Hearne 2011), we provided an explanation of SMT for linguists and translators which attracted positive feedback. Accordingly, translators that have already embraced MT have just about gotten their heads around SMT and how it works, but now NMT looks like eclipsing that framework. While PBMT quickly consolidated around the Moses toolkit, in contrast there is a proliferation of deep neural net tools in existence which NMT developers can use, including Tensorflow,11 OpenNMT,12 PyTorch13 and Nematus.14 Again, unsurprisingly, many non-experts – even those who have been around the language industry for some time – find recent research papers on NMT unintelligible. I have already provided a high-level explanation of how an NMT system works in Section 2.2, and I hope that my description of the encoder-decoder system with attention is understandable to a broad audience; interested readers should consult Forcada (2017) for another explanation of NMT for non-experts.

NMT is just one example of sequence-to-sequence learning using a neural network; others include text summarization and speech recognition. Essentially the neural network comprises (i) a set of input nodes, (ii) a set of hidden nodes (in one or more hidden layers; if there is more than one, the network is said to be ‘deep’, hence ‘deep learning’), and (iii) a set of output nodes, as in Figure 14.4. Each input node is connected to each hidden unit, and each hidden unit is connected to each output node; if there is more than one hidden layer, then each node in one hidden layer is connected to all the nodes in the next hidden layer, and so on. The mathematical complexity of deep learning comes about as the weight (or importance) of every connection between every node needs to be optimized.
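The fully connected layout just described can be made concrete with a short forward pass in Python/NumPy. The layer sizes, the random weights and the choice of sigmoid activation are arbitrary illustrative assumptions, not a description of any production NMT system.

```python
import numpy as np

# One forward pass through a tiny fully connected network: every input
# node feeds every hidden node, and every hidden node feeds every output
# node. All sizes and weights here are illustrative only.

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 3, 2

W1 = rng.normal(size=(n_in, n_hidden))   # input -> hidden connections
W2 = rng.normal(size=(n_hidden, n_out))  # hidden -> output connections

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.0, 0.25, 2.0])     # one input vector
hidden = sigmoid(x @ W1)                 # activation of the hidden layer
output = sigmoid(hidden @ W2)            # activation of the output nodes
```

Each matrix multiplication realizes the "every node connected to every node in the next layer" description above, which is also why GPUs, optimized for matrix arithmetic, suit this workload.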
One way of doing this in the training phase is to (i) initially assign random weights to each connection,15 (ii) supply a set of inputs where the outputs are known (e.g. if a vectoral representation of an image of a bird is input, the neural net can be expected to predict the label ‘bird’, not ‘dog’), and (iii) percolate (via ‘back-propagation’; see Rumelhart, Hinton and Williams 1986) any errors back through the network until the weights are optimally adjusted and no further improvement in accuracy can be seen. The neural net is then frozen, and new inputs are provided to the system (in the testing phase) and the accuracy is evaluated by observing how often the neural net accurately predicts the correct label.
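Training steps (i)–(iii) above can likewise be sketched on a toy labelled dataset. The sketch below uses NumPy, random initial weights, the XOR problem as a stand-in for "inputs where the outputs are known", and plain back-propagation; every detail (learning rate, layer sizes, iteration count) is an illustrative assumption, not a recipe for training real NMT systems.

```python
import numpy as np

# Steps (i)-(iii) from the text: (i) random initial weights, (ii) inputs
# with known outputs (here, XOR), (iii) errors back-propagated until the
# network's predictions improve. All hyperparameters are illustrative.

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # known labels

W1 = rng.normal(size=(2, 4))                      # (i) random weights
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1)
    return h, sigmoid(h @ W2)

_, out0 = forward(X)
initial_mse = np.mean((out0 - y) ** 2)            # error before training

for _ in range(5000):                             # many small weight tweaks
    h, out = forward(X)
    d_out = (out - y) * out * (1 - out)           # (iii) back-propagate
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

_, out = forward(X)
final_mse = np.mean((out - y) ** 2)               # error after training
```

The "freeze, then test" phase described above would then apply these fixed weights to unseen inputs and count how often the prediction is correct.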


Most people acknowledge that MT is one of the hardest problems we are trying to address in computer science, as so many inputs (each word in each sentence) are required, many hidden units in many hidden layers are needed, and many outputs (i.e. possible translations) may be observed. Note too that unlike the feed-forward neural network in Figure 14.4, most state-of-the-art networks are recurrent, meaning that some units are linked to themselves; this permits inputs and outputs of any size, whereas feed-forward networks allow only fixed-length inputs and outputs. Accordingly, it can be seen quite quickly that the number of calculations is astronomical. Billions of tweaks of the weights are needed before the optimal configuration of the neural net is arrived at, and no further improvement can be seen. This is why GPUs – excellent at performing calculations on matrices, which are standard structures used in neural processing – are needed for network training, although decoding can be run fairly smoothly on CPUs. The mathematical underpinning of neural networks is fairly hair-raising, but I anticipate that the description of neural nets provided together with my description of the state-of-the-art NMT model today and the explanation provided in Forcada (2017) will suffice for most translators to know how the whole set-up works, and how they may help deep-learning engineers improve their systems. At the same time, I trust that this will prove useful to other industry players not au fait with the current technology, in order for them to consider using NMT to underpin their translation services.

Figure 14.4  A schematic depiction of a feed-forward neural network.
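The difference between the feed-forward network of Figure 14.4 and a recurrent one can be shown in a few lines: because the same weights are reapplied at every time step, with the hidden state looping back into itself, the recurrence accepts inputs of any length while always producing a fixed-size state. All sizes and weights below are illustrative assumptions.

```python
import numpy as np

# A single recurrent unit: the hidden state h is fed back into itself at
# every step, so the same small set of weights handles sequences of any
# length. Sizes and weights are illustrative only.

rng = np.random.default_rng(2)
d_in, d_hidden = 3, 5

W_x = rng.normal(size=(d_in, d_hidden))      # input -> hidden
W_h = rng.normal(size=(d_hidden, d_hidden))  # hidden -> hidden (the loop)

def encode(sequence):
    """Run the recurrence over a sequence of any length."""
    h = np.zeros(d_hidden)
    for x in sequence:                       # one step per input token
        h = np.tanh(x @ W_x + h @ W_h)
    return h                                 # fixed-size summary vector

short = encode(rng.normal(size=(2, d_in)))   # a 2-step input sequence
long = encode(rng.normal(size=(9, d_in)))    # a 9-step input sequence
# Both yield a vector of the same size, whatever the input length.
```

This is the property that lets an NMT encoder consume sentences of arbitrary length, where a feed-forward network would require fixed-length inputs.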


6. Concluding remarks

This chapter has undertaken several challenges: (i) to discuss different MT paradigms, (ii) to argue which of those paradigms might be considered state-of-the-art today, (iii) to explain to non-experts how neural MT works, (iv) to discuss whether today’s automatic evaluation metrics are sufficiently fine-grained to accurately reflect the dramatic improvements we have seen recently in terms of MT quality, and (v) to reflect on the relationship between academia and industry in the field of MT today.

We can conclude that if NMT is not already the state-of-the-art in the field, it certainly has the potential to become so, and very soon. Accordingly, I trust that the description of the underlying deep-learning technology and the state-of-the-art NMT configuration may benefit a wide range of non-experts, who might be struggling to understand how this new paradigm actually works, why it outperforms PBMT, but also what problems remain to be solved. These can be examined both from a research perspective – including providing fine-grained MT evaluation metrics to accurately reflect the considerable improvement in MT quality that has recently been seen – and an optimization point of view, especially in terms of improving NMT engine training times. By providing an insight into how academia and industry need to help each other in these turbulent times, I anticipate that this chapter will contribute to the building of stronger bridges between academic research and the language industry. As we concluded in Way and Hearne (2011), this collaboration is sorely needed if the field as a whole is to benefit. While SMT appears to have only a limited future, with NMT having emerged as the dominant force in MT, such collaboration is needed more than ever. It is encouraging to see that the lessons learned by SMT practitioners regarding the improvements to be seen by incorporating linguistic information seem to be being taken on board by at least some NMT practitioners (cf. García-Martínez, Barrault and Bougares 2016; Sennrich and Haddow 2016).

With the considerable improvements in MT quality that have been seen in recent times has come an increase in hype, most notably from journalists, most of whom do not understand how the technology works, but also from MT developers such as the Google and Microsoft NMT teams. The claim by Wu et al. (2016: 1) that Google NMT was ‘bridging the gap between human and machine translation [quality]’ led to considerable hyperbole and hysteria from different camps, which was amplified recently by the claim by Hassan et al. (2018) that Microsoft had ‘achieved human parity’ in terms of translation quality.16 Those of us who have seen many paradigms come and go know that overgilding the lily does none of us any good, especially those of us who have been trying to build bridges between MT developers and the translation community for many years. The human-in-the-loop will always remain the most important link in the chain, at least where dissemination of translated material is required. All that MT system developers are trying to do is improve the output from their systems to make technology-savvy translators more productive. MT systems are unlikely ever to ‘bridge the gap’ or achieve human parity with human-quality translation. Just because a new paradigm is in vogue, it does not mean that MT has become easy, or a solved problem, as some would like to make out (e.g. Goodfellow, Bengio and Courville 2016: 473). Let us see how many of the newcomers to MT are still here in a decade; my prediction is that a good percentage of them will indeed discover that MT is too difficult, and that certain problems remain hard to solve, just as they always have been, which is why translators are very much still needed, and always will be.

Acknowledgements

Many thanks indeed to the anonymous reviewers of this chapter, as well as to Mikel Forcada, whose comments served to improve this chapter considerably. This work has been supported by the ADAPT Centre for Digital Content Technology, which is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund.

Notes

1 https://translate.google.com
2 https://www.bing.com/translator
3 The first commercial system, LanguageWeaver, was based not on Moses but rather on the SMT models of Kevin Knight and Dan Marcu at ISI (cf. Benjamin et al. 2003).
4 http://workshop2015.iwslt.org/
5 Here I give primacy to the terms originally used in the ALPAC report (Pierce et al. 1966), with more usual terms given in parentheses.
6 In order to mitigate the problem of unknown words, character-based NMT models were proposed; if a word is unknown at the level of the lemma, some translation knowledge may be available at the subword level. Passban, Liu and Way (2018) demonstrate that splitting lemmas into roots and morphemes in a principled linguistic manner outperforms such arbitrary subword models.
7 https://aws.amazon.com/translate/
8 Moorkens and Way (2016) discuss the extent to which translation jobs should be carved up in this way, as well as how MT output is significantly preferred to TM matches when fuzzy-match thresholding is removed.
9 https://www.nytimes.com/2017/10/22/technology/artificial-intelligence-experts-salaries.html
10 This is starting to happen in my own country, Ireland: https://irishtechnews.ie/irelands-first-industry-driven-masters-in-artificial-intelligence-is-launched/
11 https://www.tensorflow.org/
12 http://opennmt.net
13 https://pytorch.org/
14 https://github.com/rsennrich/nematus
15 Including to ‘bias’ nodes, which are connected to each hidden unit to prevent that hidden unit from being ‘switched off’ in case of a zero-sum input.
16 Even more recently, SDL announced on 19 June 2018 that they had ‘cracked Russian to English Neural Machine Translation’: https://www.sdl.com/about/news-media/press/2018/sdl-cracks-russian-to-english-neural-machine-translation.html. As with the other similar claims, this has met with some incredulity on social media platforms, including by this author (@tarfandy).

References

Arnold, D., D. Moffat, L. Sadler and A. Way (1993), ‘Automatic Generation of Test Suites’, Machine Translation, 8: 29–38.
Bahdanau, D., K. Cho and Y. Bengio (2014), ‘Neural Machine Translation by Jointly Learning to Align and Translate’, eprint arXiv:1409.0473 (https://arxiv.org/abs/1409.0473).
Banerjee, S. and A. Lavie (2005), ‘METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments’, in ACL 2005, Proceedings of the Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization at the 43rd Annual Meeting of the Association for Computational Linguistics, 65–72, Ann Arbor.
Benjamin, B., L. Gerber, K. Knight and D. Marcu (2003), ‘Language Weaver: The Next Generation of Machine Translation’, MT Summit IX, 229–31, New Orleans.
Bentivogli, L., A. Bisazza, M. Cettolo and M. Federico (2016), ‘Neural versus Phrase-Based Machine Translation Quality: A Case Study’, in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 257–67, Austin.


Brown, P., J. Cocke, S. Della Pietra, V. Della Pietra, F. Jelinek, R. Mercer and P. Roossin (1988a), ‘A Statistical Approach to French/English Translation’, in Second International Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages, 810–28, Pittsburgh.
Brown, P., J. Cocke, S. Della Pietra, V. Della Pietra, F. Jelinek, R. Mercer and P. Roossin (1988b), ‘A Statistical Approach to Language Translation’, in Coling Budapest: Proceedings of the 12th International Conference on Computational Linguistics (Vol. 1), 71–6, Budapest.
Brown, P., S. Della Pietra, V. Della Pietra and R. Mercer (1993), ‘The Mathematics of Statistical Machine Translation: Parameter Estimation’, Computational Linguistics, 19 (2): 263–311.
Bulté, B., T. Vanallemeersch and V. Vindeghinste (2018), ‘M3TRA: Integrating TM and MT for Professional Translators’, in EAMT2018, Proceedings of the 21st Annual Conference of the European Association for Machine Translation, 69–78, Alacant.
Calixto, I., Q. Liu and N. Campbell (2017), ‘Doubly-Attentive Decoder for Multimodal Neural Machine Translation’, in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 1913–24, Vancouver.
Carl, M. and A. Way, eds (2003), Recent Advances in Example-Based Machine Translation, Dordrecht: Kluwer Academic Publishers.
Castilho, S., J. Moorkens, F. Gaspari, I. Calixto, J. Tinsley and A. Way (2017), ‘Is Neural Machine Translation the New State-of-the-Art?’, Prague Bulletin of Mathematical Linguistics, 108: 109–20.
Chatterjee, R., M. Turchi and M. Negri (2015), ‘The FBK Participation in the WMT15 Automatic Post-editing Shared Task’, in Proceedings of the Tenth Workshop on Statistical Machine Translation, 210–5, Lisbon.
Chiang, D. (2005), ‘A Hierarchical Phrase-based Model for Statistical Machine Translation’, in ACL-2005: 43rd Annual Meeting of the Association for Computational Linguistics, 263–70, Ann Arbor.
Chung, J., K. Cho and Y. Bengio (2016), ‘A Character-level Decoder without Explicit Segmentation for Neural Machine Translation’, in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 1693–703, Berlin.
Cuong, H., S. Frank and K. Sima’an (2016), ‘ILLC-UvA Adaptation System (Scorpio) at WMT’16 IT-DOMAIN Task’, in Proceedings of the First Conference on Machine Translation, 423–27, Berlin.
Doddington, G. (2002), ‘Automatic Evaluation of Machine Translation Quality Using N-gram Co-occurrence Statistics’, in HLT 2002: Human Language Technology Conference: Proceedings of the Second International Conference on Human Language Technology Research, 138–45, San Diego.
Dyer, C., A. Lopez, J. Ganitkevitch, J. Weese, F. Ture, P. Blunsom, H. Sewatian, V. Eidelman and P. Resnik (2010), ‘cdec: A Decoder, Alignment, and Learning Framework for Finite-state and Context-free Translation Models’, in Proceedings of the ACL 2010 System Demonstrations, 7–12, Uppsala.
Forcada, M. (2017), ‘Making Sense of Neural Machine Translation’, Translation Spaces, 6 (2): 291–309.
Forcada, M. and R. Ñeco (1997), ‘Recursive Hetero-associative Memories for Translation’, in Biological and Artificial Computation: From Neuroscience to Technology (International Work-Conference on Artificial and Natural Neural Networks, IWANN’97, Proceedings), 453–62, Lanzarote.
García-Martínez, M., L. Barrault and F. Bougares (2016), ‘Factored Neural Machine Translation’, eprint arXiv:1609.04621 (https://arxiv.org/abs/1609.04621).
Goodfellow, I., Y. Bengio and A. Courville (2016), Deep Learning, Cambridge: MIT Press.
Hassan, H., A. Aue, C. Chen, V. Chowdhary, J. Clark, C. Federmann, X. Huang, M. Junczys-Dowmunt, W. Lewis, M. Li, S. Liu, T-Y. Liu, R. Luo, A. Menezes, T. Qin, F. Seide, X. Tan, F. Tian, L. Wu, S. Wu, Y. Xia, D. Zhang, Z. Zhang and M. Zhou (2018), ‘Achieving Human Parity on Automatic Chinese to English News Translation’, eprint arXiv:1803.05567 (https://arxiv.org/abs/1803.05567).
Hearne, M. and A. Way (2011), ‘Statistical Machine Translation: A Guide for Linguists and Translators’, Language and Linguistics Compass, 5: 205–26.
Heyn, M. (1998), ‘Translation Memories – Insights & Prospects’, in L. Bowker, M. Cronin, D. Kenny and J. Pearson (eds), Unity in Diversity? Current Trends in Translation Studies, 123–36, Manchester: St. Jerome.
Hutchins, W. J. and H. L. Somers (1992), An Introduction to Machine Translation, London: Academic Press.
Kalchbrenner, N. and P. Blunsom (2013), ‘Recurrent Continuous Translation Models’, in Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, 1700–9, Seattle.
King, M. and S. Perschke (1984), ‘EUROTRA’, in M. King (ed.), Machine Translation Today: The State of the Art. Proceedings of the Third Lugano Tutorial, 373–91, Edinburgh: Edinburgh University Press.
Koehn, P. (2005), ‘Europarl: A Parallel Corpus for Statistical Machine Translation’, in MT Summit X, Conference Proceedings: The Tenth Machine Translation Summit, 79–86, Phuket.
Koehn, P., H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin and E. Herbst (2007), ‘Moses: Open Source Toolkit for Statistical Machine Translation’, in ACL 2007: Proceedings of Demo and Poster Sessions, 177–80, Prague.
Koehn, P. and R. Knowles (2017), ‘Six Challenges for Neural Machine Translation’, in Proceedings of the First Workshop on Neural Machine Translation, 28–39, Vancouver.
Koehn, P., F. Och and D. Marcu (2003), ‘Statistical Phrase-based Translation’, in HLT-NAACL 2003: Conference Combining Human Language Technology Conference Series and the North American Chapter of the Association for Computational Linguistics Conference Series, 48–54, Edmonton.


Levenshtein, V. I. (1966), ‘Binary Codes Capable of Correcting Deletions, Insertions, and Reversals’, Soviet Physics Doklady, 10 (8): 707–10.
Luong, M-T., K. Cho and C. Manning (2016), ‘Neural Machine Translation’, tutorial presented at the 54th Annual Meeting of the Association for Computational Linguistics, Berlin (https://nlp.stanford.edu/projects/nmt/Luong-Cho-Manning-NMT-ACL2016-v4.pdf).
Luong, M-T. and C. Manning (2015), ‘Stanford Neural Machine Translation Systems for Spoken Language Domains’, in Proceedings of the 12th International Workshop on Spoken Language Translation, 76–9, Da Nang.
Luong, M-T. and C. Manning (2016), ‘Achieving Open Vocabulary Neural Machine Translation with Hybrid Word-Character Models’, in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 1054–63, Berlin.
Ma, Y., Y. He, A. Way and J. Van Genabith (2011), ‘Consistent Translation Using Discriminative Learning: A Translation Memory-inspired Approach’, in ACL-HLT 2011: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, 1239–48, Portland.
Mikolov, T., K. Chen, G. Corrado and J. Dean (2013), ‘Efficient Estimation of Word Representations in Vector Space’, in Proceedings of a Workshop at the International Conference on Learning Representations (ICLR-13), Scottsdale.
Moorkens, J. and A. Way (2016), ‘Comparing Translator Acceptability of TM and SMT Outputs’, Baltic Journal of Modern Computing, 4 (2): 141–51.
Nagao, M. (1984), ‘A Framework of a Mechanical Translation between Japanese and English by Analogy Principle’, in A. Elithorn and R. Banerji (eds), Artificial and Human Intelligence: Edited Review Papers Presented at the International NATO Symposium, 173–80, Amsterdam: North Holland.
Och, F. and H. Ney (2003), ‘A Systematic Comparison of Various Statistical Alignment Models’, Computational Linguistics, 29 (1): 19–51.
Papineni, K., S. Roukos, T. Ward and W. J. Zhu (2002), ‘BLEU: A Method for Automatic Evaluation of Machine Translation’, in ACL-2002: 40th Annual Meeting of the Association for Computational Linguistics, 311–18, Philadelphia.
Passban, P., Q. Liu and A. Way (2018), ‘Improving Character-based Decoding Using Target-Side Morphological Information for Neural Machine Translation’, in Proceedings of HLT-NAACL 2018, the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 58–68, New Orleans.
Pierce, J., J. Carroll, E. Hamp, D. Hays, C. Hockett, A. Oettinger and A. Perlis (1966), Language and Machines – Computers in Translation and Linguistics, ALPAC report, Washington: National Academy of Sciences, National Research Council.
Popović, M. (2015), ‘ChrF: Character N-gram F-score for Automatic MT Evaluation’, in Proceedings of the Tenth Workshop on Statistical Machine Translation, 392–95, Lisbon.


Rumelhart, D. E., G. E. Hinton and R. J. Williams (1986), ‘Learning Representations by Back-propagating Errors’, Nature, 323: 533–6.
Sennrich, R. and B. Haddow (2016), ‘Linguistic Input Features Improve Neural Machine Translation’, in Proceedings of the First Conference on Machine Translation, 83–91, Berlin.
Sennrich, R., B. Haddow and A. Birch (2016), ‘Edinburgh Neural Machine Translation Systems for WMT 16’, in Proceedings of the First Conference on Machine Translation, 371–6, Berlin.
Shterionov, D., R. Superbo, P. Nagle, L. Casanellas, T. O’Dowd and A. Way (2018), ‘Human vs. Automatic Quality Evaluation of NMT and PBSMT’, Machine Translation, 32 (3): 217–35.
Sikes, R. (2007), ‘Fuzzy Matching in Theory and Practice’, Multilingual, 18 (6): 39–43.
Snover, M., B. Dorr, R. Schwartz, L. Micciulla and J. Makhoul (2006), ‘A Study of Translation Edit Rate with Targeted Human Annotation’, in AMTA 2006, Proceedings of the 7th Conference of the Association for Machine Translation in the Americas, 223–31, Cambridge.
Sutskever, I., O. Vinyals and Q. V. Le (2014), ‘Sequence to Sequence Learning with Neural Networks’, in NIPS 2014: Advances in Neural Information Processing Systems, 3104–12, Montréal.
Tillmann, C., S. Vogel, H. Ney, H. Sawaf and A. Zubiaga (1997), ‘Accelerated DP-based Search for Statistical Translation’, in Proceedings of the 5th European Conference on Speech Communication and Technology (EuroSpeech ’97), 2667–70, Rhodes.
Vauquois, B. (1968), ‘A Survey of Formal Grammars and Algorithms for Recognition and Transformation in Machine Translation’, in Information Processing 68, Proceedings of IFIP Congress 1968, 254–60, Edinburgh.
Wang, L., Z. Tu, X. Zhang, H. Li, A. Way and Q. Liu (2016), ‘A Novel Approach to Dropped Pronoun Translation’, in NAACL HLT 2016: Proceedings of the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 983–93, San Diego.
Way, A. (2009), ‘A Critique of Statistical Machine Translation’, Journal of Translation and Interpreting Studies: Special Issue on Evaluation of Translation Technology, 8: 17–41.
Way, A. (2018), ‘Quality Expectations of Machine Translation’, in J. Moorkens, S. Castilho, F. Gaspari and S. Doherty (eds), Translation Quality Assessment: From Principles to Practice, 159–78, Berlin: Springer.
Way, A. and M. Hearne (2011), ‘On the Role of Translations in State-of-the-Art Statistical Machine Translation’, Language and Linguistics Compass, 5: 227–48.
Wu, Y., M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, J. Klingner, A. Shah, M. Johnson, X. Liu, L. Kaiser, S. Gouws, Y. Kato, T. Kudo, H. Kazawa, K. Stevens, G. Kurian, N. Patil, W. Wang, C. Young, J. Smith, J. Riesa, A. Rudnick, O. Vinyals, G. Corrado, M. Hughes and J. Dean (2016), ‘Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation’, eprint arXiv:1609.08144 (https://arxiv.org/abs/1609.08144).
Zhang, J., X. Wu, I. Calixto, A. Hosseinzadeh Vahid, X. Zhang, A. Way and Q. Liu (2014), ‘Experiments in Medical Translation Shared Task at WMT 2014’, in Proceedings of WMT 2014: the Ninth Workshop on Statistical Machine Translation, 260–5, Baltimore.

15

Pre-editing and post-editing
Ana Guerberof Arenas

1. Introduction

This chapter describes pre-editing and post-editing concepts and processes derived from the use of machine translation (MT) in professional and academic translation environments, as well as post-editing research methods. It will also summarize industry and academic research findings in two sections; however, a strict separation of this research is somewhat artificial, since industry and academia often collaborate closely to gather data in this new field.

Although the pre-editing and post-editing (PE) of raw output (see the following section for a description of both editing concepts) have been implemented in some organizations since as early as the 1980s (by the Pan American Health Organization and the European Commission, for example), it is only in the last ten years that MT PE has been increasingly introduced as part of the standard translation workflow in most localization agencies worldwide (Lommel and DePalma 2016a).

The introduction of MT technology has caused some disruption in the translation community. This is mainly due to the quality of the output to post-edit, on occasions too low to be of benefit; the time allowed for the PE task, with sometimes overly optimistic deadlines; and the price paid for the PE assignment, where frequently a discount on the rate per word is applied. Since this was, and to some extent still is, a recent technology, there is a need to gain more knowledge about how it affects the standard translation workflow and the agents involved in this process. Although this is a continuous endeavour, some key concepts and processes have emerged in recent years. This first section will offer a description of such concepts: controlled language and pre-editing, PE, types of PE (full and light), types of quality expected (human and ‘good enough’) and PE guidelines. It will close with a brief description of an MT output error typology.


1.1. Description of key concepts

MT is constantly evolving, and additional terms will necessarily be created. The concepts explained here are therefore derived from current uses of MT technology.

1.1.1. Controlled language and pre-editing

In this context, controlled language refers to certain rules applied when writing technical texts to avoid lexical ambiguity and complex grammatical structures, thus making it easier for the user to read and understand the text and, consequently, easier to apply technology such as translation memories (TMs) or MT. Controlled language focuses mainly on vocabulary and grammar and is intended for very specific domains, even for specific companies (O’Brien 2003). Texts have a consistent and direct style as a result of this process, and they are easier and cheaper to translate. Pre-editing involves the use of a set of terminological and stylistic guidelines or rules to prepare the original text before translation automation to improve the raw output quality. Pre-editors follow certain rules, not only to remove typographical errors and correct possible mistakes in the content but also to write shorter sentences, to use certain grammatical structures (simplified word order and less passive voice, for example) or semantic choices (the use of consistent terminology), or to mark certain terms (product names, for example) that might not require translation. There is software available, such as Acrolinx,1 that helps pre-edit the content automatically by adding terminological, grammatical and stylistic rules to a database that is then run on the source text.
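A trivial rule checker in this spirit might look as follows. The three rules (a maximum sentence length, a crude passive-voice pattern and a protected product name) are invented for this sketch; they are not taken from any real controlled-language style guide or from Acrolinx.

```python
import re

# A toy pre-editing checker: flag long sentences, a simple passive-voice
# pattern and a do-not-translate product name. All rules are hypothetical.

MAX_WORDS = 20
PASSIVE = re.compile(r"\b(is|are|was|were|been|being)\s+\w+ed\b", re.I)
PROTECTED = {"AcmeTranslate"}   # hypothetical product name

def check_sentence(sentence):
    issues = []
    if len(sentence.split()) > MAX_WORDS:
        issues.append("sentence too long: consider splitting")
    if PASSIVE.search(sentence):
        issues.append("passive voice: consider rewriting actively")
    for term in PROTECTED:
        if term in sentence:
            issues.append(f"mark '{term}' as do-not-translate")
    return issues

issues = check_sentence("The file was deleted by AcmeTranslate.")
```

A real pre-editing tool would combine many such rules with terminology databases and apply them across the whole source text before translation.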

1.1.2.  Post-editing and automatic post-editing

To post-edit can be defined as 'to edit, modify and/or correct pre-translated text that has been processed by an MT system from a source language into (a) target language(s)' (Allen 2003: 296). In other words, PE means to review a pre-translated text generated by an MT engine against the original source text, correcting possible errors to comply with specific quality criteria. It is important to underline that these quality criteria should be agreed with the customer before the PE task begins. Another important consideration is that the purpose of the task is to increase productivity and to accelerate the translation process. Thus, PE also involves discarding MT segments that would take longer to post-edit than to translate from scratch or to process using an existing TM. Depending on the quality provided by the MT engine and the level of

Pre-Editing and Post-Editing


quality expected by the customer, the output might require translating again from scratch (if it is not useful), correcting many errors, correcting a few errors or simply accepting the proposal without any change. Finally, automatic PE (APE) is a process by which the output is corrected automatically before sending the text for human PE or before publication, for example, by using a set of regular expressions based on error patterns found in the MT output or by using data-driven approaches (Chatterjee et al. 2015) with the goal of reducing the human effort on the final text.
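A minimal sketch of regex-based APE in the spirit of the error-pattern approach described above; both error patterns are invented examples rather than rules from any real engine.

```python
import re

# Toy automatic post-editing rules keyed to recurring error patterns in a
# hypothetical engine's raw output; real APE systems are far more complex.
APE_RULES = [
    (re.compile(r"\s+([,.;:!?])"), r"\1"),                # no space before punctuation
    (re.compile(r"\b(\w+) \1\b", re.IGNORECASE), r"\1"),  # collapse duplicated words
]

def auto_post_edit(raw_mt: str) -> str:
    for pattern, replacement in APE_RULES:
        raw_mt = pattern.sub(replacement, raw_mt)
    return raw_mt

print(auto_post_edit("The the report is ready , please review it ."))
# → The report is ready, please review it.
```

Because such rules run before human PE, every systematic error they remove is one the post-editor no longer has to correct by hand.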

1.1.3.  Types of post-editing: Light and full

In MT and PE, it is common to classify texts as texts for assimilation – where the aim is to roughly understand a text in another language – or texts for dissemination – where the aim is to publish a text in several languages for a wide audience. Depending on this classification, the level of PE will vary. In the first case, assimilation, the text needs to be understandable and accurate, but style is not fundamental and some grammatical and spelling errors are even permitted. In the second case, dissemination, the text needs to be understandable and accurate, but the style, grammar, spelling and terminology also need to be comparable to the standard produced by a human translator. Therefore, PE is also classified according to the degree of editing: full PE – human quality – and rapid or light PE – minimal corrections for text 'gisting'. Between full and light PE, there might be a wide range of options that result from discussions with the end user before the task begins. In other words, PE can be as thorough or as superficial as required, although a partial degree of PE might be difficult to implement for a human post-editor accustomed to correcting all the errors in a text.

1.1.4.  Types of quality expected and post-editing guidelines

Establishing the quality expected by the customer will help to determine the price as well as the specific instructions for post-editors. If this is not done, some translators might correct only major errors, thinking that they are obliged to utilize the MT proposal as much as possible, while others will correct major errors, minor errors and even acceptable proposals because they feel the text must be as close to human quality as possible. In general terms, customers know their users and the type of text they want to produce. Consequently, post-editors should be made aware of the expected quality through specific PE guidelines.


The TAUS Guidelines2 are widely followed in the localization industry, with slight changes and adaptations. They distinguish between two types of quality: 'high-quality human translation', where full PE is recommended, and 'good enough' or 'fit for purpose', where light PE is recommended. Way (2013) explains that this dichotomy is too narrow to explain the different uses of MT and the expected quality ranges. He refers to various levels of quality according to the fitness for purpose of translations and the perishability of content; he emphasizes that the degree of PE is directly related to the content lifespan (Way 2013: 2). Whatever the result expected, the quality of the output is key in determining the type of PE: light PE applied to very poor output might result in an incomprehensible text, while light PE applied to very good output might result in a publishable text. Therefore, since the quality of the output is the main determiner of the necessary PE effort, the TAUS PE guidelines suggest focusing on the final quality of the text. The guidelines for good enough quality are as follows:

a. Aim for semantically correct translation.
b. Ensure that no information has been accidentally added or omitted.
c. Edit any offensive, inappropriate or culturally unacceptable content.
d. Use as much of the raw MT output as possible.
e. Apply basic rules regarding spelling.
f. No need to implement corrections that are of a stylistic nature only.
g. No need to restructure sentences solely to improve the natural flow of the text.

While for human quality, they are as follows:

a. Aim for grammatically, syntactically and semantically correct translation.
b. Ensure that key terminology is correctly translated and that untranslated terms belong to the client's list of 'Do Not Translate' terms.
c. Ensure that no information has been accidentally added or omitted.
d. Edit any offensive, inappropriate or culturally unacceptable content.
e. Use as much of the raw MT output as possible.
f. Apply basic rules regarding spelling, punctuation and hyphenation.
g. Ensure that formatting is correct.

These are general guidelines; the specific guidelines will depend on the project specifications: the language combination, the type of engine, the quality of the output, the terminology, etc. It is necessary that the guidelines are language specific so as to provide post-editors with real examples of what can or cannot be changed. The guidelines could cover the following areas:

a. A description of the type of engine used.
b. A description of the source text, its type and structure.
c. A brief description of the quality of the output for that language combination. This could include automatic scores or human evaluation scores.
d. The expected final quality to be delivered to the customer.
e. Scenarios for when to discard segments, so that post-editors have an idea of how much time to spend 'recycling' a segment or whether to discard it altogether.
f. Typical types of errors for that language combination that should be corrected, including reference to difficult areas such as tagging, links or omissions.
g. Changes to be avoided in accordance with the customer's expectations, for example, certain stylistic changes that might not be necessary.
h. How to deal with terminology. The terminology provided by MT could be perfect, obsolete or a combination of the two. The productivity advantages of PE could be offset by terminological changes if the engine has been trained with old terminology or if it is not delivering accurate terms.
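To illustrate, the areas listed above could be captured as a machine-readable project specification that travels with the PE assignment; every value in this sketch is invented.

```python
# Invented example of project-specific PE guidelines captured as data;
# every value here is hypothetical.
pe_guidelines = {
    "engine": "neural MT engine trained on client TMs",
    "source_text": "software UI strings, XML with inline tags",
    "output_quality": "fluent overall, but inline tags are sometimes dropped",
    "expected_quality": "full post-editing (publishable, human quality)",
    "discard_if": "segment would need a near-total rewrite",
    "typical_errors": ["missing inline tags", "untranslated variables"],
    "avoid": ["purely stylistic rewording", "reordering for flow only"],
    "terminology": "client glossary overrides engine terminology",
}

# Print the guidelines as a briefing sheet for post-editors.
for area, instruction in pe_guidelines.items():
    print(f"{area}: {instruction}")
```

Keeping the guidelines in a structured form makes it easier to reuse and adapt them from one project, engine or language combination to the next.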

1.1.5.  MT output error typology

There are many MT error classifications, the aim of which is not only to improve MT output by providing feedback to the engines but also to raise awareness among post-editors of the types of errors they will need to correct. If post-editors know the types of errors frequently found in the output before performing the task, it is easier for them to spot and correct these errors. Moreover, unnecessary changes are avoided and less frustration is generated when errors that are unusual in human translations are found. In the last twenty years, there have been different error classifications depending on the type of engine, language pair and aim of the study. For example, Laurian (1984); Loffler-Laurian (1983); Krings (2001); Schäffer (2003); Vilar et al. (2006) and Farrús et al. (2011) offer very extensive error classifications adapted to the type of engine and language combination in use, with the aim of understanding in which linguistic areas the machine encounters problems.


However, these lists, while useful for the purpose intended, are not practical for the evaluation of MT errors in commercial environments. Moreover, as the quality of MT output improves and MT becomes fully integrated in the standard localization workflow, error typologies like the ones applied to human translations are being used, such as the LISA Quality Model,3 SAE J24504 or BS EN15038.5 Recently, and in a similar vein, TAUS has created its Dynamic Quality Framework (DQF)6 (see also Görög 2014), which is widely used in the localization industry and in PE research. The model allows users to measure MT productivity, rank MT engines, evaluate adequacy and fluency and/or classify output errors. Finally, the Multidimensional Quality Metrics (MQM)7 is an error typology metric that was developed as part of the EU-funded QTLaunchPad project based on the examination and extension of more than one hundred existing quality models. These frameworks have common error classifications: Accuracy (referring to misunderstandings of the source text), Language (referring to grammar and spelling errors), Terminology (referring to errors showing divergence from the customer's glossaries), Style (referring to errors showing divergence from the customer's style guide), Country Standards (referring to errors related to dates, currency, number formats and addresses) and Format (referring to errors in page layout, index entries, links and tagging). Severity levels (from 1 to 3, for example) can be applied according to the importance of the error found. These frameworks can be customized to suit a project, specific content, a language service provider (LSP) or a customer. Recently, a new metric for error typology has been developed based on the harmonization of the MQM and DQF frameworks, and it is available through an open TAUS DQF API. This harmonization allows errors to be classified, firstly, according to a broader DQF error typology and, subsequently, by the subcategories defined in MQM.
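As a minimal illustration, an error annotation with severity levels might be modelled as follows. The top-level categories mirror those listed above, but the severity weights and the normalization per 100 words are illustrative conventions, not values prescribed by MQM or DQF.

```python
from dataclasses import dataclass

# Shared top-level categories of the frameworks discussed above.
CATEGORIES = {"Accuracy", "Language", "Terminology", "Style",
              "Country Standards", "Format"}
# Illustrative severity weights (1 = minor, 3 = critical); invented values.
SEVERITY_WEIGHTS = {1: 0.5, 2: 1.0, 3: 5.0}

@dataclass
class ErrorAnnotation:
    segment_id: int
    category: str
    severity: int  # 1 to 3
    note: str = ""

    def __post_init__(self):
        # Reject annotations outside the agreed typology.
        assert self.category in CATEGORIES and self.severity in SEVERITY_WEIGHTS

def penalty_score(errors, evaluated_words):
    """Weighted errors per 100 words, a common way to normalize scores."""
    total = sum(SEVERITY_WEIGHTS[e.severity] for e in errors)
    return 100 * total / evaluated_words

errors = [
    ErrorAnnotation(1, "Terminology", 2, "glossary term not used"),
    ErrorAnnotation(3, "Format", 3, "broken inline tag"),
]
print(penalty_score(errors, evaluated_words=250))  # → 2.4
```

A single normalized score of this kind is what allows output from different engines, languages or vendors to be compared against one agreed quality threshold.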

2.  Research focal points

This section will give an overview of the research carried out in this field. It will start with a description of the methods used to gather data, both quantitative and qualitative, such as screen recording, keyboard logging, eye-tracking, think-aloud protocols, interviews, questionnaires and specifically designed software. Following this description, the principal areas of research will be discussed.


2.1.  Research methods

In the last decade, there has been an increase in PE research as the commercial demand for this service has increased (O'Brien and Simard 2014: 159). As PE was a relatively new area of study, there were many factors to analyse in order to answer whether MT was, at its core, a useful tool for translators, and whether translators using this tool would be faster while maintaining the same quality. At the same time, new methods were needed to gather relevant data, since all actions were happening 'inside a computer' and 'inside the translators' brains'. Therefore, methods were sometimes created specifically for translation studies (TS) and, at other times, methods from related areas of knowledge (primarily psychology and the cognitive sciences) were applied experimentally to translation. What follows is a summary of the methods used in PE research.

2.1.1.  Screen recording

With this method, software installed on a computer records all screen and audio activity and creates a video file for subsequent analysis. Screen recording is used in PE experiments so that post-editors can use their standard tools (rather than an experimental tool designed specifically for research) while their actions are recorded. Some examples of screen recording in PE research can be found in De Almeida and O'Brien (2010), Teixeira (2011) and Läubli et al. (2013). There are several software applications available on the market, such as CamStudio,8 Camtasia Studio9 and Flashback.10 These applications change and improve as new generations of computers come onto the market.

2.1.2.  Keyboard logging

There are many software applications that allow the computer to register the user's mouse and keyboard actions. Translog-II11 is a software program created by CRITT12 that is used to record and study human reading and writing processes on a computer; it is extensively used in translation process research. Translog-II produces log files which contain user activity data of reading, writing and translation processes, and which can be evaluated by external tools (Carl 2012).
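A simplified sketch of the kind of analysis such logs enable: given timestamped keystroke events (the log format below is invented, not Translog-II's actual schema), pauses above a threshold can be extracted as a crude indicator of hesitation.

```python
# Timestamped keystroke events (milliseconds, key); the format is invented
# for illustration only.
events = [(0, "T"), (120, "h"), (230, "e"), (2900, " "),
          (3010, "c"), (3150, "a"), (3290, "t")]

PAUSE_THRESHOLD_MS = 1000  # pauses over 1 s are often read as hesitation

def pauses(log, threshold):
    """Inter-keystroke gaps at or above the threshold, in milliseconds."""
    gaps = [b[0] - a[0] for a, b in zip(log, log[1:])]
    return [g for g in gaps if g >= threshold]

print(pauses(events, PAUSE_THRESHOLD_MS))  # → [2670]
```

In process research, the location of such pauses (before a segment, mid-word, before a revision) is often more informative than their number alone.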

2.1.3.  Eye-tracking

This method allows researchers to measure a user's eye movements and fixations on a screen using a light source and a camera, generally integrated in a computer, to capture the pupil centre corneal reflection (PCCR). The data gathered is written to a file that can then be read with eye-tracking software. Eye data (fixations and gaze direction) can indicate the cognitive state and load of a research participant. One of the most popular software applications available on the market is Tobii Studio Software,13 used for the analysis and visualization of data from screen-based eye trackers. Eye-tracking has been used in readability, comprehension, writing, editing and usability research. More importantly, it is regarded as an adequate tool to measure cognitive effort in MT and PE studies (Doherty and O'Brien 2009; Doherty, O'Brien and Carl 2010) and has been used (and is being used) in numerous studies dealing with the PE task (Alves et al. 2016; Carl et al. 2011; Doherty and O'Brien 2014; O'Brien 2006, 2011; Vieira 2016).
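For illustration, a basic fixation-based measure can be computed from exported records; the record format and values below are invented, not an actual eye-tracker export.

```python
# Toy fixation records (area of interest, duration in ms); real exports
# from screen-based trackers are far richer. All values are invented.
fixations = [("source", 210), ("mt_output", 480), ("mt_output", 530),
             ("source", 190), ("mt_output", 620)]

def total_fixation_time(records, area):
    """Summed fixation duration on one area of interest; longer dwell
    time on a region is commonly read as higher cognitive effort."""
    return sum(ms for aoi, ms in records if aoi == area)

print(total_fixation_time(fixations, "mt_output"))  # → 1630
```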

2.1.4.  Think-aloud protocols

Think-aloud protocols (TAPs) have been used in psychology and the cognitive sciences, and were then applied to TS to acquire knowledge about translation processes from the point of view of practitioners. They involve translators thinking aloud as they perform a set of specified tasks. Participants are asked to say what comes into their mind as they complete the task. This might include what they are looking at, thinking, doing and feeling. TAPs can also be post hoc; that is, the participants explain their actions after seeing an on-screen recording or gaze replay of their task. This is known as retrospective think-aloud (RTA) or retrospective verbal protocols (RVP). All verbalizations are transcribed and then analysed. TAPs have been used extensively to define translation processes (Jääskeläinen and Tirkkonen-Condit 1991; Lörscher 1991; Krings 2001; Kussmaul and Tirkkonen-Condit 1995; Séguinot 1996). Over time, some drawbacks of TAPs have been identified: having to explain one's actions obviously alters the time needed to complete them; professional translators think faster than they can verbalize; and the act of verbalizing might itself affect cognitive processes (Jakobsen 2003; O'Brien 2005). However, this is not to say that TAPs have been discarded as a research tool. Vieira (2016), for instance, shows that TAPs correlate with eye movements and subjective ratings as measures of cognitive effort.

2.1.5.  Interviews

Researchers use interviews to elicit information from translators through questions. This technique is used in PE research especially to find out what translators think about a process, a tool or their working conditions (e.g. in Guerberof 2013; Moorkens and O'Brien 2013; Sanchís-Trilles et al. 2014; Teixeira 2014). Evidently, interviewees do not necessarily report what they actually do or believe, but interviews are especially useful in a mixed-methods approach to supplement hard data with translators' opinions and thoughts. This technique is increasingly used in TS, and thus also in PE research, as 'the discipline expands beyond the realm of linguistics, literature and cultural studies, and takes upon itself the task of integrating the sociological dimension of translation' (Saldanha and O'Brien 2013: 168); for example, by shifting the focus from texts and translation processes to status in society, working spaces and conditions, and translators' opinions and perceptions of technology, to name but a few. Interviews, like questionnaires, require careful planning to obtain meaningful data. As with other methods, there is software to help researchers transcribe interviews (such as Google Docs, Dedoose,14 Dragon15) and to then carry out qualitative analysis by coding the responses (NVivo,16 MAXQDA,17 ATLAS.ti18 and Dedoose).

2.1.6.  Questionnaires

Another way of eliciting information from participants in PE research is to use questionnaires. As with interviews, creating a solid questionnaire is not an easy task, and it needs to be carefully researched, planned and tested. Questionnaires have been used extensively in PE research to find out, through open or closed questions, more about a translator's profile (gender, age, background and experience, for example), to gather information after the actual experiment on opinions about a tool, or to assess a participant's attitude towards or perception of MT. The results can be used to clarify quantitative data (see Castilho et al. 2014; De Almeida 2013; Gaspari et al. 2014; Guerberof 2009; Krings 2001; Mesa-Lao 2014; Sanchís-Trilles et al. 2014). Google Forms and SurveyMonkey19 are two of the applications frequently used because they facilitate questionnaire distribution, data gathering and analysis.

2.1.7.  Online tools for post-editing

Several tools have been developed as part of academic projects that were designed specifically to measure PE activities or to facilitate this task for professional post-editors. These include CASMACAT (Ortiz-Martínez et al. 2012), MateCat (Federico, Cattelan and Trombetti 2012), Appraise (Federmann 2012), iOmegaT (Moran 2012), TransCenter (Denkowski and Lavie 2012), PET Post-Editing Tool (Aziz et al. 2012), ACCEPT Post-Editing Environment (Roturier et al. 2013) and Kanjingo (O'Brien, Moorkens and Vreeke 2014). As we saw in Section 1.1.5, there are also private initiatives that have led to the creation of tools used in PE tasks, tools that have also been used in research, such as the TAUS Dynamic Quality Framework (Görög 2014). As well as these tools, standard CAT (computer-assisted translation) tools such as SDL Trados,20 Kilgray MemoQ,21 Memsource22 or Lilt23 are also used for research, especially if the objective is to have a working environment as close as possible to that of a professional post-editor. Moreover, these tools have implemented APIs that allow users to connect to the MT engine of their choice, bringing MT closer to translators and allowing them to integrate MT output seamlessly into their workflows.

2.2.  Research areas

Until very recently, much of the information about PE was company specific, since companies carried out their own internal research using their own preferred engines, processes, PE guidelines and desired final target text quality. This information stayed within the company as it was confidential. TS as a discipline was not particularly interested in the world of computer-aided translation or MT until around the early 2000s. Between the 1980s and 1990s, several articles were published with information on MT implementation in different organizations, describing the different tasks, processes and profiles in PE (Senez 1998; Vasconcellos 1986, 1989, 1992, 1993; Vasconcellos and León 1985; Wagner 1985, 1987) and specifying the various levels of PE (Loffler-Laurian 1983). These articles were descriptive in nature, as they intended to explain how the implementation of MT worked within a translation cycle. They also gave some indication of the productivity gains under certain restricted workflows.

The most extensive research on PE at the time was carried out by Krings (2001). In his book Repairing Texts, Krings focuses on the mental processes involved in MT PE using TAPs. He defines and tests temporal effort (words processed in a given time), cognitive effort (post-editors' mental processing) and technical effort (post-editors' physical actions, such as keyboard or mouse activities) in a series of tests. Since this initial phase, interest in MT and PE has grown exponentially, focusing mainly on the following topics: controlled authoring, pre-editing and PE effort, where the effort can be temporal, technical or cognitive; the quality of post-edited material versus the quality of human-translated material; PE effort versus TM-match effort and translation effort, where the effort can also be temporal (to discover more about productivity), technical or cognitive; the role of experience in PE, that is, experienced versus novice translators' performance in relation to quality and productivity; correlations between automatic evaluation metrics (AEM) and PE effort; MT confidence estimation (CE); interactive translation prediction (ITP); monolingual versus multilingual post-editors in community-based PE (with user-generated content); usability and PE; and MT and PE training.
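Krings's three effort dimensions can be roughly operationalized from a logged PE session; the session figures below are invented, and real studies compute far more fine-grained measures.

```python
# Invented figures from a hypothetical logged PE session.
session = {
    "words_processed": 540,
    "seconds": 1800,       # basis for temporal effort
    "keystrokes": 1290,    # technical effort: keyboard activity
    "mouse_clicks": 75,    # technical effort: mouse activity
    "pauses_over_1s": 42,  # crude proxy for cognitive effort
}

# Temporal effort: words processed per minute.
temporal = session["words_processed"] / (session["seconds"] / 60)
# Technical effort: input events needed per processed word.
technical = (session["keystrokes"] + session["mouse_clicks"]) / session["words_processed"]

print(round(temporal, 1))   # → 18.0
print(round(technical, 2))  # → 2.53
```

Cognitive effort has no equally direct observable counterpart, which is why pause patterns, eye-tracking data and verbal reports are used as indirect indicators.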

3.  Informing research through the industry

In the early 1990s, the localization industry needed new avenues to deliver translated products for a very demanding market that required larger volumes of words and more languages, with faster delivery times and lower prices. Because the content was often repetitive – translators would recycle content by cutting and pasting – the idea of creating databases of bilingual data (TMs) began taking shape (Arthern 1979; Kay 1980/97; Melby 1982), before Trados GmbH commercialized the first versions of Trados for DOS and MultiTerm for Windows 3.1 in 1992. At the same time, and as mentioned in the previous section, institutions and private companies were implementing MT to either increase the speed or lower the costs of their translations while maintaining quality. Seminal articles appeared that paved the way for future research on PE (see 'Research areas' above).

Once the use of TMs was fully functional in the localization workflow, further avenues to produce more translations, in less time and at lower costs while maintaining quality, were also explored. In the mid-2000s, companies like Google, Microsoft, IBM, Autodesk, Adobe and SAP, to name but a few, not only created and contributed to advances in MT technology (see Section 2) but also designed new translation workflows with the dedicated support of their language service providers (LSPs). These new workflows included the pre-editing and PE of raw output in different languages, the creation of new guidelines for working in this environment and the training of translators to incorporate this technology without disrupting production or release dates. Moreover, these changes had an impact on how translators were paid.

Since then, many companies have presented results from work done internally using MT in combination with PE (or without PE) at conferences such as Localization World, GALA and the TAUS forums, or at more specialized conferences such as AMTA, EAMT or the MT Summit. Often, because of confidentiality issues, the articles that have subsequently appeared are not able to present detailed data to explain the results; in other cases, measurements have been taken in live settings with too many variables to properly isolate causes. Nevertheless, the companies involved have access to big data, with high volumes and many languages from real projects, to test the use of MT and PE; the results are therefore highly relevant in so far as they represent the reality of the industry. In general, companies seek information on productivity (with a specific engine, language or type of content), quality (with a focus on their own needs or their customers' needs if they are LSPs), post-editor profiling, systems improvement or MT ecosystems. Some relevant published articles are those by Adobe (Flournoy and Duran 2009), Autodesk (Plitt and Masselot 2010; Zhechev 2012), Continental Airlines (Beaton and Contreras 2010), IBM (Roukos et al. 2011), Sybase (Bier and Herranz 2011), PayPal (Beregovaya and Yanishevsky 2010), CA (Paladini 2011) and WeLocalize (Casanellas and Marg 2014; O'Curran 2014). These articles report high productivity gains in the translation cycle thanks to the use of MT and PE, without compromising final quality. They reflect on the diversity of results depending on the engine, language combination, translator and content. Initial reports came from large corporations or government agencies, understandably so, since the initial investment at the time was high; more recently, however, LSPs have also been reporting positive findings on the use of the technology as part of their own localization cycle. As well as large companies, consultancy firms have shed some light on the PE task and on the use of MT within the language industry.
These consultancy companies have used data and standard processes to support business decisions made in the localization industry, sometimes in collaboration with academia. This is the case of TAUS, a community of users and providers of translation technologies and services that has worked intensively, as mentioned above, on defining a common quality framework to evaluate MT output (see DQF). Common Sense Advisory (CSA), a research and consulting firm, has also published several articles about PE (DePalma 2013), MT technology, pricing models, MT technology providers (Lommel and DePalma 2016b), neural MT (Lommel 2017) and surveys on the use of PE in LSPs (Lommel and DePalma 2016a). The aim is to provide information for large and small LSPs on how to implement MT, and to gather common practices and offer recommendations on PE. At the same time, they provide academia with information on the state of the art in MT and PE in the localization industry. Recently, Slator,24 a company that provides business intelligence for the language market through a website, publications and conferences, has released a report on neural machine translation (NMT).

4.  Informing the industry through research

The language industry often lacks the time to invest in research and, as mentioned in the previous section, its measurements are taken in live settings with so many variables that conclusions might be clouded. It is therefore essential that academic research focuses on aspects of the PE task where more time is available and a scientific method can be applied. Academic researchers have frequently worked together with industry to analyse in depth how technology shapes translation processes, collaborating not only with MT providers and LSPs but also with freelancers and users of technical translations. The bulk of the research presented here has therefore not been carried out in an isolated academic setting but together with commercial partners. The overview below is necessarily incomplete and subjective, as it is difficult to present all the research on PE done in the last ten years given the sharp increase in the number of publications. In the following, we therefore present a summary of the findings of those studies considered important for the community, innovative at the time or stepping stones for future research.

With respect to pre-editing and controlled language, studies indicate that, while pre-editing activities reduce PE effort for specific languages and engines, these changes are not always easy to implement, that they require a heavy initial investment and that PE is sometimes still needed after pre-editing has been performed (Aikawa et al. 2007; Bouillon et al. 2014; Gerlach 2015; Gerlach et al. 2013; Miyata et al. 2017; O'Brien 2006; Temnikova 2010). Therefore, if pre-editing is to be applied to an actual localization project, a careful evaluation of the specific project is necessary to establish whether the initial investment in the pre-editing task will have a real impact on the PE effort.
A close look at PE in practice shows that the combination of MT and PE increases translators' productivity (Aranberri et al. 2014; Carl et al. 2011; De Sousa, Aziz and Specia 2011; De Sutter and Depraetere 2012; Federico, Cattelan and Trombetti 2012; García 2010, 2011; Green, Heer and Manning 2013; Guerberof 2012; Läubli et al. 2013; O'Brien 2006; Ortíz and Matamala 2016; Parra and Arcedillo 2015b; Tatsumi 2010), but this is often only applicable to very specific environments with highly customized engines, to suitable content, to certain language combinations, to specific guidelines and/or to suitably trained translators and to those with open attitudes to MT. It is important to note the high inter-subject variability when looking at post-editors' productivity. This is relevant because it signals the difficulty of establishing standards for measuring time, which adds to the complexity of individual pricing as opposed to a general discount per project or language.

Even if productivity increases, an analysis of the quality of the product is needed. Studies have shown that reviewers see very little difference between the quality of post-edited segments (edited to human quality) and human-translated segments (Carl et al. 2011; De Sutter and Depraetere 2012; Fiederer and O'Brien 2009; Green, Heer and Manning 2013; Guerberof 2012; Ortíz and Matamala 2016), and in some cases the quality might even be higher with MT. In view of the high inter-subject variability, other variables have been examined to explain this phenomenon; for example, experience is always identified as an important variable when productivity and quality are measured. However, there is no conclusive data when it comes to experience. It seems that post-editors work in diverse ways, varying not only in the time they take and the type and number of changes they make but also according to experience (Aranberri et al. 2014; De Almeida 2013; De Almeida and O'Brien 2010; Guerberof 2012; Moorkens and O'Brien 2015). In some cases, professionals work faster and achieve higher quality; in others, experience is not a factor. It has also been noted that novice translators might have an open attitude towards MT that could facilitate training and the acceptance of MT projects. Experienced translators, on the other hand, might do better at following PE guidelines, work faster and produce higher final quality, although this is not always the case.
Since TMs have long been fully implemented in the language industry and their prices long established, the correlation between MT output and fuzzy matches from TMs has also been studied to see if MT pricing could somehow mirror TM pricing (Guerberof 2009, 2012; O'Brien 2006; Parra and Arcedillo 2015a; Tatsumi 2010). The results suggest that, in terms of productivity, there are correlations between MT output and TM segments above 75 per cent. It is surprising that the overall processing of MT matches seems to correlate with high fuzzy matches rather than with low fuzzy matches. However, the fact that CAT tools highlight the required changes in TM matches, and not in MT proposals, facilitates the work done by translators when using TMs.

In a comparable way, the correlation between AEM and PE effort has been examined to see if, by analysing the output automatically, it is easier to infer the post-editor's effort. This correlation, however, seems to be more accurate globally than on a per-segment basis or for certain types of sentences, such as longer ones (De Sutter and Depraetere 2012; Guerberof 2012; O'Brien 2011; Parra and Arcedillo 2015b; Tatsumi 2010; Vieira 2014), and there are also discrepancies in matching automatic scores with actual productivity levels. Therefore, these metrics, although they can give an indication of the PE effort, might be difficult to apply in a professional context. To bridge this gap and simulate the fuzzy match scores that TMs offer, there have been several studies on MT confidence estimation (CE) (He et al. 2010; Huang et al. 2014; Specia 2011; Specia et al. 2009a, 2009b; Turchi et al. 2015). CE is a mechanism by which post-editors are informed about the quality of the output in a way comparable to that of TMs. It enables translators to determine rapidly whether an individual MT proposal will be useful, or whether it would be more productive to ignore it and translate from scratch, thus enhancing productivity. The results show that CE can be useful to speed up the PE process, but this is not always the case. Translators' variability regarding productivity, together with the fact that these scores might not be fully accurate given the technical complexity involved and the nature of the content, has made it difficult to fully implement CE in a commercial context. Another method created to aid post-editors is ITP (Alves et al. 2016; Pérez-Ortiz, Torregrosa and Forcada 2014; Sanchís-Trilles et al. 2014; Underwood et al. 2014). ITP helps post-editors in their task by changing the MT proposal in real time according to the textual choices they make. Although studies show that the system does not necessarily increase productivity, it can reduce the number of keystrokes and the cognitive effort without impacting quality. Even if productivity increases while quality is maintained, actual experience shows that PE is a tiring task for translators. Therefore, PE and cognitive effort have been explored from different angles.
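Many of the measures discussed above reduce PE effort to the distance between the raw output and its post-edited version. A minimal word-level sketch in that spirit follows; it is an illustration, not an implementation of any official metric such as TER/HTER, and the example segments are invented.

```python
from difflib import SequenceMatcher

def pe_effort(raw_mt: str, post_edited: str) -> float:
    """Word-level edit ratio between raw MT and its post-edited version:
    0.0 means the output was left untouched; higher means more editing."""
    mt, pe = raw_mt.split(), post_edited.split()
    matcher = SequenceMatcher(None, mt, pe)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return 1 - matched / max(len(pe), 1)

print(round(pe_effort("the house blue is big", "the blue house is big"), 2))  # → 0.2
```

As the studies cited above note, such edit-based scores capture technical effort reasonably well but say little about the cognitive effort behind a small, hard-won change.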
There are studies that highlight the fact that PE requires higher cognitive effort than translation and that describe the cognitive complexity of PE tasks (Krings 2001; O’Brien 2006, 2017). There is also research that explores which aspects of the output or the source text require higher cognitive effort and are therefore problematic for post-editors. This effort seems to vary according to the target-language structure, with more instances of high effort noted for incorrect syntax, word order, mistranslations and mistranslated idioms (Daems et al. 2015, 2017; Koponen et al. 2012; Lacruz and Shreve 2014; Lacruz, Shreve and Angelone 2012; Popović et al. 2014; Temnikova 2010; Temnikova et al. 2016; Vieira 2014).

Users of MT, however, are not only translators. Some research looks at monolingual PE and finds that it can lead to fluency and comprehensibility scores comparable to those achieved through bilingual PE; fidelity, however, improved considerably more with bilingual post-editors. As is usual in this type of research, performance across post-editors varies greatly (Hu, Bederson and Resnik 2010; Koehn 2010; Mitchell, Roturier and O’Brien 2013; Mitchell, O’Brien and Roturier 2014; Nitzke 2016). MT and PE can be a useful tool for lay users given an appropriate set of recommendations and best practices. At the same time, even bilingual PE by lay users can help bridge the information gap in certain environments, such as that of health organizations (Laurenzi et al. 2013).

MT usability and PE have not been extensively explored yet, although there is relevant work indicating that usability increases when users read the original text, or even a lightly post-edited text, as opposed to raw MT output (Castilho 2016; Castilho et al. 2014; Doherty and O’Brien 2012, 2014; O’Brien and Castilho 2016). However, users can complete most tasks using the raw MT output even if the experience is less satisfactory, though here again the results vary considerably depending on the language and, therefore, on the quality of the raw output.

Regarding MT and PE training for translators, the skills needed have been described by O’Brien (2002), Rico and Torrejón (2012) and Pym (2013), while syllabi have been designed, and courses explained and described, by Doherty, Kenny and Way (2012), Doherty and Moorkens (2013), Doherty and Kenny (2014), Koponen (2015) and Mellinger (2017). The suggestions include teaching basic MT technology concepts, MT evaluation techniques, statistical MT (SMT) training, pre-editing and controlled language, monolingual PE, understanding the various levels of PE (light and full), creating guidelines, output error identification, deciding when to discard unusable segments, and continuous PE practice.
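One of the effort indicators cited above, the average pause ratio of Lacruz, Shreve and Angelone (2012), can be approximated directly from keystroke logs of the kind produced by key-logging tools. The sketch below is a rough operationalization under stated assumptions: per-segment timestamps, a one-second pause threshold, and the ratio computed as mean time per pause over mean time per word; the exact threshold and formulation vary across studies.

```python
def average_pause_ratio(keystroke_times, n_words, pause_threshold=1.0):
    """Average pause ratio: mean time per pause divided by mean time per word.

    keystroke_times: sorted timestamps (seconds) of keystrokes in one segment.
    n_words: number of words in the segment.
    """
    # A 'pause' is any inter-keystroke interval at or above the threshold.
    pauses = [b - a for a, b in zip(keystroke_times, keystroke_times[1:])
              if b - a >= pause_threshold]
    if not pauses:
        return 0.0
    segment_time = keystroke_times[-1] - keystroke_times[0]
    mean_pause = sum(pauses) / len(pauses)
    mean_time_per_word = segment_time / n_words
    return mean_pause / mean_time_per_word

# Toy log: two long pauses (0.2s -> 1.5s and 1.6s -> 3.0s) while editing a
# two-word segment; the ratio comes out just under 1.
apr = average_pause_ratio([0.0, 0.2, 1.5, 1.6, 3.0], n_words=2)
```

In the original case study, lower ratios (many short pauses rather than a few long ones) tended to accompany greater cognitive effort, which is why pause clustering, not just total pause time, is of interest.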

5. Concluding remarks

As this chapter has shown, research in PE is an area of considerable interest to academia and industry. As MT technology changes, for example through the development of NMT, findings need to be revisited to test how PE effort is affected and, indeed, whether PE is necessary at all for certain products or purposes (e.g. when dealing with forums, chats and knowledge bases). Logic would indicate that, as MT engines improve, the PE task will become ‘easier’; that is, translators will be able to process more words in less time and with lower technical and cognitive effort. However, this will vary depending on the engine, the language combination and the domain. SMT models have been customized and long implemented in the language industry, with adjustments for specific workflows (customers, content and language combinations). From a PE perspective, and even from a developer’s perspective, NMT is still in its infancy. Results have so far been encouraging (Bentivogli et al. 2016; Toral, Wieling and Way 2018), though inconsistent (Castilho et al. 2017), when looking at NMT post-editing and human evaluation of NMT content. The inconsistencies stem precisely from SMT engine customizations, language combinations, domains and the way NMT processes certain segments, which might sound fluent but contain serious errors (omissions, additions and mistranslations) that are less predictable than in SMT. Therefore, further collaboration between academia and industry will be needed to gain more knowledge of SMT versus NMT performance, NMT error typology, NMT treatment of long sentences and terminology, NMT integration in CAT tools, NMT for low-resource languages and NMT acceptability to translators as well as end users, to mention just some areas of special interest. Far from the hype of perfect MT quality,25 evidence to date shows that human intervention is still very much needed to deliver human-quality translations using MT.

Funding

This article was partially written under the Edge Research Fellowship programme, which has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under Marie Skłodowska-Curie grant agreement No. 713567, and by the ADAPT Centre for Digital Content Technology, funded under the SFI Research Centres Programme (Grant 13/RC/2106) and co-funded under the European Regional Development Fund.

Notes

1 https://www.acrolinx.com/
2 www.translationautomation.com/joomla/
3 Localization Industry Standards Association (LISA). http://dssresources.com/news/1558.php
4 Society of Automotive Engineers Task Force on Translation Quality Metric. http://www.sae.org/standardsdev/j2450p1.htm
5 EN-15038 European quality standard. http://qualitystandard.bs.en-15038.com
6 Dynamic Quality Framework (TAUS). https://dqf.taus.net/
7 Multidimensional Quality Metrics. http://www.qt21.eu/launchpad/content/multidimensional-quality-metrics
8 http://camstudio.org/
9 https://camtasia-studio.softonic.com/
10 https://www.flashbackrecorder.com/
11 https://sites.google.com/site/centretranslationinnovation/translog-ii
12 Center for Research and Innovation in Translation and Translation Technology.
13 https://www.tobiipro.com/product-listing/tobii-pro-studio/
14 http://www.dedoose.com/
15 http://www.nuance.es/dragon/index.htm
16 http://www.qsrinternational.com/nvivo-spanish
17 http://www.maxqda.com/
18 http://atlasti.com/
19 https://www.surveymonkey.com/
20 http://www.sdl.com/software-and-services/translation-software/sdl-trados-studio/
21 https://www.memoq.com/en/
22 https://www.memsource.com/
23 https://lilt.com/
24 https://slator.com/
25 https://blogs.microsoft.com/ai/machine-translation-news-test-set-human-parity/?wt.mc_id=74788-mcr-tw

References

Aikawa, T., L. Schwartz, R. King, M. Corston-Oliver and C. Lozano (2007), ‘Impact of Controlled Language on Translation Quality and Post-editing in a Statistical Machine Translation Environment’, in Proceedings of the MT Summit XI, 1–7, Copenhagen.
Allen, J. (2003), ‘Post-editing’, in H. Somers (ed.), Computers and Translation: A Translator’s Guide, 297–317, Amsterdam: John Benjamins.
Alves, F., A. Koglin, B. Mesa-Lao, M. G. Martínez, N. B. de Lima Fonseca, A. de Melo Sá, J. L. Gonçalves, K. Sarto Szpak, K. Sekino and M. Aquino (2016), ‘Analysing the Impact of Interactive Machine Translation on Post-editing Effort’, in M. Carl, S. Bangalore and M. Schaeffer (eds), New Directions in Empirical Translation Process Research, 77–94, Cham: Springer.
Aranberri, N., G. Labaka, A. Diaz de Ilarraza and K. Sarasola (2014), ‘Comparison of Post-editing Productivity Between Professional Translators and Lay Users’, in S. O’Brien, M. Simard and L. Specia (eds), Proceedings of the 3rd Workshop on Post-editing Technology and Practice, 20–33, Vancouver: AMTA.
Arthern, P. J. (1979), ‘Machine Translation and Computerized Terminology Systems: A Translator’s Viewpoint’, in B. M. Snell (ed.), Translating and the Computer: Proceedings of a Seminar, 77–108, London: North-Holland.
Aziz, W., S. Castilho and L. Specia (2012), ‘PET: A Tool for Post-editing and Assessing Machine Translation’, in Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC’12), 3982–7, Istanbul: European Language Resources Association (ELRA).
Beaton, A. and G. Contreras (2010), ‘Sharing the Continental Airlines and SDL Post-editing Experience’, in Proceedings of the 9th Conference of the AMTA, Denver: AMTA.
Bentivogli, L., A. Bisazza, M. Cettolo and M. Federico (2016), ‘Neural Versus Phrase-based Machine Translation Quality: A Case Study’, arXiv preprint arXiv:1608.04631.
Beregovaya, O. and A. Yanishevsky (2010), ‘PROMT at PayPal: Enterprise-scale MT Deployment for Financial Industry Content’, in Proceedings of the 9th Conference of the AMTA, Denver: AMTA.
Bier, K. and M. Herranz (2011), ‘MT Experience at Sybase’, Localization World Conference, Barcelona.
Bouillon, P., L. Gaspar, J. Gerlach, V. Porro Rodriguez and J. Roturier (2014), ‘Pre-editing by Forum Users: A Case Study’, in Workshop (W2) on Controlled Natural Language Simplifying Language Use, Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), 3–10, Reykjavik: European Language Resources Association (ELRA).
Carl, M. (2012), ‘Translog-II: A Program for Recording User Activity Data for Empirical Reading and Writing Research’, in Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC’12), 4108–12, Istanbul: European Language Resources Association (ELRA).
Carl, M., B. Dragsted, J. Elming, D. Hardt and A. L. Jakobsen (2011), ‘The Process of Post-editing: A Pilot Study’, in Proceedings of the 8th International NLPSC Workshop. Special Theme: Human-Machine Interaction in Translation, Copenhagen Studies in Language 41, 131–42, Copenhagen.
Casanellas, L. and L. Marg (2014), ‘Assumptions, Expectations and Outliers in Post-editing’, in Proceedings of the 17th Annual Conference of the EAMT, 93–7, Dubrovnik: EAMT.
Castilho, S. (2016), ‘Measuring Acceptability of Machine Translated Enterprise Content’, Doctoral thesis, Dublin City University, Dublin.
Castilho, S., S. O’Brien, F. Alves and M. O’Brien (2014), ‘Does Post-editing Increase Usability? A Study with Brazilian Portuguese as Target Language’, in Proceedings of the 17th Conference of the EAMT, 183–90, Dubrovnik: EAMT.
Castilho, S., J. Moorkens, F. Gaspari, I. Calixto, J. Tinsley and A. Way (2017), ‘Is Neural Machine Translation the New State of the Art?’, The Prague Bulletin of Mathematical Linguistics, 108 (1): 109–20.

Chatterjee, R., M. Weller, M. Negri and M. Turchi (2015), ‘Exploring the Planet of the Apes: A Comparative Study of State-of-the-Art Methods for MT Automatic Post-editing’, in Proceedings of the 53rd Annual Meeting of the ACL and the 7th International Joint Conference on NLP, Vol. 2, 156–61, Beijing: ACL.
Daems, J., S. Vandepitte, R. Hartsuiker and L. Macken (2015), ‘The Impact of Machine Translation Error Types on Post-editing Effort Indicators’, in S. O’Brien, M. Simard and L. Specia (eds), Proceedings of the 4th Workshop on Post-editing Technology and Practice (WPTP4), 31–45, Miami: AMTA.
Daems, J., S. Vandepitte, R. J. Hartsuiker and L. Macken (2017), ‘Identifying the Machine Translation Error Types with the Greatest Impact on Post-editing Effort’, Frontiers in Psychology, 8: 1282. Available online: https://www.frontiersin.org/articles/10.3389/fpsyg.2017.01282/full (accessed 21 September 2018).
De Almeida, G. (2013), ‘Translating the Post-editor: An Investigation of Post-editing Changes and Correlations with Professional Experience Across Two Romance Languages’, Doctoral thesis, Dublin City University, Dublin.
De Almeida, G. and S. O’Brien (2010), ‘Analysing Post-editing Performance: Correlations with Years of Translation Experience’, in Proceedings of the 14th Annual Conference of the EAMT, 27–8, St. Raphaël: EAMT.
De Palma, D. A. (2013), ‘Post-edited Machine Translation Defined’, Common Sense Advisory. Available online: http://www.commonsenseadvisory.com/AbstractView/tabid/74/ArticleID/5499/Title/Post-EditedMachineTranslationDefined/Default.aspx (accessed 21 September 2018).
De Sousa, S. C., W. Aziz and L. Specia (2011), ‘Assessing the Post-editing Effort for Automatic and Semi-automatic Translations of DVD Subtitles’, in G. Angelova, K. Bontcheva, R. Mitkov and N. Nikolov (eds), Proceedings of RANLP, 97–103, Hissar.
De Sutter, N. and I. Depraetere (2012), ‘Post-edited Translation Quality, Edit Distance and Fluency Scores: Report on a Case Study’, presentation at Journée d’études Traduction et qualité: Méthodologies en matière d’assurance qualité, Université Lille 3, Sciences humaines et sociales, Lille.
Denkowski, M. and A. Lavie (2012), ‘TransCenter: Web-based Translation Research Suite’, in AMTA 2012 Workshop on Post-editing Technology and Practice Demo Session, San Diego: AMTA.
Doherty, S. and D. Kenny (2014), ‘The Design and Evaluation of a Statistical Machine Translation Syllabus for Translation Students’, The Interpreter and Translator Trainer, 8 (2): 295–315.
Doherty, S. and J. Moorkens (2013), ‘Investigating the Experience of Translation Technology Labs: Pedagogical Implications’, Journal of Specialised Translation, 19: 122–36.
Doherty, S. and S. O’Brien (2009), ‘Can MT Output be Evaluated through Eye Tracking?’, in Proceedings of the MT Summit XII, 214–21, Ottawa: AMTA.

Doherty, S. and S. O’Brien (2012), ‘A User-based Usability Assessment of Raw Machine Translated Technical Instructions’, in Proceedings of the 10th Biennial Conference of the AMTA, San Diego: AMTA.
Doherty, S. and S. O’Brien (2014), ‘Assessing the Usability of Raw Machine Translated Output: A User-centered Study Using Eye Tracking’, International Journal of Human-Computer Interaction, 30 (1): 40–51.
Doherty, S., D. Kenny and A. Way (2012), ‘Taking Statistical Machine Translation to the Student Translator’, in Proceedings of the 10th Biennial Conference of the AMTA, San Diego: AMTA.
Doherty, S., S. O’Brien and M. Carl (2010), ‘Eye Tracking as an MT Evaluation Technique’, Machine Translation, 24 (1): 1–13.
Farrús, M., M. R. Costa-jussà, J. B. Mariño, M. Poch, A. Hernández, C. Henríquez and J. A. Fonollosa (2011), ‘Overcoming Statistical Machine Translation Limitations: Error Analysis and Proposed Solutions for the Catalan–Spanish Language Pair’, Language Resources and Evaluation, 45 (2): 181–208.
Federico, M., A. Cattelan and M. Trombetti (2012), ‘Measuring User Productivity in Machine Translation Enhanced Computer Assisted Translation’, in Proceedings of the 10th Conference of the AMTA, 44–56, Madison: AMTA.
Federmann, C. (2012), ‘Appraise: An Open-source Toolkit for Manual Evaluation of MT Output’, The Prague Bulletin of Mathematical Linguistics, 98: 25–35.
Fiederer, R. and S. O’Brien (2009), ‘Quality and Machine Translation: A Realistic Objective?’, The Journal of Specialised Translation, 11: 52–74.
Flournoy, R. and C. Duran (2009), ‘Machine Translation and Document Localization at Adobe: From Pilot to Production’, in Proceedings of the 12th MT Summit, Ottawa.
García, I. (2010), ‘Is Machine Translation Ready Yet?’, Target, 22 (1): 7–21.
García, I. (2011), ‘Translating by Post-editing: Is It the Way Forward?’, Machine Translation, 25 (3): 217–37.
Gaspari, F., A. Toral, S. K. Naskar, D. Groves and A. Way (2014), ‘Perception vs Reality: Measuring Machine Translation Post-editing Productivity’, in S. O’Brien, M. Simard and L. Specia (eds), Proceedings of the 3rd Workshop on Post-editing Technology and Practice, 60–72, Vancouver: AMTA.
Gerlach, J. (2015), ‘Improving Statistical Machine Translation of Informal Language: A Rule-based Pre-editing Approach for French Forums’, Doctoral thesis, University of Geneva, Geneva.
Gerlach, J., V. Porro Rodriguez, P. Bouillon and S. Lehmann (2013), ‘Combining Pre-editing and Post-editing to Improve SMT of User-generated Content’, in S. O’Brien, M. Simard and L. Specia (eds), Proceedings of the 2nd Workshop on Post-editing Technology and Practice, 45–53, Nice: EAMT.
Görög, A. (2014), ‘Quality Evaluation Today: The Dynamic Quality Framework’, in Proceedings of Translating and the Computer 36, 155–64, Geneva: Editions Tradulex.

Green, S., J. Heer and C. D. Manning (2013), ‘The Efficacy of Human Post-editing for Language Translation’, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 439–48, Paris: ACM.
Guerberof, A. (2009), ‘Productivity and Quality in MT Post-editing’, in MT Summit XII Workshop: Beyond Translation Memories: New Tools for Translators, Ottawa: EAMT.
Guerberof, A. (2012), ‘Productivity and Quality in the Post-editing of Outputs from Translation Memories and Machine Translation’, Doctoral thesis, Universitat Rovira i Virgili, Tarragona.
Guerberof, A. (2013), ‘What Do Professional Translators Think about Post-editing?’, The Journal of Specialised Translation, 19: 75–95.
He, Y., Y. Ma, J. Van Genabith and A. Way (2010), ‘Bridging SMT and TM with Translation Recommendation’, in Proceedings of the 48th Annual Meeting of the ACL, 622–30, Uppsala: ACL.
Hu, C., B. B. Bederson and P. Resnik (2010), ‘Translation by Iterative Collaboration Between Monolingual Users’, in Proceedings of Graphics Interface, 39–46, Ottawa: Canadian Information Processing Society.
Huang, F., J. M. Xu, A. Ittycheriah and S. Roukos (2014), ‘Improving MT Post-editing Productivity with Adaptive Confidence Estimation for Document-specific Translation Model’, Machine Translation, 28 (3–4): 263–80.
Jääskeläinen, R. and S. Tirkkonen-Condit (1991), ‘Automatised Processes in Professional vs. Non-professional Translation: A Think-aloud Protocol Study’, in Empirical Research in Translation and Intercultural Studies, 89–109, Tübingen: Narr.
Jakobsen, A. L. (2003), ‘Effects of Think-aloud on Translation Speed, Revision and Segmentation’, in F. Alves (ed.), Triangulating Translation: Perspectives in Process-oriented Research, 69–95, Amsterdam: John Benjamins.
Kay, M. (1980/97), ‘The Proper Place of Men and Machines in Language Translation’, Research Report CSL-80-11, Xerox Palo Alto Research Center. Reprinted in Machine Translation, 12 (1997): 3–23.
Koehn, P. (2010), ‘Enabling Monolingual Translators: Post-editing vs. Options’, in Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the ACL, 537–45, Los Angeles: ACL.
Koponen, M. (2015), ‘How to Teach Machine Translation Post-editing? Experiences from a Post-editing Course’, in S. O’Brien, M. Simard and L. Specia (eds), Proceedings of the 4th Workshop on Post-editing Technology and Practice (WPTP4), 2–15, Miami: AMTA.
Koponen, M., W. Aziz, L. Ramos and L. Specia (2012), ‘Post-editing Time as a Measure of Cognitive Effort’, in S. O’Brien, M. Simard and L. Specia (eds), Proceedings of the AMTA 2012 Workshop on Post-editing Technology and Practice, 11–20, San Diego: AMTA.
Krings, H. P. (2001), Repairing Texts: Empirical Investigations of Machine Translation Post-editing Processes, G. S. Koby (ed.), Kent, OH: Kent State University Press.

Kussmaul, P. and S. Tirkkonen-Condit (1995), ‘Think-aloud Protocol Analysis in Translation Studies’, TTR, 8: 177–99.
Lacruz, I. and G. M. Shreve (2014), ‘Pauses and Cognitive Effort in Post-editing’, in S. O’Brien, L. W. Balling, M. Simard, L. Specia and M. Carl (eds), Post-editing of Machine Translation: Processes and Applications, 246–72, Newcastle upon Tyne: Cambridge Scholars Publishing.
Lacruz, I., G. M. Shreve and E. Angelone (2012), ‘Average Pause Ratio as an Indicator of Cognitive Effort in Post-editing: A Case Study’, in S. O’Brien, M. Simard and L. Specia (eds), Proceedings of the AMTA 2012 Workshop on Post-editing Technology and Practice, 21–30, San Diego: AMTA.
Läubli, S., M. Fishel, G. Massey, M. Ehrensberger-Dow and M. Volk (2013), ‘Assessing Post-editing Efficiency in a Realistic Translation Environment’, in Proceedings of the 2nd Workshop on Post-editing Technology and Practice, 83–91, Nice: EAMT.
Laurenzi, A., M. Brownstein, A. M. Turner and K. Kirchhoff (2013), ‘Integrated Post-editing and Translation Management for Lay User Communities’, in S. O’Brien, M. Simard and L. Specia (eds), Proceedings of the 2nd Workshop on Post-editing Technology and Practice, 27–34, Nice: EAMT.
Laurian, A. M. (1984), ‘Machine Translation: What Type of Post-editing on What Type of Documents for What Type of Users’, in Proceedings of the 10th International Conference on Computational Linguistics and 22nd Annual Meeting of the ACL, 236–8, Stanford: ACL.
Loffler-Laurian, A. M. (1983), ‘Pour une typologie des erreurs dans la traduction automatique’ [Towards a Typology of Errors in Machine Translation], Multilingua, 2: 65–78.
Lommel, A. R. (2017), ‘Neural MT: Sorting Fact from Fiction’, Common Sense Advisory. Available online: http://www.commonsenseadvisory.com/AbstractView/tabid/74/ArticleID/37893/Title/NeuralMTSortingFactfromFiction/Default.aspx (accessed 21 September 2018).
Lommel, A. R. and D. A. De Palma (2016a), ‘Post-editing Goes Mainstream’, Common Sense Advisory. Available online: http://www.commonsenseadvisory.com/Portals/_default/Knowledgebase/ArticleImages/1605_R_IP_Postediting_goes_mainstream-extract.pdf (accessed 21 September 2018).
Lommel, A. R. and D. A. De Palma (2016b), ‘TechStack: Machine Translation’, Common Sense Advisory. Available online: http://www.commonsenseadvisory.com/Portals/_default/Knowledgebase/ArticleImages/1611_QT_Tech_Machine_Translation-preview.pdf (accessed 21 September 2018).
Lörscher, W. (1991), ‘Thinking-aloud as a Method for Collecting Data on Translation Processes’, in Empirical Research in Translation and Intercultural Studies: Selected Papers of the TRANSIF Seminar, Savonlinna 1988, 67–77, Tübingen: G. Narr.
Melby, A. K. (1982), ‘Multi-level Translation Aids in a Distributed System’, in Proceedings of the 9th Conference on Computational Linguistics, 215–20, Prague: Academia Praha.

Mellinger, C. D. (2017), ‘Translators and Machine Translation: Knowledge and Skills Gaps in Translator Pedagogy’, The Interpreter and Translator Trainer, 1–14.
Mesa-Lao, B. (2014), ‘Speech-enabled Computer-aided Translation: A Satisfaction Survey with Post-editor Trainees’, in Proceedings of the Workshop on Humans and Computer-assisted Translation, 99–103, Gothenburg: ACL.
Mitchell, L., S. O’Brien and J. Roturier (2014), ‘Quality Evaluation in Community Post-editing’, Machine Translation, 28 (3–4): 237–62.
Mitchell, L., J. Roturier and S. O’Brien (2013), ‘Community-based Post-editing of Machine-translated Content: Monolingual vs. Bilingual’, in S. O’Brien, M. Simard and L. Specia (eds), Proceedings of the 2nd Workshop on Post-editing Technology and Practice, 35–45, Nice: EAMT.
Miyata, R., A. Hartley, K. Kageura and C. Paris (2017), ‘Evaluating the Usability of a Controlled Language Authoring Assistant’, The Prague Bulletin of Mathematical Linguistics, 108 (1): 147–58.
Moorkens, J. and S. O’Brien (2013), ‘User Attitudes to the Post-editing Interface’, in S. O’Brien, M. Simard and L. Specia (eds), Proceedings of the 2nd Workshop on Post-editing Technology and Practice, 19–25, Nice: EAMT.
Moorkens, J. and S. O’Brien (2015), ‘Post-editing Evaluations: Trade-offs Between Novice and Professional Participants’, in Proceedings of EAMT, 75–81, Antalya: EAMT.
Moran, J. (2012), ‘Experiences Instrumenting an Open-source CAT Tool to Record Translator Interactions’, in L. W. Balling, M. Carl and A. L. Jakobsen (eds), Expertise in Translation and Post-editing: Research and Application, Copenhagen: Copenhagen Business School.
Nitzke, J. (2016), ‘Monolingual Post-editing: An Exploratory Study on Research Behaviour and Target Text Quality’, Eyetracking and Applied Linguistics, 2: 83–108, Berlin: Language Science Press.
O’Brien, S. (2002), ‘Teaching Post-editing: A Proposal for Course Content’, in Proceedings of the 6th Annual EAMT Conference, Workshop Teaching Machine Translation, 99–106, Manchester: EAMT.
O’Brien, S. (2003), ‘Controlling Controlled English: An Analysis of Several Controlled Language Rule Sets’, in Proceedings of EAMT-CLAW 2003, 105–14, Dublin: EAMT.
O’Brien, S. (2005), ‘Methodologies for Measuring the Correlations Between Post-editing Effort and Machine Translatability’, Machine Translation, 19 (1): 37–58.
O’Brien, S. (2006), ‘Eye-tracking and Translation Memory Matches’, Perspectives: Studies in Translatology, 14 (3): 185–205.
O’Brien, S. (2011), ‘Towards Predicting Post-editing Productivity’, Machine Translation, 25 (3): 197–215.
O’Brien, S. (2017), ‘Machine Translation and Cognition’, in J. W. Schwieter and A. Ferreira (eds), The Handbook of Translation and Cognition, 311–31, Wiley-Blackwell.

O’Brien, S. and S. Castilho (2016), ‘Evaluating the Impact of Light Post-editing on Usability’, in Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), 310–16, Portorož: European Language Resources Association (ELRA).
O’Brien, S. and M. Simard (2014), ‘Introduction to Special Issue on Post-editing’, Machine Translation, 28 (3–4).
O’Brien, S., J. Moorkens and J. Vreeke (2014), ‘Kanjingo – A Mobile App for Post-editing’, in S. O’Brien, M. Simard and L. Specia (eds), Proceedings of the 3rd Workshop on Post-editing Technology and Practice, 125–7, Vancouver: AMTA.
O’Curran, E. (2014), ‘Translation Quality in Post-edited Versus Human-translated Segments: A Case Study’, in S. O’Brien, M. Simard and L. Specia (eds), Proceedings of the 3rd Workshop on Post-editing Technology and Practice, 113–18, Vancouver: AMTA.
Ortiz Boix, C. and A. Matamala (2016), ‘Implementing Machine Translation and Post-editing to the Translation of Wildlife Documentaries through Voice-over and Off-screen Dubbing’, Doctoral thesis, Universitat Autònoma de Barcelona, Barcelona.
Ortiz-Martínez, D., G. Sanchís-Trilles, F. Casacuberta, V. Alabau, E. Vidal, J. M. Benedí and J. González (2012), ‘The CASMACAT Project: The Next Generation Translator’s Workbench’, in Proceedings of the 7th Jornadas en Tecnología del Habla and the 3rd Iberian SLTech Workshop (IberSPEECH), 326–34.
Paladini, P. (2011), ‘Translator’s Productivity Increase at CA Technologies’, Localization World Conference, Barcelona.
Parra Escartín, C. and M. Arcedillo (2015a), ‘A Fuzzier Approach to Machine Translation Evaluation: A Pilot Study on Post-editing Productivity and Automated Metrics in Commercial Settings’, in Proceedings of the 4th Workshop on Hybrid Approaches to Translation (HyTra@ACL), 40–5, Beijing: ACL.
Parra Escartín, C. and M. Arcedillo (2015b), ‘Living on the Edge: Productivity Gain Thresholds in Machine Translation Evaluation Metrics’, in S. O’Brien, M. Simard and L. Specia (eds), Proceedings of the 4th Workshop on Post-editing Technology and Practice (WPTP4), 46, Miami: AMTA.
Pérez-Ortiz, J. A., D. Torregrosa and M. L. Forcada (2014), ‘Black-box Integration of Heterogeneous Bilingual Resources into an Interactive Translation System’, in Proceedings of the Workshop on Humans and Computer-assisted Translation, 57–65, Gothenburg: ACL.
Plitt, M. and F. Masselot (2010), ‘A Productivity Test of Statistical Machine Translation Post-editing in a Typical Localisation Context’, The Prague Bulletin of Mathematical Linguistics, 93: 7–16.
Popović, M., A. R. Lommel, A. Burchardt, E. Avramidis and H. Uszkoreit (2014), ‘Relations Between Different Types of Post-editing Operations, Cognitive Effort and Temporal Effort’, in Proceedings of the 17th Annual Conference of the EAMT, 191–8, Dubrovnik: EAMT.
Pym, A. (2013), ‘Translation Skill-sets in a Machine-translation Age’, Meta, 58 (3): 487–503.

Rico, C. and E. Torrejón (2012), ‘Skills and Profile of the New Role of the Translator as MT Post-editor’, Revista Tradumàtica: tecnologies de la traducció, 10 (Postedició, canvi de paradigma?): 166–78.
Roturier, J., L. Mitchell, D. Silva and B. B. Park (2013), ‘The ACCEPT Post-editing Environment: A Flexible and Customisable Online Tool to Perform and Analyse Machine Translation Post-editing’, in S. O’Brien, M. Simard and L. Specia (eds), Proceedings of the 2nd Workshop on Post-editing Technology and Practice, 119–28, Nice: EAMT.
Roukos, S. R., F. M. Xu, J. P. Nesta, S. M. Corriá, A. Chapman and H. S. Vohra (2011), ‘The Value of Post-editing: IBM Case Study’, Localization World Conference, Barcelona.
Saldanha, G. and S. O’Brien (2013), Research Methodologies in Translation Studies, London: Routledge.
Sanchís-Trilles, G., V. Alabau, C. Buck, M. Carl, F. Casacuberta, M. García-Martínez and L. A. Leiva (2014), ‘Interactive Translation Prediction Versus Conventional Post-editing in Practice: A Study with the CasMaCat Workbench’, Machine Translation, 28 (3–4): 217–35.
Schäffer, F. (2003), ‘MT Post-editing: How to Shed Light on the “Unknown Task”. Experiences at SAP’, in Controlled Language Translation, EAMT-CLAW-03, 133–40, Dublin: EAMT.
Séguinot, C. (1996), ‘Some Thoughts about Think-aloud Protocols’, Target, 8: 75–95.
Senez, D. (1998), ‘The Machine Translation Help Desk and the Post-editing Service’, Terminologie et Traduction, 1: 289–95.
Specia, L. (2011), ‘Exploiting Objective Annotations for Measuring Translation Post-editing Effort’, in Proceedings of the 15th Annual EAMT Conference, 73–80, Leuven: EAMT.
Specia, L., N. Cancedda, M. Dymetman, M. Turchi and N. Cristianini (2009a), ‘Estimating the Sentence-level Quality of Machine Translation Systems’, in Proceedings of the 13th Annual Conference of the EAMT, 28–35, Barcelona: EAMT.
Specia, L., C. Saunders, M. Turchi, Z. Wang and J. Shawe-Taylor (2009b), ‘Improving the Confidence of Machine Translation Quality Estimates’, in Proceedings of the 12th MT Summit, 136–43, Ottawa: AMTA.
Tatsumi, M. (2010), ‘Post-editing Machine Translated Text in a Commercial Setting: Observation and Statistical Analysis’, Doctoral thesis, Dublin City University, Dublin.
Teixeira, C. (2011), ‘Knowledge of Provenance and its Effects on Translation Performance in an Integrated TM/MT Environment’, in Proceedings of the 8th International NLPCS Workshop – Special Theme: Human-Machine Interaction in Translation, Copenhagen Studies in Language 41, 107–18, Copenhagen: Samfundslitteratur.

Teixeira, C. (2014), ‘Perceived vs. Measured Performance in the Post-editing of Suggestions from Machine Translation and Translation Memories’, in S. O’Brien, M. Simard and L. Specia (eds), Proceedings of the 3rd Workshop on Post-editing Technology and Practice, 45–59, Vancouver: AMTA.
Temnikova, I. P. (2010), ‘Cognitive Evaluation Approach for a Controlled Language Post-editing Experiment’, in Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC’10), 3485–90, Malta: European Language Resources Association (ELRA).
Temnikova, I. P., W. Zaghouani, S. Vogel and N. Habash (2016), ‘Applying the Cognitive Machine Translation Evaluation Approach to Arabic’, in Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), 3644–51, Portorož: European Language Resources Association (ELRA).
Toral, A., M. Wieling and A. Way (2018), ‘Post-editing Effort of a Novel with Statistical and Neural Machine Translation’, Frontiers in Digital Humanities, 5: 9.
Turchi, M., M. Negri and M. Federico (2015), ‘MT Quality Estimation for Computer-assisted Translation: Does it Really Help?’, in Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, 530–5, Beijing: ACL.
Underwood, N. L., B. Mesa-Lao, M. García-Martínez, M. Carl, V. Alabau, J. González-Rubio and F. Casacuberta (2014), ‘Evaluating the Effects of Interactivity in a Post-editing Workbench’, in Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), 553–9, Reykjavik: European Language Resources Association (ELRA).
Vasconcellos, M. (1986), ‘Post-editing on Screen: Machine Translation from Spanish into English’, in C. Picken (ed.), Translating and the Computer 8: A Profession on the Move, 133–46, London: Aslib.
Vasconcellos, M. (1989), ‘Cohesion and Coherence in the Presentation of Machine Translation Products’, in J. E. Alatis (ed.), Georgetown University Round Table on Languages and Linguistics, 90–105, Washington, DC: Georgetown University Press.
Vasconcellos, M. (1992), ‘What Do We Want from MT?’, Machine Translation, 7 (4): 293–301.
Vasconcellos, M. (1993), ‘Machine Translation: Translating the Languages of the World on a Desktop Computer Comes of Age’, BYTE, 153–64, McGraw-Hill.
Vasconcellos, M. and M. León (1985), ‘SPANAM and ENGSPAN: Machine Translation at the Pan American Health Organization’, Computational Linguistics, 11 (2–3): 122–36.
Vieira, L. N. (2014), ‘Indices of Cognitive Effort in Machine Translation Post-editing’, Machine Translation, 28 (3–4): 187–216.
Vieira, L. N. (2016), ‘Cognitive Effort in Post-editing of Machine Translation: Evidence from Eye Movements, Subjective Ratings, and Think-aloud Protocols’, Doctoral thesis, Newcastle University, Newcastle.

360

The Bloomsbury Companion to Language Industry Studies

Vilar, D., J. Xu, L. F. d’Haro and H. Ney (2006), ‘Error Analysis of Statistical Machine Translation Output’, in Proceedings of Fifth Edition European Language Resources Association, 697–702, Genoa: European Language Resources Association (ELRA). Wagner, E. (1985), ‘Rapid Post-editing of Systrans’, in V. Lawson (ed.), Translating and the Computer 5: Tools for the Trade, 199–213, London: Aslib. Wagner, E. (1987), ‘Post-editing: Practical Considerations’, in C. Picken (ed.), ITI Conference I: The Business of Translating and Interpreting, 71–8, London: Aslib. Way, A. (2013), ‘Traditional and Emerging Use-cases for Machine Translation’, in Proceedings of Translating and the Computer 35, 12, London: Aslib. Zhechev, V. (2012), ‘Machine Translation Infrastructure and Post-editing Performance at Autodesk’, in S. O’Brien, M. Simard and L. Specia (eds), Proceedings AMTA 2012 Workshop on Post-editing Technology and Practice (WPTP 2012), 87–96, San Diego: AMTA.

16

Advances in interactive translation technology

Michael Carl and Emmanuel Planas

1. Introduction

The history of interactive and computer-aided translation (CAT) is strongly interconnected with the history of machine translation (MT).1 In this section, we will discuss important steps in the development of the term ‘interactive translation’.

For Hutchins and Somers (1992: 5), the antecedents of MT are mechanical dictionaries, as suggested by Descartes and Leibniz, and produced by Cave Beck, Athanasius Kircher and Johann Becher in the seventeenth century. In their work, Introduction to Machine Translation, Hutchins and Somers report that in 1933, long before the computer was invented,2 the French-Armenian Georges Artsrouni and the Russian Petr Smirnov-Troyanskii worked on two different CAT devices that can be seen as predecessors of MT. Artsrouni proposed to store a dictionary on a paper-tape-based, general-purpose storage device that would be queried automatically to show word translations that could be read or reused by a translator. Smirnov-Troyanskii suggested a multi-step MT system in which the machine and a human would collaborate in a form of interactive translation: a human operator would first transform each source word into its basic form using a specific editor. The machine would then translate each source text (ST) word into a basic word form of the target language, and a reviser would generate the final sentences of the target text (TT).

Since then, many ideas, techniques and computer programs have been suggested and developed that allow for some form of interactive translation. Owing to the inherent difficulties in achieving fully automatic high-quality MT (FAHQT, cf. Bar-Hillel 1960), the ALPAC report (ALPAC 1966) already recommended the development of machine aids for translators, with the assumption that this would lower costs while ensuring the required translation quality.

The Bloomsbury Companion to Language Industry Studies

The current translation workflow is dominated by the use of computers and CAT tools. Modern translation workstations (TWS) for professional translators include term bases and a database which stores whole sentences or phrases (called segments) together with their translations. TWS include a graphical interface which allows for interactive translation production. This is usually organized as a table, showing the ST and the TT in two parallel windows which can be aligned left/right or top/bottom. It allows the translator to concentrate on text editing, independently of the original document format.3 The results of various surveys indicate that nowadays the majority of translation companies and institutions, and the majority of freelance translators, use translation workstations while translating (see Ehrensberger-Dow et al. 2016). TWS can, thus, be regarded as an established technology which supports professional translation production and which has opened the possibility for further formalization and innovation of the translation process.

When a translator translates a new document in a TWS, the segments are matched against those already present in the translation memory (TM), which can then be reused for informing the new translations. The translator can also update the TM with his or her own new translations. Bowker (2002) uses the term interactive translation in the context of TWS when translators build the TM while working on a translation assignment.

Classic TWS offer the translations of complete sentences or phrases, as retrieved from a database, without recomposition. Rule-based MT (RBMT) and statistical MT (SMT) systems, in contrast, parse sentences into smaller parts. They translate the parts individually and recompose the translated segments into larger textual units. Because smaller parts are reused in different contexts, MT systems achieve broader coverage, with a trade-off in reduced quality.
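The segment-matching step described above is typically implemented as a fuzzy match score derived from edit distance. A minimal sketch in Python (the 0–100 normalization and the function names are illustrative assumptions, not any specific product’s formula):

```python
def edit_distance(a, b):
    # Classic Levenshtein distance via dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def fuzzy_match(new_segment, tm_segment):
    """Return a 0-100 match score, as typically shown in a TWS."""
    longest = max(len(new_segment), len(tm_segment), 1)
    return round(100 * (1 - edit_distance(new_segment, tm_segment) / longest))

def best_tm_match(new_segment, tm):
    # tm: list of (source, target) pairs already stored in the memory.
    return max(((fuzzy_match(new_segment, src), src, tgt) for src, tgt in tm),
               default=(0, None, None))
```

A real TWS would add indexing so that candidate segments are retrieved without scanning the whole memory, but the score itself is usually a length-normalized edit distance of this kind.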
MT systems, thus, do not guarantee high-quality translations on their own, but rather provide broader coverage in a more cost-effective fashion. However, in situations where high-quality translations are required, a human agent must intervene to guarantee the required quality. As we will point out in Section 2, human intervention to improve MT quality can occur in a pre-editing step that disambiguates the ST before it is sent to the MT system, in a post-editing (PE) step that corrects the MT output (PEMT), or interactively together with the MT system. To date, this intervention has in most cases taken the form of PEMT, and the architecture of most TWS lends itself naturally to this extension. There is no doubt that the success of PEMT depends on high-quality MT output and the
availability of suitable TWS. Better MT output leads to quicker PE cycles and increases productivity and efficiency (Specia 2011). The acceptance of PEMT is likely to increase with the emergence of neural MT (NMT) systems, which have recently produced higher-quality translations than SMT systems (e.g. Toral and Sánchez-Cartagena 2017).

Novel forms of interactive MT (IMT) are achieved with data-driven MT systems (SMT or NMT), where translation models can be extended with additional information and user feedback at runtime. Data-driven IMT may become a more sophisticated alternative to PEMT, in which translators can work in a fashion similar to a PEMT set-up or in a situation that resembles from-scratch translation, but where translators additionally receive autocomplete suggestions from an MT engine.

Current state-of-the-art IMT systems consist of an iterative prediction-correction cycle. Each time the user enters or corrects a part of a translation, the IMT system reacts by offering an enhanced hypothesis for a translation continuation, which is expected to be better than the previous one. Thus, while classical PEMT is a sequential process in which a post-editor amends static MT output, IMT is an iterative process in which the human translator may (or may not) accept translation hypotheses which are dynamically generated by the IMT engine. Translation suggestions and translation completions are generated and adjusted ad hoc, based on the translator’s typing behaviour. IMT might thus overcome a weakness of PEMT, which has been shown to prime the translator (e.g. Culo et al. 2014) and to hamper the search for better, more idiomatic translations. IMT systems generally do not limit the translator’s creativity or, if they do, only to a limited extent (Green, Heer and Manning 2013).
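The prediction-correction cycle can be sketched as a loop in which the engine proposes a continuation for the current target prefix and the translator either accepts it or keeps typing. The `suggest` function below is a hypothetical stub standing in for a real IMT engine, and the toy segment is invented:

```python
def suggest(source, target_prefix):
    # Hypothetical stand-in for an IMT engine: given the source segment and
    # the prefix the translator has typed so far, propose a continuation.
    table = {
        "": "the house is red",
        "the ": "house is red",
        "the building ": "is red",
    }
    return table.get(target_prefix, "")

def imt_session(source, keystrokes):
    """Simulate one segment: the translator accepts proposals or keeps typing."""
    prefix = ""
    for action in keystrokes:
        if action == "<ACCEPT>":
            prefix += suggest(source, prefix)   # take the whole proposal
        else:
            prefix += action                    # translator types; engine re-predicts
    return prefix
```

For example, `imt_session("das Haus ist rot", ["the building ", "<ACCEPT>"])` shows the key property of IMT: after the human overrides the prefix, the next proposal is conditioned on that correction rather than on static MT output.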
In addition, due to the real-time interaction, online-learning capacities may enable IMT systems to learn from previous editing steps and avoid making the same errors over and over again, which is one of the issues many post-editors of conventional MT output find annoying.

Due to the complexity of IMT, these systems are usually implemented as services on (remote) servers which provide a browser interface similar to a TWS. This lends itself ideally to collaborative and interactive cloud-based translation. Just like other translation tools (such as Wordbee, Trados, MateCat and MotaWord), IMT systems (such as CASMACAT or Lilt) may be extended with collaborative functionalities offering consecutive text units to different translators. A document can, thus, be split up into many segments, each of which can be translated independently, in a collaborative cloud platform, by many translators at the same time.

IMT and the increasing de-contextualization and fragmentation of translated texts are likely to usher in even more radical changes in the translation process. For instance, a collaborative IMT system with online-learning capacities makes it possible to immediately propagate partial translation solutions to a community working on the same project. This real-time synchronization has the potential to provide greater control and flexibility over the translation flow. Given that translation solutions are gathered in real time on the translation server, the consistency of the translated documents becomes easier to maintain. Translators can request new segments to translate on the fly and dynamically adjust their workload. MotaWord,4 for instance, declares itself to be ‘The World’s Fastest Human Translation Platform’, based on a collaborative cloud platform ‘coordinated efficiently through a smart back end’ in which over 5,600 translators participate.

The potential of this technology depends on the underlying user models, which are still very restricted and take only a small set of supportive functions into account. Advanced IMT models constitute a substantial, innovative step in future translation practice which needs to be grounded in empirical and cognitive-theoretical investigations. One has to cope, for instance, with the fact that translators are used to conceptualizing a text as a sequence of coherent segments with which they interact in a sequential manner, sentence after sentence, word after word (see, for example, Dragsted and Carl 2013). However, IMT combined with crowdsourcing is likely to fundamentally change this practice, which could be labelled ‘interactive non-coherent cloud-based translation’ – but the linguistic, social and psychological impact of this change is still largely unresearched.

We will trace the development of translation technology in more detail in Section 2.
In Sections 3 and 4, we will discuss two fundamental parameters, translation quality and translation effort, which have to be assessed in the light of this new technology.

2. Research focal points

The focal points of research into interactive translation technology are essentially aligned with the four principles by which MT systems may work, as suggested by Bruderer (1978):

1. fully automatic MT (FAMT), which is possible with limited quality for easy texts and restricted domains, but which will not be addressed in this chapter;
2. pre-editing, mainly to reduce ambiguities and to simplify the ST for better MT results;
3. PE, to amend the grammar and style of the MT output;
4. interactive MT, where the translator intervenes in dialogue with the MT system.

We will follow Bruderer’s classification in the following review of interactive translation technologies, in which we will also extend the notion of ‘interactive’ along the lines presented in the introduction.

2.1. Pre-editing

The aim of pre-editing is to improve the ST in order to improve the raw quality of the MT output and to reduce the amount of work required in the PE process. Pre-editing is most typically used when a document will be translated into several languages.

Pre-editing can be achieved by means of controlled languages (CLs), which are usually developed for large companies to restrict the complexity of their documentation, to ensure that a consistent corporate image is promoted and to optimize the document production and reception processes. As CLs restrict the vocabulary, syntax and surface characteristics of texts, they are also instrumental in increasing the quality of MT output, provided the MT system is tuned to the structure of that particular CL. A CL is efficient if its deployment does not entail more effort than the amount of work that goes into PE, or if a text is to be translated into several target languages. CLs are mostly used for the translation of technical documentation and may also increase the match rate in a TM system.

A number of tools exist that can be used to facilitate the production of a text which conforms to a CL. Mitamura and Nyberg (2001) distinguish between prescriptive and proscriptive CLs:

1. prescriptive: specify all structures that are allowed
2. proscriptive: specify all structures that are not allowed
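A proscriptive CL lends itself to a simple automatic checker that flags disallowed structures. The rules below (a passive-voice pattern, an ambiguous connective and a sentence-length limit) are illustrative assumptions rather than rules from any actual industrial CL:

```python
import re

# Illustrative proscriptive rules: each names a structure that is NOT allowed.
RULES = [
    ("passive voice", re.compile(r"\b(is|are|was|were|been|being)\s+\w+ed\b")),
    ("ambiguous 'and/or'", re.compile(r"\band/or\b")),
]
MAX_WORDS = 20  # long sentences tend to degrade MT output

def check_sentence(sentence):
    """Return the list of rule violations for one ST sentence."""
    violations = [name for name, pattern in RULES if pattern.search(sentence)]
    if len(sentence.split()) > MAX_WORDS:
        violations.append("sentence too long")
    return violations
```

For instance, ‘The valve was opened by the operator.’ is flagged for passive voice, whereas the active rewrite ‘Open the valve.’ passes; a pre-editor (or author) would rewrite flagged sentences before they are sent to the MT system.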

2.2. Post-editing (PE)

Today, PEMT is far more common than pre-editing, and first versions were developed very early. Booth (1963) mentions that useful forms of PE already existed in the early 1960s. Words were separated into stems and endings and looked up in two separate dictionaries (presumably to save memory). Stems
represent the meanings, and the endings the grammatical properties. Translations of the stems and endings would then be given to humans, who produced a proper text from this bag of pieces. Today, PEMT is often integrated into a TWS as an add-on to a TM, triggered when TM matches fall below a given threshold. PEMT has thereby become the main point of human-computer interaction in translation.
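The TM/PEMT combination described here amounts to a routing decision: reuse the TM hit when the fuzzy match is good enough, otherwise fall back to MT output for post-editing. A sketch under assumed names and an assumed 70 per cent threshold (actual thresholds are project-specific settings):

```python
FUZZY_THRESHOLD = 70  # per cent; a typical but configurable cut-off

def propose_translation(segment, tm_lookup, mt_translate):
    """Route a segment to the TM or, below threshold, to the MT engine.

    tm_lookup(segment)    -> (score, target) for the best fuzzy match
    mt_translate(segment) -> raw MT output destined for post-editing
    """
    score, target = tm_lookup(segment)
    if score >= FUZZY_THRESHOLD:
        return ("TM", score, target)                  # reuse (and perhaps edit) the TM hit
    return ("MT", score, mt_translate(segment))       # fall back to PEMT
```

The returned label lets the TWS display the provenance of the proposal, which matters because translators treat TM matches and raw MT output differently.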

2.3. Interactive MT

Interactive MT (IMT) systems work – according to Bruderer (1978) – similarly to pre-editing (as of the mid-1970s), in that the human intervenes in the automatized translation process to disambiguate ST words, provide information for words missing from the lexicon, and so on. However, as mentioned in the introduction, the term ‘interaction’ relates to a number of different concepts, which include:

(a) the basic idea of a cooperation between a human and the machine based on electronic (bilingual) lexicons;
(b) the use of special software optimized for translation editing, revision and post-editing (e.g. TWS);
(c) the feedback and integration of translation proposals into a TM;
(d) co-editing of the ST by human and machine;
(e) dynamic co-modification by human and machine of the emerging translation at the cursor position, possibly extended with online learning of the MT models;
(f) crowd-based translation, which can be further decomposed into the simultaneous co-sharing of translation resources by several translators and the simultaneous editing by several translators of the same document; and, finally,
(g) the idea that the consecutive sequences of text units as they appear in the original text can be reshuffled to optimize translation costs and translation production times, or to better support the MT backend.

2.4. Rule-based IMT

Among the earliest IMT systems is that of Bisbey and Kay (1972), who developed the MIND translation system, which enables a ‘monolingual consultant to resolve ambiguities in the translation process’. Melby, Smith and Peterson (1980: 424) also ‘hoped that interaction could be restricted to analysis … [but] it was found that some interaction was also required in transfer’. According to Weaver (1988), the aim of RBMT-based IMT is to help the computer produce better draft translations, which reduce the amount of subsequent post-editing. He mentions several forms of interaction, but the most common is lexical interaction,
by which a human specialist provides an ST parser with additional (semantic) constraints, helping the system to find better translations. Similarly, Blanchon (1991) and Boitet et al. (1995) model the interaction in a dialogue-based pre-translation process on the source side in order to disambiguate the linguistic analysis of the text to be translated. As a disadvantage of early RBMT-based IMT, Melby, Smith and Peterson (1980) mention that the ‘on-line interaction requires specially trained operators’, who may not be available or may be costly to train. To overcome this shortcoming, Blanchon and Boitet, in their dialogue-based IMT system, suggest choosing a disambiguating item from a pre-processed list that is supposed to be understandable also by annotators without a computational background.

In contrast to pre- and post-editing, in IMT there is thus an automatized component both preceding and following human intervention. While the concept of pre- and post-editing has – to a large extent – remained stable over the past decades, IMT technology has changed considerably with changing translation technology over the past thirty years. In RBMT systems, it was assumed that a representation of the ST can be found which allows for an (almost) deterministic generation of the TT. Yet, after several decades of research, this has turned out to be an impossible task for general texts and arbitrary target languages. Many of the decisions to be taken during the generation of the TT are much more difficult to formalize than those taken during the analysis of the ST – and these TT-specific decisions cannot be anticipated during an analysis of the ST.5

2.5. Statistical IMT

With the development of data-driven MT in the 1990s, the focus shifted away from the analysis of the ST to the generation of the TT. Whereas RBMT systems focus on ST language modelling (e.g. through syntactic and/or semantic analysis), SMT systems focus on probabilistic TT language models. Similar to RBMT, SMT systems also decompose complex translation problems into smaller parts.6 However, in contrast to RBMT, SMT systems estimate probability distributions for each component, and these probabilities help in selecting the best option(s) from a vast number of possible translations. Thus, the thousands of decisions necessary for the translation of a single sentence are accumulated and postponed until the very end of the processing chain, when the target string is generated, aiming to find the best solution by weighing the relative importance of each of the decisions.
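This accumulation and weighing of decisions is commonly formalized as a log-linear model: each translation hypothesis is scored as a weighted sum of feature log-probabilities (translation model, language model, etc.), and the decoder keeps the highest-scoring hypothesis. A toy sketch with invented feature names, weights and probabilities:

```python
import math

# Feature weights; in a real system these are tuned on held-out data.
WEIGHTS = {"tm": 1.0, "lm": 0.6}

def loglinear_score(features):
    """Weighted sum of feature log-probabilities for one hypothesis."""
    return sum(WEIGHTS[name] * math.log(p) for name, p in features.items())

def decode(hypotheses):
    """Return the target string of the highest-scoring hypothesis.

    hypotheses -- list of (target_string, {feature_name: probability})
    """
    return max(hypotheses, key=lambda h: loglinear_score(h[1]))[0]
```

A hypothesis with a slightly weaker translation-model score can still win if the language model finds it far more fluent, which is exactly the late, global weighing of decisions described above.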

It is in this final decision process that humans intervene during SMT-based IMT. That is, instead of helping the computer to find a better analysis of the ST, in SMT-based IMT, interaction takes place on the TT side, where humans select appropriate translations. This focus on the TT opened new possibilities for the design of interaction.

TransType (Langlais and Lapalme 2002) is the first SMT-based IMT system in which the interaction takes place within the production of the TT itself. According to Macklovitch (2001: 16), TransType’s role is supportive: it assists the translator by speeding up the entry of the target text and by suggesting translations when the translator is stumped or otherwise runs out of inspiration. However, the translator is always free to accept or to ignore the system’s suggestions. In the worst case, s/he simply continues to key in the intended target text, as though the system were no more than a text editor.

From a translator’s point of view, ‘the system can be conceived as a kind of accelerated typewriter designed to speed up the translator’s work by automatically completing the target unit that s/he has begun to key in’. TransType (Foster, Langlais and Lapalme 2002) and TransType2 (Macklovitch 2006) offer interaction on the TT while the translator edits it. TransType proposes translation text completion at the translator’s cursor. Text prediction is based on several statistical models: a translation prediction model provides hypotheses for the translation continuation, together with a probability; a user model optimizes the user benefit; and a third model takes into consideration the supplementary cognitive load generated by reading the proposal, as well as the ‘random nature of the decision to accept it or not’. The system is tuned with the analysis of the data collected during previous use of the tool by translators.

Foster, Isabelle and Plamondon (1997) show how TransType can predict up to 70 per cent of the characters in the translator’s intended TT. Experiments with TransType (2002) show a 10 per cent gain in translation time. Later experiments with TransType2 (2006) with translators at real translation agencies showed productivity increases from 15 to 55 per cent, depending on the text to be translated.

2.6. Crowd-based IMT

A continuation of the TransType2 experiment was carried out in the context of the CASMACAT7 project (Alabau et al. 2013). The objective of CASMACAT was to devise SMT-based IMT interfaces based on the outcome of cognitive studies of translator behaviour. CASMACAT implemented a browser-based
IMT system of the same name, featuring a number of innovative techniques, such as online learning and various visualization options for IMT. CASMACAT enabled studies that led to valuable insights into translator behaviour within an advanced computer-assisted translation workbench.

In contrast to earlier IMT systems, CASMACAT and Lilt (see below) are browser-based tools with online-learning capacities. This technology opens completely new ways of collaboration and has the potential to change the translation process even more radically than current PEMT technology. Due to the better control that the human translator can exert over the translation process, SMT-based IMT is likely to be more easily accepted than more static forms of PEMT or RBMT-based IMT. A browser-based editor enables cloud computing, where the translation engine runs on a remote server and interaction with the MT system happens through the browser-based editor. Online, real-time learning capacities of the MT engine make it possible to immediately share partial translation solutions with all collaborators on a translation project as they are produced.

IMT thus seems to be a suitable technology for massive online collaboration (MOC) (Désilets 2007). It allows for novel business models, where TTs are fragmented into a large number of translation jobs which can be distributed over a large group of translators, while shared resources and partial translation solutions help ensure the consistency of the translated text segments within a translation project, for example, with respect to terminology. This could be linked with the quality and time constraints of a client, so as to provide just-in-time delivery. IMT thus supports a further move from ‘content being rolled out in a static, sequential manner’ to translated content being ‘integrated into a dynamic system of ubiquitous delivery’ (Cronin 2010: 498).
The online, real-time learning capacities of MT systems now open the possibility for completely new approaches to translation production, in which a human and an MT system collaborate in an interactive process on an unprecedented scale. The MT system produces an initial translation of a segment, which is validated or corrected by a human translator. Every correction is sent back to the MT server in real time and is thereby immediately taken into account in the enhanced suggestions for successive translations of the same segment and of the rest of the text. Depending on the type of model, word translations, structures, features or their weights can be adjusted to put the online-learning capacities of SMT into effect – and algorithms have also been adapted for online learning in NMT (Peris, Domingo and Casacuberta 2017). It is hoped that, with the human corrections in a feedback loop, the MT system may produce better translation proposals for the remaining part of the text or sentence that has not been validated
yet. In this way, the human translator and machine iterate through the translation in a fashion where the system listens to and learns from the corrections with the aim of increasing the acceptability of the next translation proposals.
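This feedback loop can be sketched as follows; the correction cache below is a deliberately simplified stand-in for real online-learning algorithms, which update model parameters rather than memorizing whole segments:

```python
class OnlineLearningMT:
    """Toy MT wrapper that incorporates human corrections in real time."""

    def __init__(self, base_translate):
        self.base_translate = base_translate  # the underlying MT engine
        self.corrections = {}                 # segment -> validated translation

    def translate(self, segment):
        # A previously corrected segment is never mistranslated again.
        return self.corrections.get(segment) or self.base_translate(segment)

    def feedback(self, segment, post_edited):
        # The post-editor's validated output is sent back to the server.
        self.corrections[segment] = post_edited
```

In a collaborative set-up, the `corrections` store would live on the shared server, so one translator’s validated solution immediately benefits every other collaborator on the project.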

2.7. Non-coherent IMT

Related to the mechanism of online learning is a strategy of active learning and active interaction, in which an MT system may reorder the source-language segments to be translated and post-edited so as to maximize its expected learning effect. Instead of presenting a text in its original consecutive order, the system sorts the segments according to a degree of confidence, so that it can learn most quickly or efficiently from the human corrections. This strategy can be implemented as a combination of active interaction and online learning (Gonzalez-Rubio and Casacuberta 2014).

Pym (2011: 3) suggests that technology ‘disrupts linearity by imposing what Saussure called the paradigmatic axis of language … [i.e. the one] from which items are selected’. According to him, ‘texts become short segments, without narrative progression, and are presented and treated in isolation’ (6). Monti (2012: 793) points out that the ‘process of translation is deeply changed by the use of this new generation of [collaborative] translation technologies’. New technologies lead to novel kinds of text perception and text production where ‘dialogue, not narrative, may become part of a new humanization’ (Pym 2011: 7).

In order to assess the impact of shuffling text segments on the translation process, Báez, Schaeffer and Carl (2017), in a pilot study, find evidence that presenting segments to be post-edited in a mixed-up, non-coherent order does not significantly impact post-editing performance in a negative manner. Based on gaze and key-logging measures, they show that neither the post-editing times nor the gazing patterns indicate a detrimental effect of the mixed-up post-editing mode, as compared to coherent post-editing.
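The confidence-based reordering described above can be sketched as follows. Presenting the least-confident segments first, so that early corrections teach the system the most, is one possible policy in the spirit of active learning; the function names and scores here are illustrative:

```python
def order_for_post_editing(segments, confidence):
    """Present the least-confident segments first to maximize learning.

    segments   -- the ST segments in document order
    confidence -- callable mapping a segment to the engine's confidence in [0, 1]
    """
    return sorted(segments, key=confidence)

def restore_document_order(translated, original_segments):
    # After post-editing, translations are reassembled in the original order.
    position = {seg: i for i, seg in enumerate(original_segments)}
    return sorted(translated, key=lambda pair: position[pair[0]])
```

The second function makes the trade-off explicit: the system chooses the processing order, but the delivered document must still follow the original segment sequence.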

3. Informing research through the industry

Industry’s contribution to research on interactive translation technology is most clearly visible in the iterative process of development, industrial implementation and feedback between researchers and users. This section will therefore concentrate on the ways in which technological applications have fed back into research.

3.1. Term-based interaction

MT systems were first developed in research labs in the 1950s and 1960s without the input of translators, and were successively tested in many industrial environments and national institutions. As MT systems were not delivering good enough quality, translators were asked to ‘help’ the machine, often through PE. From this first contact between human translators and MT systems, new ideas about human translator support emerged.

The first big termbases (TB) targeted at translators’ use were built in the 1960s and early 1970s: Dicautom at the European Coal and Steel Community (ECSC, the precursor of the EC, the European Community) (1963), LEXIS at the German Federal Armed Forces Computer Center (1966) and BTUM (1971, later TERMIUM) at the University of Montreal. These institutions were to introduce some key ideas for the interaction between the translator and the computer. Among them, the ECSC built TB, a precursor of a concordance tool that would help translators find sentences with shared terms (ALPAC 1966). The EC, the follow-up institution of the ECSC, used – for the first time – the term ‘translation memory’. Krollmann (1971) introduced the notion of indexes on existing translated sentences and phrases in the LEXIS system for the translation service of the German Army. As argued by Hutchins and Somers (1992: 5), TB can be considered a kind of interactive translation.

3.2. Dedicated translation editing software

The industry also contributed to the development of CAT tools. Schulz (1971) implemented an editable dictionary system (TEAM) for Siemens. Lippmann (1971), working at IBM in Yorktown Heights, proposed the use of a specific Translation Editor (TE),8 as already alluded to by Smirnov-Troyanskii in 1933 (see the introduction; Arthern 1979). During the 1980s and 1990s, a new market for TWS developed, which led to the development of Canadian-CWARC (1987), Trados TED (1988), Trados Workbench (1991), Star Transit (1992), the ALP-NET TSS system, IBM Translation Manager (1993), Atril Déjà Vu (1994) and Site Eurolang Optimizer (1998 SDL SDLX). Because there was now an interface dedicated to editing translations, interactive modes and strategies could successively be devised.

3.3. Statistical MT

In the 1990s, statistical approaches to MT were developed, and the language industry and big institutions (the United Nations, the Canadian and European
Parliaments) made parallel corpora available for fuelling SMT systems. Together with the availability of sentence, phrase and word alignment techniques and core SMT theory, a new era was launched. Early adopters of SMT engines emerged, for example, Language Weaver (a spin-off from the University of Southern California, founded by Kevin Knight and Daniel Marcu), but many SMT systems remained research prototypes. Twenty years later, the availability of an open and free SMT decoder, Moses (Koehn et al. 2007), enabled the integration of SMT technology and company requirements within several industrial research projects, including EuroMatrix9 and Let’s MT,10 in which several companies were involved, for example, Tilde11 and Lucy Software.12

We can identify two main scenarios for industrial translation. The first scenario encompasses internal translation services, like the very first-generation IBM Russian (RU) to English (EN) MT system for the US Air Force in the 1960s (ALPAC 1966) or the MT@EC system for the European Commission. This scenario involves an MT technology provider – sometimes the organization itself, like Philips’ Rosetta system in the 1990s (Landsbergen 1989) – and the organization that hires internal translators. Here, interaction between the translator and the MT system targets the best output quality and the best interaction experience for the internal translators. Feedback from translators is often central for further technological development and is taken into consideration for interaction improvement.

The other scenario is geared towards a more diverse market and involves a chain of actors: the LSP’s customer, the language service provider (LSP), the MT technology provider (MTTP) and the translators. In this configuration, the MT system is targeted at customer satisfaction, which is evaluated based on cost, time and translation quality.
Translators’ comfort and interaction experience are a secondary target, and their feedback is usually filtered by the LSP and the MTTP. In line with the requirements of the two configurations, the MT research community serving the MTTPs focuses on enhancing translation quality and/or the experience of human-computer interaction.

This development was accelerated in the 1990s through the new use of MT over the internet. The industry took the lead in this technological development. MT servers were made available over the internet, such as AltaVista (1995) using Babel Fish, Google first using Systran (2003), later SMT (2006) and now NMT (2016), Microsoft with Bing (2007) and Asia Online (2007), based on the Moses decoder. The stakes became so high that Google decided to launch its own research team, hiring one of the fathers of SMT, Franz Josef Och.

Advances in Interactive Translation Technology

373

3.4.  Statistical IMT

More recently, IMT systems have been implemented as web interfaces (e.g. CASMACAT, MateCat and Lilt) through browser-based PE interfaces, where the MT system(s) run(s) on a remote server. While CASMACAT is a non-commercial prototype, Lilt13 is the first commercial solution that uses browser-based IMT and online learning. Lilt, a US-based company that aims at ‘Bridging Translation Research and Practice’, provides high-quality web-based CAT with an IMT interface; it is, however, a monolithic solution, and its code is closed-source. Similarly, SDL Trados has announced a new feature of its translation workbench, which they call adaptive MT.14 These tools come along with novel conceptualizations of the translation workflow, which link the new possibilities of IMT with innovative usage of crowdsourcing. In order to fully exploit the potential of crowdsourcing, novel ways need to be found to split up coherent texts into units that can be translated and edited independently by a large number of translators at the same time. TMs, PEMT and, recently, IMT platforms are designed so that translators post-edit entire texts. Due to the increased demand for translation productivity and shorter translation turnaround times, some translation tools (Wordbee, Trados, MateCat) are now being extended with collaborative functionalities offering consecutive text units to different translators.
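The core interaction of such IMT systems, completing the translator’s typed prefix from the system’s current hypotheses, can be illustrated with a toy sketch. Everything below (the function name, the scored-hypothesis list) is an illustrative assumption rather than Lilt’s or CASMACAT’s actual implementation; real systems search a translation lattice and update their models online instead of filtering a fixed n-best list.

```python
def complete_prefix(prefix, hypotheses):
    """Toy IMT completion: given the translator's typed prefix, return
    the suffix of the highest-scoring MT hypothesis compatible with it.

    `hypotheses` is a list of (score, text) pairs; data and scoring
    are illustrative only.
    """
    compatible = [(score, text) for score, text in hypotheses
                  if text.startswith(prefix)]
    if not compatible:
        return ''  # a real system would re-search the translation lattice
    best_text = max(compatible)[1]  # highest score wins
    return best_text[len(prefix):]
```

For instance, with hypotheses `[(0.4, 'the cat sleeps'), (0.6, 'the cat is sleeping')]`, typing the prefix `'the cat is'` rules out the first hypothesis and the sketch proposes the suffix `' sleeping'`.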

3.5.  Crowd-based/non-coherent IMT

Some LSP companies (Unbabel,15 MotaWord) are seeking possibilities to experiment with a more dynamic approach to collaborative translation that segments a document into smaller units so that each post-editor is assigned a small number of segments. It is assumed that fragments of texts can be post-edited out of context and then reassembled into a translation product of sufficient quality. A document is thereby split – potentially – into hundreds of segments,16 which can be translated in a collaborative cloud platform by many more translators. Jiménez-Crespo (2017) argues that collaborative models change the processing of translations and shift the responsibility ‘to the developers and managers in charge of setting up successful workflows and of sustaining motivation through community building’ (Penet 2018).
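The dynamic segmentation described above can be sketched as a greedy grouping of sentences into small, independently assignable units. The limits of three sentences and roughly fifty words follow note 16; the naive sentence splitter and the function itself are illustrative assumptions, not any vendor’s actual algorithm.

```python
import re

def segment_document(text, max_words=50, max_sents=3):
    """Split a document into small, independently assignable segments.

    Greedily groups consecutive sentences until adding another one
    would exceed max_sents sentences or roughly max_words words.
    """
    # Naive sentence splitter: break after ., ! or ? followed by whitespace.
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    segments, current, word_count = [], [], 0
    for sent in sentences:
        n = len(sent.split())
        if current and (len(current) >= max_sents or word_count + n > max_words):
            segments.append(' '.join(current))
            current, word_count = [], 0
        current.append(sent)
        word_count += n
    if current:
        segments.append(' '.join(current))
    return segments
```

Applied to ‘One. Two. Three. Four. Five.’, the sketch returns two segments: the first three sentences and the remaining two.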

374

The Bloomsbury Companion to Language Industry Studies

However, it is unclear how translators cope with a situation in which segments – possibly from different parts of the same document – are presented out of context. While the increased quality of the MT output also increases the chances of successful PE (and IMT) even with a limited co-text, it is unclear how much co-text is actually required for a translator to complete the job and to produce acceptable translations. In a small study, Báez, Schaeffer and Carl (2017) investigate the impact of ‘non-coherent post-editing’ on translation behaviour and find no significant differences as compared to conventional PE. However, more research has to be carried out on different texts and with different translation tools to obtain a more comprehensive picture.

4.  Informing the industry through research

The reason for the introduction of MT, IMT and CAT technologies in general is to increase translation productivity and to lower the cost of translations (Green, Chuang and Schuster 2014), while at the same time ensuring translation quality. However, CAT technologies are only acceptable for a translator if they do not imply degrading working conditions and increased translation effort. There is thus a complex balance between the quality of the final translation product, the translation effort invested by the translator during translation production and the usage of translation aids. What makes this equation difficult to solve is the fact that none of the involved variables is easy to measure or uncontroversial. There are plenty of metrics to assess translation quality, which, however, do not always fit well with a client’s expectations. There are also many ways to measure translation effort, but the underlying cognitive processes involved in translation production are not well understood and depend on ill-defined parameters, such as translation expertise, attitude towards CAT technologies, etc. Finally, the user interfaces of CAT tools are constantly and quickly developing, but translators need training and time to experiment with and get used to each new feature in the GUI (Alabau et al. 2016). In order to shed light on these entangled relations, a substantial amount of research is being invested with the aim of clarifying the relation between translation quality and translation effort during translation and PE. Although we are only at the beginning of developing the basic concepts, measures and methods that are suited to capture the essential parameters in these processes, these aspects of assessing interactive translation technology
are key to the way research is informing the industry and is likely to continue to do so.

Translation quality is of greatest importance for the translation profession and, therefore, the assessment of translation quality should be the first priority in industry and academia alike. However, the debate on translation quality within translation studies (e.g. House 1997; Schäffner 1997) has been largely disconnected from the development of translation quality metrics in computational linguistics and for the assessment of MT. Automatic translation quality metrics evolved with the rise of SMT systems in the early 2000s. As mentioned above, SMT systems depend on thousands of decisions that are quantified by means of probabilities. These probabilities are encoded in various models (e.g. language models and translation models) that can be learned from reference data (e.g. monolingual texts and aligned translations). SMT (and now also NMT) systems make use of machine learning methods that tune model parameters such that the generated output becomes close to a reference. This learning is an iterative process which requires fast and reliable quality assessment of the automatically produced translation and the computation of a gradient by which the parameters can be adjusted, as in minimum error rate training (MERT) (Och 2003). Various methods for fully automatic quality assessment have been developed, as described in the next two sections.

4.1.  Reference-based metrics

The assumption behind reference-based metrics is that ‘the closer a machine translation is to a professional human translation, the better it is’ (Papineni et al. 2002). Metrics include BLEU (Papineni et al. 2002), NIST (Doddington 2002), word error rate (WER) and METEOR (Banerjee and Lavie 2005). They work either at the sentence level or the corpus level. BLEU counts the phrases shared with one or more human translations via a string-to-string match approach. NIST, a derivative of BLEU, gives more weight to the informativeness of n-grams in the TT and lessens the impact of the brevity penalty on the overall score (Doddington 2002). WER (Tillmann et al. 1997) and its derivatives return the length-normalized edit distance (number of insertions, deletions and substitutions) between the hypothesis (i.e. the MT output) and the reference translation. METEOR is a metric redesigned to overcome some inherent limitations of BLEU; it places an emphasis on recall in addition to precision and correlates well with human judgements of translation quality at the sentence level.
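The ‘closeness’ idea behind BLEU, clipped n-gram precision combined with a brevity penalty, can be sketched as follows. This is a simplified sentence-level, single-reference version for illustration only; the actual metric (Papineni et al. 2002) is computed at the corpus level, uses n-grams up to length four and supports multiple references.

```python
import math
from collections import Counter

def modified_precision(hyp, ref, n):
    """Clipped n-gram precision: hypothesis n-gram counts are capped
    ('clipped') by their counts in the reference translation."""
    hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    clipped = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
    total = sum(hyp_ngrams.values())
    return clipped / total if total else 0.0

def bleu(hyp, ref, max_n=2):
    """Geometric mean of n-gram precisions times a brevity penalty."""
    precisions = [modified_precision(hyp, ref, n) for n in range(1, max_n + 1)]
    if min(precisions) == 0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty punishes hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * math.exp(log_avg)
```

With identical hypothesis and reference the score is 1.0, while the pathological hypothesis ‘the the the’ against the reference ‘the cat’ scores 0.0: clipping caps the unigram count at one, and no bigram matches.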

4.2.  Reference-free metrics

Reference-free metrics are extended applications of confidence estimation in speech recognition (Gandrabur and Foster 2003). They make use of machine learning approaches to assess the quality (i.e. the probability of correctness) of a translation hypothesis on the basis of a selected feature set. This quality estimation (QE) is rather flexible and can be applied at the word, phrase or sentence level to estimate problematic text fragments within the MT output. For several years now, QE methods have been developed and compared within yearly shared tasks on QE.17 Fully automatic measures tend to become increasingly imprecise and unsuited for distinguishing among grades of bad, acceptable or good translations as the overall quality of MT output increases. Human evaluation is therefore (still) a necessary benchmark to assess translation quality. Within the computational linguistics community, a couple of human translation (HT) quality metrics are commonly used (see the next three sections).
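As a minimal illustration of the feature-based approach behind QE, the function below extracts a few surface indicators of the kind such systems feed into a learned model. The feature set, the feature names and the vocabulary lookup are illustrative assumptions; real QE systems use far richer features and train a regression or classification model on human-labelled data.

```python
def qe_features(source, hypothesis, target_vocab):
    """Toy surface features of the kind a sentence-level QE model
    might be trained on (features and names are illustrative only)."""
    src, hyp = source.split(), hypothesis.split()
    return {
        # Very long or very short outputs are often degraded.
        'length_ratio': len(hyp) / max(len(src), 1),
        # Proportion of tokens outside a target-language vocabulary.
        'oov_rate': sum(w not in target_vocab for w in hyp) / max(len(hyp), 1),
        # Mismatched punctuation can signal broken output.
        'punct_diff': abs(sum(ch in '.,;:!?' for ch in source)
                          - sum(ch in '.,;:!?' for ch in hypothesis)),
    }
```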

4.3.  Error-coding

Human annotators identify and classify translation errors (words or phrases within a translation) based on an error taxonomy. A number of error classification typologies are available, such as Font-Llitjós, Carbonell and Lavie (2005) for MT, Castagnoli et al. (2006) for novice translators, DQF for professional HT and MT (Görög 2014), MQM (Lommel 2014) for HT and MT and the latest combination of DQF-MQM (TAUS 2016).18 MQM defines a total of over 100 issue types and integrates a number of error taxonomies.

4.4.  Ranking

Linguists are asked to rank alternative translations according to quality criteria, such as fluency and/or adequacy: several translations of the same source sentence are shown, and the linguists rank them according to their quality. Alternatively, a Likert scale of, for instance, four or five levels may be used for each of the evaluation criteria to mark the quality of the translations.

4.5.  Post-editing (PE)

PE can be seen as an activity to detect and resolve translation errors and, therefore, also as an MT error assessment. PE activities can be assessed
in various ways: post-editors can be asked to report issues found in the MT output, their translation behaviour can be analysed and/or their brain activities can be scanned. The human-targeted translation error rate (HTER) (Snover et al. 2006) counts the minimal number of edits needed to change an MT output into a post-edited version and is believed to correlate with PE effort.

Translation effort is as important as the assessment of translation quality, since only the joint evaluation of both variables (quality and effort) can give a realistic indication of whether a given translation technology should be preferred over another. While the ALPAC report mentions translation time (and, hence, price) as a means to evaluate the usefulness of translation assistance, the notion of ‘translation effort’ has evolved with the wider introduction of CAT to assess the ‘real’ translation workload. Krings (2001) makes a distinction between technical, temporal and cognitive translation effort. Technical effort consists of the keystrokes, cut-and-paste operations, etc. needed to produce the translation. Temporal effort refers to translation time, and cognitive effort to the mental processes involved in translation production or during PE. Even though cognitive effort can only be measured indirectly, for Krings it is the most important of the three measures.

Cognitive effort has been approximated in various ways. HTER (see above) is a metric that actually measures technical effort in PEMT. It compares the machine translation of a sentence with its post-edited version and counts the minimum word-level changes that are needed to transform the MT output into the final version. The number of changes is then divided by the number of words in the edited sentence to obtain the translation edit rate (TER) score, which takes values between 0 (when no changes are made) and 1 (when the entire sentence is changed). HTER is often used as an approximation of PE effort.
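The TER computation described above amounts to a word-level edit-distance calculation, which can be sketched as follows. This simplified version counts only insertions, deletions and substitutions; full TER/HTER (Snover et al. 2006) additionally allows block shifts of word sequences.

```python
def ter(mt_output, post_edited):
    """Word-level edit distance (insertions, deletions, substitutions)
    between an MT output and its post-edited version, normalized by
    the length of the post-edited sentence. Block shifts, which full
    TER/HTER also counts, are omitted in this sketch."""
    hyp, ref = mt_output.split(), post_edited.split()
    # Classic dynamic-programming (Levenshtein) table over words.
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[-1][-1] / max(len(ref), 1)
```

For example, `ter('the cat sat', 'a cat sat down')` yields 0.5: one substitution and one insertion over a four-word post-edited sentence.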
Pause-to-word ratio (PWR) (Lacruz and Shreve 2014) is another metric that indicates the normalized number of text production pauses per word. The PWR metric is based on the assumption that more interrupted text production is an indicator of higher cognitive effort. Carl et al. (2016) show that PWR values differ for translations into different languages and that from-scratch translation is more effortful than PE. Moorkens et al. (2015) compare subjective ratings of (perceived) PE effort with measures of keyboard activity (technical effort) and report that subjective ratings of cognitive effort do not always match the empirical (keystroke) findings well.
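Computing PWR from a keystroke log can be sketched as follows. The one-second pause threshold and the flat list of keystroke timestamps are illustrative assumptions; the appropriate pause threshold is itself a matter of debate in the literature.

```python
def pause_to_word_ratio(keystroke_times, n_words, threshold=1.0):
    """Pause-to-word ratio (after Lacruz and Shreve 2014, simplified):
    pauses are inter-keystroke intervals longer than `threshold`
    seconds; the threshold and the log format are assumptions."""
    pauses = sum(
        1 for prev, curr in zip(keystroke_times, keystroke_times[1:])
        if curr - prev > threshold
    )
    return pauses / max(n_words, 1)
```

For a log with timestamps `[0.0, 0.2, 1.5, 1.7, 3.0]` and two words produced, the sketch returns 1.0: two pauses over two words.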

Hvelplund (2016) investigates the allocation of cognitive resources in translation based on the location of the gaze on the ST or the TT. He assumes that changes in the gaze location also reflect changes in the problem area that translators pay attention to. Vieira (2016) compares measures of eye movements, subjective ratings and think-aloud protocols to assess cognitive effort for PEMT. Based on his findings, he advocates a more fine-grained measure to distinguish between the intensity and the amount of cognitive effort. All these studies assume that there is a connection between technical, temporal and cognitive effort in PE that relates somehow to the quality of the MT output being post-edited. Much less work has been done in the context of IMT (cf. Alves et al. 2016). It is, however, likely that additional evaluation criteria would have to be taken into account that are suited to assessing the impact of online learning and collaboration in distributed IMT platforms.

5.  Concluding remarks

While the general public makes increasing use of publicly available MT (e.g. Google Translate), translators have not always been very positive about this development and have often felt threatened by it. However, the majority of translators use, or are familiar with, translation workbenches (TWS), which can also be complemented with MT. While the potential of MT is still not fully explored, there seems to be an increasing awareness that MT is here to stay and that it can also be useful for professional translators. Pym (2013: 3) notes that ‘in the age of electronic language technologies, texts are increasingly used paradigmatically’. We argue that the current development of data-driven, interactive MT (IMT) may provide a suitable answer for translation professionals to cope with this new and emerging usage of texts. IMT with online learning capacities running on a collaborative translation platform is a highly innovative technology that has the potential to change working practices and to mirror Pym’s paradigmatic usage of texts in the translation industry. Distributed over a community of translators, texts can be decomposed and translated as snippets, or hypertexts, lowering the perception of a text as a linear, coherent object. This requires, on the one hand, online learning and control functions on the translation platform to reproduce the coherence that is broken in the translation process and to guarantee the quality of the final translated product. On the other hand, it requires appropriate user models to interact, predict and
respond to translation behaviour, so as to ensure user satisfaction and optimal productivity. As a consequence of electronic language technologies, it is possible that at least some segments of the translation profession will become increasingly industrialized and decontextualized, with a risk of further alienation between translators, customers and producers of the technology. However, Pym (2011) predicts only a modest impact of this alienation as ‘the social distance between design and use is not as extreme as it was in Taylorist production; the time gaps between user-feedback and technology redesign are vastly reduced’. Some of the possibilities that might bridge the feedback gaps include the following:

5.1.  Surveys

One way to obtain feedback from translators that can trigger research questions and inform ongoing research and development is through translator surveys. CAT surveys conducted by researchers include Lagoudaki (2006), the Mellange project (Armstrong et al. 2006), the EMT 2012 survey (Torres Domínguez 2012), Ehrensberger-Dow (2014), Zaretskaya, Corpas Pastor and Seghiri (2015), Gallego-Hernández (2015) and Moorkens and O’Brien (2017). Most of the time, the human translator has been seen as an element in the translation chain. Unfortunately, studies that investigate the role of the translator are often conducted by the translation industry or by translation companies rather than by translators themselves.

5.2.  Translator associations

There are several national and international translator associations (e.g. ITI, SFT, BDÜ, ATA) which offer technical training and forums to share experiences and to develop best translation practices. However, we think that these associations could play a stronger role in communicating translators’ interests to the translation industry and to the developers of translation technology. To date, the translators’ view has been expressed by individual translators, including Bachrach (1971), Vaumoron (1998) and Arthern (1979) and, more recently, Parra Escartín and Arcedillo (2015). These initiatives could be taken up by translator associations, which could design a framework within which individuals are able to take the initiative. This could trigger the associations to launch proper studies aimed at investigating translators’ needs.

5.3.  Cross-fertilization conferences and meetings

These conferences involve different stakeholders, including translators, translation company owners, researchers, translation teachers and translation customers. One of the emblematic conferences is Translation and the Computer, which was launched in London in 1978 by Barbara Snell (1978), a translator at Xerox. At the time, she wanted translators to be aware of what was going on in MT. This tradition has since become common practice at many translator conferences (e.g. IATIS, FIT, EST), as well as at translation technology-oriented conferences, such as the MT Summit, AMTA, EAMT, etc.

Notes

1 We would like to thank Beatrice Daille and the Université de Nantes for a research invitation in 2017, which allowed the authors to draft this chapter.
2 The first specimen digital computers were developed during the Second World War: Colossus, ABC and ENIAC. The IBM 701, the first production computer, was released by IBM in 1952. See https://www-03.ibm.com/ibm/history/documents/pdf/1885-1969.pdf
3 In- and out-filters manage the conversion into a ‘working’ format, almost always XLIFF now.
4 https://www.motaword.com
5 This amounts to saying that – to date – no general interlingual representation has been found from which it would be possible to generate a translation into every conceivable language.
6 This is different in the newly emerging Neuro-MT paradigm, which is trained as an end-to-end optimization. However, interactive solutions have also been proposed for NMT systems (Peris, Domingo and Casacuberta 2017).
7 Cognitive Analysis and Statistical Methods for Advanced Computer Aided Translation (http://www.casmacat.eu/).
8 The first text editor, TT, had been disclosed by IBM in 1967, the vi editor in 1970, Emacs in 1976 and the first WYSIWYG text editor, WordStar, in 1979.
9 http://www.euromatrix.net/
10 https://www.letsmt.eu/Login.aspx
11 https://www.tilde.com/products-and-services/machine-translation
12 http://www.lucysoftware.com/
13 http://labs.lilt.com/
14 http://www.sdltrados.com/products/trados-studio/adaptivemt/
15 https://unbabel.com/

16 Based on experience, Unbabel splits texts into segments of two to three sentences of approximately fifty words.
17 See, for instance, http://www.statmt.org/wmt17/quality-estimation-task.html
18 https://www.taus.net/think-tank/news/press-release/dqf-and-mqm-harmonized-to-create-an-industry-wide-quality-standard

References

Alabau, V., R. Bonk, C. Buck, M. Carl, F. Casacuberta, M. García-Martínez, J. González, P. Koehn, L. Leiva, B. Mesa-Lao, D. Ortiz, H. Saint-Amand, G. Sanchis and C. Tsoukala (2013), ‘CASMACAT: An Open Source Workbench for Advanced Computer Aided Translation’, The Prague Bulletin of Mathematical Linguistics, 100: 101–12.
Alabau, V., M. Carl, F. Casacuberta, M. García-Martínez, B. Mesa-Lao, D. Ortiz, J. González-Rubio, G. Sanchis and M. Schaeffer (2016), ‘Learning Advanced Post-editing’, in M. Carl, S. Bangalore and M. Schaeffer (eds), New Directions in Empirical Translation Process Research, 95–110, Cham: Springer.
ALPAC (1966), ‘Languages and Machines: Computers in Translation and Linguistics’, A Report by the Automatic Language Processing Advisory Committee, Division of Behavioral Sciences, National Academy of Sciences, National Research Council, Washington DC: National Academy of Sciences, National Research Council.
Alves, F., K. Sarto Szpak, J. Luiz Gonçalves, K. Sekino, M. Aquino, R. Araújo e Castro, A. Koglin, N. B. de Lima Fonseca and B. Mesa-Lao (2016), ‘Investigating Cognitive Effort in Post-editing: A Relevance-theoretical Approach’, in S. Hansen-Schirra and S. Grucza (eds), Eyetracking and Applied Linguistics, 109–42, Berlin: Language Science Press.
Armstrong, S., G. Aston, T. Badia, S. Bernardini, G. Budin, S. Castagnoli, D. Ford, C. Gallois, T. Hartley, D. Ciobanu, N. Kübler, K. Kunz, M. Lunt, V. Rericha, N. Rotheneder, E. Steiner, A. Volanschi and A. Wheatley (2006), ‘Multilingual eLearning in LANGuage Engineering’, in Translating and the Computer 28 Conference, London, 16–17 November 2006.
Arthern, P. J. (1979), ‘Machine Translation and Computerized Terminology Systems: A Translator’s Viewpoint’, in B. M. Snell (ed.), Translating and the Computer, Proceedings of a Seminar, London, 14 November 1978, 77–108, Amsterdam: North-Holland.
Banerjee, S. and A. Lavie (2005), ‘METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments’, in Proceedings of the ACL 2005 Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization. Available online: http://aclweb.org/anthology/W05-09 (accessed 31 December 2018).

Báez, C. T., M. Schaeffer and M. Carl (2017), ‘Experiments in Non-coherent Post-editing’, The First Workshop on Human-Informed Translation and Interpreting Technology, 11–20.
Bachrach, J. A. (1971), ‘L’ordinateur au service du terminologue : maître ou esclave’, Meta, 16 (1–2): 105–15.
Bar-Hillel, Y. (1960), ‘A Demonstration of the Non-feasibility of Fully Automatic High Quality Translation: The Present Status of Automatic Translation of Languages’, Advances in Computers, 1: 158–63. Reprinted in Y. Bar-Hillel (1964), Language and Information, 174–9, Reading, MA: Addison-Wesley.
Bisbey, R. and M. Kay (1972), The MIND Translation System: A Study in Man-Machine Collaboration, Santa Monica, CA: RAND Corporation. Available online: https://www.rand.org/pubs/papers/P4786.html (accessed 31 December 2018).
Blanchon, H. (1991), ‘Problèmes de désambiguisation interactive et TAO personnelle’, L’environnement Traductionnel, 50 Journées scientifiques du Réseau thématique de recherche Lexicologie, terminologie, traduction, 31–48.
Boitet, C., H. Blanchon, E. Planas, E. Blanc, J. P. Guilbaud, P. Guillaume, M. Lafourcade and G. Sérasset (1995), LIDIA-1.2, une maquette de TAO personnelle multicible, utilisant la messagerie usuelle, la désambiguïsation interactive, et la rétrotraduction. Rapport Final, GETA-UJF.
Booth, A. (1963), Beiträge zur Sprachkunde und Informationsverarbeitung 2, 8–16; reprinted as (1982), ‘Ursprung und Entwicklung der Mechanischen Sprachübersetzung’, in H. E. Bruderer (ed.), Automatische Sprachübersetzung, 24–32, Darmstadt: Wissenschaftliche Buchgesellschaft.
Bowker, L. (2002), Computer-aided Translation Technology: A Practical Introduction, Ottawa: University of Ottawa Press.
Bruderer, H. E. (1978), Sprache – Technik – Kybernetik. Aufsätze zur Sprachwissenschaft, maschinelle Sprachverarbeitung, künstlichen Intelligenz und Computerkunst, Müsslingen bei Bern: Verlag Linguistik.
Carl, M., I. Lacruz, M. Yamada and A. Aizawa (2016), ‘Comparing Spoken and Written Translation with Post-editing in the ENJA15 English–Japanese Translation Corpus’, paper presented at the 22nd Annual Meeting of the Association for Natural Language Processing, NLP 2016, Sendai, Japan. Available online: http://www.anlp.jp/proceedings/annual_meeting/2016/pdf_dir/E7-3.pdf (accessed 31 December 2018).
Castagnoli, S., D. Ciobanu, K. Kunz, N. Kübler and A. Volanschi (2006), ‘Designing a Learner Translator Corpus for Training Purposes’, in Proceedings of the Teaching and Language Corpora Conference TaLC 2006, 1–19, Université Paris VII.
Cronin, M. (2010), ‘The Translation Crowd’, Revista Tradumàtica, 8: 1–7.
Culo, O., S. Gutermuth, S. Hansen-Schirra and J. Nitzke (2014), ‘The Influence of Post-editing on Translation Strategies’, in S. O’Brien, L. Winther Balling, M. Carl, M. Simard and L. Specia (eds), Post-editing of Machine Translation: Processes and Applications, 200–18, Newcastle upon Tyne: Cambridge Scholars Publishing.

Désilets, A. (2007), ‘Translation Wikified: How Will Massive Online Collaboration Impact the World of Translation?’, in Proceedings of Translating and the Computer 29, 29–30 November 2007, London, UK. Available online: http://www.mt-archive.info/Aslib-2007-Desilets.pdf (accessed 31 December 2018).
Doddington, G. (2002), ‘Automatic Evaluation of Machine Translation Quality Using n-gram Co-occurrence Statistics’, in Proceedings of the Second International Conference on Human Language Technology Research (HLT ’02), 138–45, San Francisco: Morgan Kaufmann Publishers Inc. Available online: http://www.mt-archive.info/HLT-2002-Doddington.pdf (accessed 31 December 2018).
Dragsted, B. and M. Carl (2013), ‘Towards a Classification of Translation Style Based on Eye-tracking and Key Logging’, Journal of Writing Research, 5 (1): 133–58.
Ehrensberger-Dow, M. (2014), ‘Challenges of Translation Process Research at the Workplace’, MonTI Monographs in Translation and Interpreting, 7: 355–83.
Ehrensberger-Dow, M., A. Hunziker Heeb, G. Massey, U. Meidert, S. Neumann and H. Becker (2016), ‘An International Survey of the Ergonomics of Professional Translation’, ILCEA, 27. Available online: http://ilcea.revues.org/4004 (accessed 31 December 2018).
Font-Llitjós, A., J. Carbonell and A. Lavie (2005), ‘A Framework for Interactive and Automatic Refinement of Transfer-based Machine Translation’, in Proceedings of the 10th Annual Conference of the European Association for Machine Translation (EAMT 05), Budapest, 87–96.
Foster, G., P. Langlais and G. Lapalme (2002), ‘User-Friendly Text Prediction for Translators’, in Conference on Empirical Methods in Natural Language Processing (EMNLP 2002), 148–55. Available online: http://www.aclweb.org/anthology/W02-1020 (accessed 31 December 2018).
Foster, G., P. Isabelle and P. Plamondon (1997), ‘Target-Text Mediated Interactive Machine Translation’, Machine Translation, 12: 175–94.
Gallego-Hernández, D. (2015), ‘The Use of Corpora as Translation Resources: A Study Based on a Survey of Spanish Professional Translators’, Perspectives: Studies in Translatology, 23 (3): 375–91.
Gandrabur, S. and G. Foster (2003), ‘Confidence Estimation for Translation Prediction’, HLT-NAACL 2003, 95–102, Edmonton, Canada. Available online: http://www.aclweb.org/anthology/W03-0413 (accessed 31 December 2018).
Gonzalez-Rubio, J. and F. Casacuberta (2014), ‘Cost-sensitive Active Learning for Computer-assisted Translation’, Pattern Recognition Letters, 37: 124–34.
Görög, A. (2014), ‘Quality Evaluation Today: The Dynamic Quality Framework’, Translating and the Computer, 36. Available online: http://www.mt-archive.info/10/Asling-2014-Gorog.pdf (accessed 31 December 2018).
Green, W., H. Chuang and M. Schuster (2014), ‘Human Effort and Machine Learnability in Computer Aided Translation’, in Proceedings of the Empirical Methods in Natural Language Processing Conference. Available online: https://nlp.stanford.edu/pubs/green+wang+chuang+heer+schuster+manning_emnlp14.pdf (accessed 31 December 2018).

Green, S., J. Heer and C. D. Manning (2013), ‘The Efficacy of Human Post-Editing for Language Translation’, in CHI: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 439–48.
House, J. (1997), Translation Quality Assessment: A Model Revisited, Tübingen: Narr.
Hutchins, J. and H. Somers (1992), An Introduction to Machine Translation, Cambridge: Harcourt Brace Jovanovich.
Hvelplund, K. T. (2016), ‘Cognitive Efficiency in Translation’, in R. Muñoz Martín (ed.), Reembedding Translation Process Research, 149–70, Amsterdam/Philadelphia: John Benjamins.
Jiménez-Crespo, M. (2017), Crowdsourcing and Online Collaborative Translations: Expanding the Limits of Translation Studies, Amsterdam/Philadelphia: John Benjamins.
Koehn, P., H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Wade, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin and E. Herbst (2007), ‘Moses: Open Source Toolkit for Statistical Machine Translation’, in Annual Meeting of the Association for Computational Linguistics (ACL), Demonstration Session, Prague, Czech Republic. Available online: http://aclweb.org/anthology/P07-2045 (accessed 31 December 2018).
Krings, H. P. (2001), Repairing Texts: Empirical Investigations of Machine Translation Post-editing Processes, Kent, OH: The Kent State University Press.
Krollmann, F. (1971), ‘Linguistic Data Banks and the Technical Translator’, Meta, 16 (1–2): 117–24.
Lacruz, I. and G. M. Shreve (2014), ‘Pauses and Cognitive Effort in Post-Editing’, in S. O’Brien, L. W. Balling, M. Carl, M. Simard and L. Specia (eds), Post-editing of Machine Translation: Processes and Applications, 170–99, Newcastle upon Tyne: Cambridge Scholars Publishing.
Lagoudaki, E. (2006), ‘Translation Memories Survey 2006: Users’ Perceptions around TM Use’, in Translating and the Computer 28, November 2006, London: Aslib. Available online: https://pdfs.semanticscholar.org/6c55/2454a3368e08cee7dc9a5fb3aa441a79db35.pdf (accessed 31 December 2018).
Landsbergen, J. (1989), ‘The Rosetta Project’, Machine Translation Summit 1989, Munich, Germany.
Langlais, P. and G. Lapalme (2002), ‘TransType: Development-Evaluation Cycles to Boost Translator’s Productivity’, Machine Translation, 17 (2): 77–98.
Lippmann, E. O. (1971), ‘An Approach to Computer-Aided Translation’, IEEE Transactions on Engineering Writing and Speech, 14: 10–33.
Lommel, A., ed. (2014), ‘Multidimensional Quality Metrics (MQM) Definition’. Available online: http://www.qt21.eu/mqm-definition/definition-2014-08-19.html (accessed 30 August 2017).
Moorkens, J., S. O’Brien, I. A. L. da Silva, N. B. de Lima Fonseca and F. Alves (2015), ‘Correlations of Perceived Post-editing Effort with Measurements of Actual Effort’, Machine Translation, 29: 267–84.

Moorkens, J. and S. O’Brien (2017), ‘Assessing User Interface Needs of Post-editors of Machine Translation’, in D. Kenny (ed.), Human Issues in Translation Technology, 109–30, Oxford: Routledge.
Macklovitch, E. (2001), ‘The New Paradigm in NLP and Its Impact on Translation Automation’, invited talk presented at the workshop on The Impact of New Technology on Terminology Management, Glendon College, Toronto. Available online: http://rali.iro.umontreal.ca/rali/sites/default/files/publis/macklovi-sympproceed.pdf (accessed 31 December 2018).
Macklovitch, E. (2006), ‘TransType2: The Last Word’, in Proceedings of LREC 2006, Genoa, Italy. Available online: http://www.mt-archive.info/LREC-2006-Macklovitch.pdf (accessed 31 December 2018).
Mitamura, T. and E. Nyberg (2001), ‘Automatic Rewriting for Controlled Language Translation’, in Proceedings of NLPRS 2001, Workshop on Automatic Paraphrasing: Theory and Application. Available online: https://www.lti.cs.cmu.edu/Research/Kant/PDF/cl-rewrite.pdf (accessed 31 December 2018).
Melby, A., M. Smith and J. Peterson (1980), ‘ITS: Interactive Translation System’, in COLING ’80: Proceedings of the 8th Conference on Computational Linguistics, 424–9. Available online: http://dl.acm.org/citation.cfm?id=990251 (accessed 31 December 2018).
Monti, J. (2012), ‘Translators’ Knowledge in the Cloud: The New Translation Technologies’, in International Symposium on Language and Communication: Research Trends and Challenges (ISLC), 789–99.
Och, F. J. (2003), ‘Minimum Error Rate Training in Statistical Machine Translation’, in Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, Sapporo, Japan, 160–7.
Parra Escartín, C. and M. Arcedillo (2015), ‘Living on the Edge: Productivity Gain Thresholds in Machine Translation Evaluation Metrics’, in MT Summit XV, 46–56, 30 October–3 November 2015, Miami, Florida. Available online: https://amtaweb.org/wp-content/uploads/2015/10/MTSummitXV_WPTP4Proceedings.pdf (accessed 31 December 2018).
Papineni, K., S. Roukos, T. Ward and W. J. Zhu (2002), ‘BLEU: A Method for Automatic Evaluation of Machine Translation’, in ACL-2002: 40th Annual Meeting of the Association for Computational Linguistics, 311–18. Available online: https://www.aclweb.org/anthology/P02-1040.pdf (accessed 31 December 2018).
Penet, J. C. (2018), ‘Review of Jiménez-Crespo, Miguel A., Crowdsourcing and Online Collaborative Translations’, Journal of Specialised Translation, 29: 276–7. Available online: http://www.jostrans.org/issue29/issue29_toc.php (accessed 31 December 2018).
Peris, I., M. Domingo and F. Casacuberta (2017), ‘Interactive Neural Machine Translation’, Computer Speech and Language, 45 (C): 201–20.
Pym, A. (2011), ‘What Technology Does to Translating’, Translation & Interpreting, 3 (1): 1–9.

386

The Bloomsbury Companion to Language Industry Studies

Schäffner, C. (1997), ‘From Good to Functionally Appropriate: Assessing Translation Quality’, Current Issues in Language and Society, 4 (1): 1–5. Schulz J. (1971), ‘Le système TEAM’, Meta, 16 (1–2): 125–31. Snell, B. M. (1978), ‘Introduction’, in V. Lawson (ed.), Tools for the Trade: Translating and the Computer 5, 10–11, Proceedings of a Conference, November 1983. Aslib: London. Available online: http://www.mt-archive.info/70/Aslib-1978-Snell.pdf (accessed 31 December 2018). Snover, M., B. Dorr, R. Schwartz, L. Micciulla and J. Makhoul (2006), ‘A Study of Translation Edit Rate with Targeted Human Annotation’, in Proceedings of the 7th Biennial Conference of the Association for Machine Translation in the Americas (AMTA-2006), Cambridge, MA. Available online: https://www.cs.umd.edu/~snover/ pub/amta06/ter_amta.pdf (accessed 31 December 2018). Specia, L. (2011), ‘Exploiting Objective Annotations for Measuring Translation Post-editing Effort’, in M. L. Forcada, H. Depraetere and V. Vandeghinste (eds), Proceedings of the 15th Conference of the European Association for Machine Translation, 73–80, Leuven. Available online: http://www.mt-archive.info/EAMT2011-Specia.pdf (accessed 31 December 2018). Tillmann, C., S. Vogel, H. Ney, A. Zubiaga and H. Sawaf (1997), ‘Accelerated DP Based Search for Statistical Translation’, in Fifth European Conference on Speech Communication and Technology, 2667–70, Rhodos, Greece. Torres Domínguez, R. (2012), ‘The 2012 Use of Translation Technologies Survey’. Available online: http://mozgorilla.com/download/19/ (accessed 31 December 2018). Toral, A. and V. M. Sánchez-Cartagena (2017), ‘A Multifaceted Evaluation of Neural versus Phrase-Based Machine Translation for 9 Language Directions’, in Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, 1063–73, Valencia, Spain. Available online: http://www.aclweb.org/anthology/E17-1100 (accessed 31 December 2018). 
Vaumoron, J. A. (1998), ‘An Ideal Workstation? Perspective of the In-House Translator’, in M. Vasconcellos (ed.), Technology as Translation Strategy, 36–41, Amsterdam/Philadelphia: John Benjamins. Weaver, A. (1988), ‘Two Aspects of Interactive Machine Translation’, in M. Vasconcellos (ed.), Technology as Translation Strategy, 116–24, Amsterdam/ Philadelphia: John Benjamins. Zaretskaya, A., G. Corpas Pastor and M. Seghiri (2015), ‘Translators’ Requirements for Translation Technologies: A User Survey’, in Proceedings of the AIET17 Conference, New Horizons in Translation and Interpreting Studies, Malaga.

A–Z key terms and concepts

Erik Angelone, Maureen Ehrensberger-Dow and Gary Massey

1. Adaptive expertise

A form of expertise that is marked by relatively flexible problem-solving heuristics, an ability to innovate task solutions across different (yet still related) task domains and a general capacity to excel in situations involving novelty and ambiguity (see Hatano and Inagaki 1986). Whereas routinized expertise (see Section 25 in this chapter) is enhanced through deliberate practice (see Section 7 in this chapter) in a narrowly defined domain (see Shreve, this volume), adaptive expertise acquisition is dependent on exposure to task repertoires that are not static or stagnant, where variety is in place to enhance cognitive flexibility. Paletz et al. (2013) highlight the fact that adaptive expertise comes to fruition when efficiency in the task is balanced with an ability to innovate. Research on adaptive expertise in the language industry is still relatively new.

2. Automatic evaluation metrics

Widely used to assess the quality of raw machine translation (MT) output, automatic evaluation metrics are a cost-efficient alternative to evaluations by bilinguals. Edit distance metrics, first developed for automatic speech recognition, measure how many of the target words in the MT output are correct and in the right position compared with a reference translation done by a professional. The so-called translation edit rate (TER) indicates the number of actions required to post-edit the MT output, which might be most meaningful for translators. Metrics more commonly used by MT developers are known simply as BLEU (from ‘bilingual evaluation understudy’) and NIST (from ‘National Institute of Standards and Technology’), which calculate scores for segments based on overlaps in words and word sequences (otherwise referred to as n-gram co-occurrences) and then average the scores over a corpus to obtain a final score. Although high correlations with human evaluations have been reported at the corpus level, the results are less convincing at the sentence level (see Way 2018 or A. Way, this volume). Recent additions to the inventory of automatic metrics that are claimed to overcome some of these limitations are METEOR (i.e. Metric for Evaluation of Translation with Explicit ORdering) and LEPOR (i.e. Length Penalty, Precision, n-gram Position difference Penalty and Recall).
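The core of these scoring ideas fits in a few lines of Python. The sketch below is a toy illustration only, not an implementation of the official BLEU or TER algorithms (which add brevity penalties, shift operations and corpus-level averaging): it shows bigram precision and word-level edit distance against a single reference translation.

```python
from collections import Counter

def ngram_precision(hypothesis, reference, n=2):
    """Fraction of n-grams in the MT output that also occur in the reference."""
    hyp, ref = hypothesis.split(), reference.split()
    hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    overlap = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
    total = sum(hyp_ngrams.values())
    return overlap / total if total else 0.0

def word_edit_distance(hypothesis, reference):
    """Minimum word-level insertions, deletions and substitutions."""
    hyp, ref = hypothesis.split(), reference.split()
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(hyp)][len(ref)]

ref = "the cat sat on the mat"
hyp = "the cat sat on a mat"
print(ngram_precision(hyp, ref, n=2))  # 0.6 (3 of 5 bigrams match)
print(word_edit_distance(hyp, ref))    # 1 (one substitution: 'a' for 'the')
```

TER builds on the second function, additionally counting block shifts and normalizing by the reference length.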

3. Barrier-free

In the context of communication and translation, this term is used synonymously with ‘accessible’. It is derived from the concept of universal (or ‘barrier-free’) design to render environments and products accessible, understandable and usable, to the greatest possible extent, by all people, irrespective of their age or (dis)abilities. Barrier-free (or accessible) communication is a relatively new and multifaceted research area. Against the background of the UN Convention on the Rights of Persons with Disabilities1 and operating within the paradigm of social inclusion and participation, it explores models, methods and procedures that ensure access to information and training primarily for people who have visual, hearing or (temporary) cognitive impairments. The most common translation-related forms of barrier-free communication are sign language interpreting, translations into easy-to-read language and modes of audiovisual translation, such as subtitling for the deaf and hard of hearing, enhanced subtitles, audio description, audio introductions and audio subtitles (see Jankowska, this volume).

4. CAT tools

Computer-aided translation (CAT) has been part of the translation process at least since the introduction of personal computers, but ‘CAT tools’ refer to computer programs that are specifically designed to help translators (see van der Meer, this volume). They do this by segmenting the source text, usually into sentences, and presenting each segment to be translated separately. The translation of each segment is paired with the source segment and saved in a database called a translation memory (TM). Once saved, these translations can then be reused with matching source segments in other parts of the same text or in other source texts. Most CAT tools can also be set to offer so-called ‘fuzzy match’ suggestions for less-than-perfect matches. Additional functions include automatic look-up in terminology databases as well as concordance, alignment, quality checking and post-production tools.
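The exact- and fuzzy-matching behaviour of a TM can be sketched as follows. The segment pairs and the 75 per cent threshold are invented for illustration; commercial CAT tools use more refined similarity measures than the character-based ratio used here.

```python
from difflib import SequenceMatcher

# A toy translation memory: source segments paired with saved translations.
tm = {
    "Press the green button to start.": "Drücken Sie die grüne Taste, um zu beginnen.",
    "Close the cover before printing.": "Schließen Sie die Abdeckung vor dem Drucken.",
}

def tm_lookup(segment, memory, fuzzy_threshold=0.75):
    """Return (match_type, source, translation, score) for the best TM hit."""
    if segment in memory:                      # 100% ("exact") match
        return ("exact", segment, memory[segment], 1.0)
    best_src, best_score = None, 0.0
    for src in memory:                         # score every stored segment
        score = SequenceMatcher(None, segment, src).ratio()
        if score > best_score:
            best_src, best_score = src, score
    if best_score >= fuzzy_threshold:          # offer a fuzzy-match suggestion
        return ("fuzzy", best_src, memory[best_src], round(best_score, 2))
    return ("no match", None, None, round(best_score, 2))

print(tm_lookup("Press the red button to start.", tm))
```

A sentence differing only in one word is returned as a fuzzy match, so the translator can reuse and edit the stored translation rather than start from scratch.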

5. Competence

Among the numerous and diverse definitions of competence that exist, one of the most useful and pertinent to present-day translation can be found in the 2017 Competence Framework (EMT Board 2017: 3) developed by the European Master’s in Translation (EMT) network of the European Commission’s Directorate-General for Translation (DGT). There, competence ‘means the proven ability to use knowledge, skills and personal, social and/or methodological abilities, in work or study situations and in professional and personal development’, with knowledge being ‘the body of facts, principles, theories and practices that is related to a field of work or study’ and skills meaning ‘the ability to apply knowledge and use know-how to complete tasks and solve problems’. Various models of translation competence have been proposed, usually comprising an interacting, strategically deployed set of sub-competences. Cases in point are the 2017 EMT Framework itself or the well-known, empirically validated PACTE (2003; see also Hurtado Albir 2017) model, in which a central strategic sub-competence, underpinned by psychophysiological components, marshals bilingual, extra-linguistic, translation knowledge and instrumental sub-competences.

6. Controlled language

A controlled language is a simplified version of a natural language. It differs from a full natural language in that it is governed by more restrictive grammatical rules and contains fewer words. Authors using a controlled language therefore have fewer grammatical and lexical choices available when writing a text. Typically used to author technical texts to avoid lexical ambiguity and complex grammatical structures, controlled languages are specific to a particular domain or organization. Because of their restricted grammar and vocabulary range, they not only make it easier for the user to read and understand a text but also facilitate the use of translation technologies such as TM or MT systems (see Guerberof Arenas, this volume). Documents authored in controlled languages are grammatically and lexically consistent and stylistically simple, making them easier and cheaper to translate.
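Such restrictions lend themselves to automated checking. The sketch below enforces two invented rules (a maximum sentence length and a list of discouraged verbs with approved replacements); real controlled-language checkers implement far larger rule sets covering grammar as well as lexis.

```python
import re

MAX_WORDS = 20
# Hypothetical approved vocabulary: ambiguous verbs mapped to preferred ones.
DISCOURAGED = {"utilize": "use", "commence": "start", "terminate": "stop"}

def check_sentence(sentence):
    """Return a list of controlled-language rule violations for one sentence."""
    problems = []
    words = re.findall(r"[A-Za-z']+", sentence.lower())
    if len(words) > MAX_WORDS:
        problems.append(f"too long ({len(words)} words, max {MAX_WORDS})")
    for word in words:
        if word in DISCOURAGED:
            problems.append(f"'{word}' is not approved; use '{DISCOURAGED[word]}'")
    return problems

print(check_sentence("Utilize the wrench to commence the procedure."))
# ["'utilize' is not approved; use 'use'", "'commence' is not approved; use 'start'"]
```

A compliant sentence returns an empty list, which is what makes controlled-language documents so predictable for TM and MT systems.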


7. Deliberate practice

Deliberate practice is a type and scope of practice that is required for expertise acquisition, with the ultimate objective of reaching a level of consistently superior performance (Ericsson and Charness 1997). In order for practice to be deliberate, the following five conditions need to be in place: (a) motivation and support, (b) well-defined tasks, (c) appropriate difficulty, (d) informative feedback and (e) opportunities for repetition and for the correction of errors (see Shreve, this volume). To date, these conditions have been more readily met in academic settings than in language industry settings. If these conditions are not met, language industry professionals may not advance along an expertise trajectory and, instead, their performance runs the risk of plateauing.

8. Ergonomics (cognitive, physical, organizational)

According to the International Ergonomics Association (IEA),2 ergonomics is ‘the scientific discipline concerned with the understanding of interactions among humans and other elements of a system’. Ergonomics-oriented research in the language industry explores the impact of factors and actors on work routines, performance and decision-making (Ehrensberger-Dow 2017), with the aim of documenting limitations and optimizing capabilities. To date, language industry research on workplace ergonomics has focused on the following three areas: (a) cognitive ergonomics, which concerns mental processes, (b) physical ergonomics, which concerns human interaction with tools and artefacts, and (c) organizational ergonomics, which concerns interaction with other stakeholders.

9. English as a lingua franca

Commonly referred to by its acronym (ELF), the phenomenon of English being used as the means of communication by non-native and native speakers both in international and in national contexts across diverse domains has been identified as a defining feature of the twenty-first century (e.g. Seidlhofer 2011). Increasingly, interpreters and translators with English as one of their working languages are confronted with spoken and written texts produced by non-native users of English. Dealing with non-standard use of English can present additional challenges to the already complex task of working between languages. For translators and interpreters, this can increase demands on concentration and cognitive load, with potentially detrimental effects on quality (see Albl-Mikasa and Ehrensberger-Dow 2019 or Albl-Mikasa, this volume). For MT engines, the unexpected structures and collocations often found in ELF may result in raw output that requires a heavier investment in post-editing to bring it up to the required level of quality.

10. Fansubbing and fandubbing

Fansubbing and fandubbing are two forms of non-professional translation performed by fans of audiovisual productions (see Díaz-Cintas, this volume). Fansubbing is subtitling done by fans for fans and is normally distributed free of charge over the internet (Díaz-Cintas and Muñoz-Sánchez 2006). Similarly, fandubbing involves amateur enthusiasts, rather than professional actors, dubbing foreign-language film or television productions into a target language. As Díaz-Cintas (this volume) points out, studying the activities of non-professional fan translation communities and exploring, for instance, their effects on audience reception (see Orrego-Carmona 2016) can potentially benefit the language industry by providing insights into viewing habits and competitive new working practices in what is a fast-evolving media landscape.

11. Human–computer interaction

Studies devoted to human–computer interaction (HCI) describe how people use or interact with computers, usually with the aim of optimizing the latter’s ease-of-use. Considered by many to be almost synonymous with human factors or cognitive ergonomics (see Section 8 in this chapter), HCI focuses on computer-based tools rather than on other types of machines. HCI is thus highly relevant to the modern translation workplace, with its increasing reliance on CAT tools and other types of language technology (see O’Brien 2012). Rather than being limited to the design of the graphical user interface (GUI), HCI also considers user experience with navigation, scrolling, mouse clicking, keyboarding and other input and output devices.

12. Intercultural mediation

Intercultural mediation has always played an important role in integrating immigrants into host countries and has received more attention in translation studies since the cultural turn of the 1980s. The first research report of the European Commission-funded project ‘Train Intercultural Mediators for a Multicultural Europe’ (TIME)3 illustrated how intercultural mediation is understood by mapping the diversity of labels and definitions. Common to most of them is the notion of interpretation. Katan (2013) points out that intercultural mediation can be understood as ensuring successful communication across cultures and as supporting vulnerable groups by ensuring that their voices are heard. The former is a classic description of translation and interpreting, but the notion of intervention in the latter pushes some professionals’ understanding of their roles and responsibilities (see Schäffner or Albl-Mikasa, this volume).

13. Journalation

Also referred to as news translation and transediting, journalation describes the work undertaken by journalistic text producers when creating content by drawing on material available in other languages. In broad terms, journalation can be regarded as a form of translation for print and/or online mass media (Schäffner 2012: 867). As a focal point of language industry research, journalation is informed by both translation studies and communication studies. As an activity largely undertaken by bilingual journalists with no formal training in translation, journalation can also be regarded as an emerging form of non-professional translation and interpreting (NPIT; see Angelelli, this volume). To date, empirical research on journalation has primarily been corpus-based and ethnographic in nature.

14. Language brokering

In broad terms, language brokering involves non-professional (in the sense of not being the result of formal training) cross-language and cross-cultural mediation undertaken largely by children for family members and others in the community. Language brokers play an important role in facilitating communication in educational, healthcare, business and legal settings. The mediation that language brokers undertake is often ad hoc and occurs in situations involving crises or emergencies. Research on language brokers and other so-called non-professional interpreters and translators is needed to document how their workplaces and working realities vary from those of so-called professional interpreters and translators (see Angelelli, this volume).


15. Language service providers (LSP)

As defined by the Globalization and Localization Association (GALA), language service providers ‘adapt products and services for consumption in multilingual markets’.4 Services offered include, among many others, various forms of translation, interpreting, localization and consulting. Language service provision can be multilingual or monolingual. It is often project management-driven, with a potentially broad range of stakeholders involved. Successful language service design caters to the needs of both the client and the end users and can be regarded as fit for purpose. While artificial intelligence is likely to yield an increase in automated language service provision, there is also an anticipated need for more tailored provision (see Koskinen, this volume), catering to the unique needs of clients.

16. Localization

An integral part of the globalization process for products, language localization (L10N) is the phase in which texts or products are adapted to the specific communication needs of a particular locale; it follows the internationalization phase, in which content is prepared for translation into multiple languages. It is usually referred to in the context of cultural adaptation and translation of software, websites, video games and other multimodal content. In addition to local norms of grammar, punctuation and spelling, localization can involve reformatting dates, times, addresses and phone numbers, providing prices in local currencies, adapting graphics and cultural references as well as adjusting links to URLs that will be more useful to the target audience. Although the boundaries between translation and localization are not always clear-cut, localization can be considered the fine-tuning of a translation for a particular target audience at a particular time and place. Various specifically designed tools exist to support globalization and localization processes (see van der Meer, this volume).

17. Machine learning

An accessible explanation from Coursera is as follows: ‘Machine learning is the science of getting computers to act without being explicitly programmed.’5 The basic premise is for computers to predict an output from input data based on algorithms and statistical analysis and then to update that output when new data become available. Learning can be supervised (i.e. the algorithm is trained on input data paired with known correct outputs, with a data scientist providing feedback) or unsupervised (i.e. the algorithm is left to find patterns in unlabelled data and draw its own conclusions). Deep learning with artificial neural networks, which depends on massive amounts of training data, was very important in the development of the newest generation of machine translation engines (see Section 19 in this chapter).
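The supervised case can be sketched with a deliberately tiny example: invented data, a single parameter fitted by least squares, nothing like the scale or architecture of an MT engine, but the same input–output principle.

```python
# Labeled training data: inputs x paired with known outputs y (here y is roughly 2x).
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]

# Fit the model y = w * x by least squares: w = sum(x*y) / sum(x*x).
w = sum(x * y for x, y in data) / sum(x * x for x, y in data)

def predict(x):
    """Predict an output for a new, unseen input using the learned parameter."""
    return w * x

print(round(w, 2))  # learned parameter, close to 2
print(predict(5.0))
```

When new labeled pairs arrive, recomputing `w` over the enlarged data set is exactly the ‘update that output when new data become available’ step described above.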

18. Multimodal translation

Multimodality can be defined as the representation of information using more than one mode of communication (written language, spoken language, gestures, sounds, visual images, etc.). Many of the documents or products that are translated today are multimodal, consisting of two or more interrelated modes. This is most apparent in website and social media translation and in the various forms of accessible (see Section 3 in this chapter) and audiovisual translation (see Díaz-Cintas or Jankowska, this volume). In multimodal products, such as websites, films, comics and illustrated texts, spoken and/or written language interacts with other visual and/or auditory information, contributing to the product’s meaning and influencing how it is received and interpreted by its audience. To translate these products, careful consideration needs to be given to all the modes used and to the ways they combine to make meaning.

19. Neural MT

Rather than a completely new approach, neural machine translation (NMT) is data-driven and trained on huge corpora of source-target segment pairs just as statistical MT is. However, NMT uses machine learning techniques and neural networks to predict the likelihood of sequences of words, generally the length of sentences (see A. Way or Carl and Planas, this volume). The main difference from statistical MT approaches is the reliance in NMT on numerical vectors to represent word distributions and their contexts (i.e. ‘embeddings’) instead of on simple probabilities of co-occurrence. The encoder part of the NMT system builds up representations of each of the source text words in context while the decoder provides the most likely word at each position as the target text is being produced, reminiscent of autocomplete features of web browsers or word prediction features of smartphones (see Forcada 2017 for a more detailed explanation, intended specifically to meet translators’ information needs).
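The notion of words as numerical vectors can be illustrated with toy embeddings. The three-dimensional vectors below are invented for the example; real NMT systems learn vectors with hundreds of dimensions, but the idea that related words end up close together in vector space is the same.

```python
import math

# Hypothetical 3-dimensional word embeddings (real ones have 100s of dimensions).
embeddings = {
    "cat":  [0.90, 0.80, 0.10],
    "dog":  [0.85, 0.75, 0.20],
    "bank": [0.10, 0.20, 0.90],
}

def cosine(u, v):
    """Cosine similarity: 1.0 for identical directions, near 0.0 for unrelated ones."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Words used in similar contexts receive similar vectors, hence higher similarity.
print(cosine(embeddings["cat"], embeddings["dog"]))
print(cosine(embeddings["cat"], embeddings["bank"]))
```

It is this geometric closeness, rather than simple co-occurrence counts, that lets an NMT system generalize from words it has seen in training to related words in new sentences.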

20. Observational methods

Observational methods are research methods, both quantitative and qualitative, which do not involve direct manipulation of an independent variable or make use of participant randomization (see Mellinger, this volume). Observational methods often involve participant observation on location in the natural workplaces of language industry professionals (see Risku, Rogl and Milošević, this volume). Observations are documented in the form of field notes, which can be triangulated with other forms of data collection, such as interviews and questionnaires. Technological advancement has also enabled remote observation. Both on-location and remote observation methods can involve interaction between the observer and the observed, in which case mutual trust is paramount. A primary advantage of observational methods is a greater likelihood that the researcher preserves ecological validity.

21. Product-oriented and process-oriented research

In the context of translation studies, product-oriented research considers the intermediate drafts and final texts that are produced as a result of the process of translation, while process-oriented research focuses on the process by which those products come about. Product-oriented research has tended to focus on issues like translation quality and its assessment, translations as historical, sociological and cultural artefacts, comparative lexis, phraseology, genres and text types in source and target languages as well as universal features shared by translations (as opposed to non-translated texts). Recent advances in digital technologies have given researchers access to ever-larger corpora with increasingly complex architectures that enable multilevel corpus queries (see C. Way, this volume). Process-oriented research, on the other hand, uses methods such as annotated translations, journals, think-aloud and retrospective verbalizations, interviews and questionnaires, keystroke logging, screen recording, eye-tracking and direct observation of workplace practices, often in combination with one another, in order to investigate the cognitive processes of translators and the way they interact with their technological and social environments as they work. A major goal of process-oriented translation research is to ascertain how and why translators with varying degrees of experience perform the way they do, and to explore the nature and constituents of translation competence and expertise.

22. Professional ethics

The practitioners of many professions, such as law and medicine, are held to account by codes of practice and ethical standards. Although codes also exist for interpreters and translators (see Albl-Mikasa or Schäffner, this volume), they are advisory or educational rather than regulatory, except for those who are voluntarily members of professional associations. Codes of professional ethics differ depending on the country, but they generally include sections on competence, confidentiality, conduct, conflict of interest, continuing professional development, solidarity with peers, accuracy, fidelity and completeness (see McDonough Dolmaya 2011). Within translation studies, the fear has been expressed that codes of professional ethics can be misused to propagate a misleading image of translators and interpreters as neutral conduits rather than as empowered language experts (see Lambert 2018).

23. Project managers

The role of a project manager is to coordinate and control the translation workflow through the entire lifecycle of the project (see Section 30 in this chapter) by effectively allocating resources, communicating clearly and anticipating potential problems. In the pre-production stage, the project manager assesses the client’s and the project’s requirements in order to ensure that the human, software and digital resources are available when needed. During the production phase, the project manager monitors the translating, quality checking and post-formatting activities and supplies any support necessary. Post-production, the project manager is responsible for delivering the translation, reacting to feedback and optimizing leverage, for example, by ensuring that translation memories have been updated. Although project managers do not have to be trained translators, familiarity with translation processes and translation technology is definitely an asset (see van der Meer, this volume).


24. Respeaking

A process whereby a trained professional respeaker ‘listens to the original sound of a (live) program or event and respeaks it, including punctuation marks and some specific features for the deaf and hard of hearing audience, to a speech recognition software, which turns the recognized utterances into subtitles displayed on the screen’ (Romero-Fresco 2011: 1; see Jankowska, this volume). Although respeaking can involve two professionals – one to speak and one to quickly correct subtitles before or during broadcast – the accuracy attained by speech recognition software in some languages means that the corrector might be omitted (Lambourne 2006; see Jankowska, this volume), with corrections sometimes being left to the speakers themselves. Respeaking can produce subtitles for the deaf and hard of hearing at a rate of approximately 140–160 words per minute (Lambourne 2006; see Jankowska, this volume).

25. Routinized expertise

A form of expertise marked by nuanced problem understanding and efficacious problem-solving strategies, proceduralized routines and task awareness within a relatively narrowly defined domain (see Shreve, this volume). Routinized expertise stems from deliberate practice in the given domain over an extended period of time (cf. the 10-year/10,000-hour metric; Ericsson, Krampe and Tesch-Roemer 1993). To date, this is the most widely explored form of expertise in language industry research, with data largely derived from experimental lab settings. While language industry professionals in possession of routinized expertise tend to thrive in contexts involving well-defined tasks, they often encounter greater difficulty when engaging in tasks that are ill-defined or novel. It is in such situations where adaptive expertise (see Section 1 in this chapter) is more beneficial.

26. Rule-based MT

Rule-based machine translation (RBMT) is a type of machine translation that relies on a complex linguistic rule set, driven largely by morphological and semantic analysis and syntactic parsing, to generate target-language content. The success of RBMT depends on language modelling parameters and well-formed language constructs (see A. Way or Carl and Planas, this volume). While RBMT systems demand less data than statistical machine translation systems, the results generated by RBMT often lack fluency in situations involving deviation from strict language rules, as is often the case by default in naturally occurring language. Rule-based machine translation does not depend on bilingual content input, but rather on well-defined rules in both the source and target languages.

27. Self-concept

Self-concept is a general term designating how people think about, evaluate and perceive themselves; it is the image they have of themselves. In relation to translation and translators, Kiraly (1995: 100) defines self-concept more specifically as ‘a mental construct that serves as the interface between the translator’s social and psychological worlds’ which includes a ‘sense of the purpose of the translation, an awareness of the information requirements of the translation task, a self-evaluation of [one’s own] capability to fulfil the task and a related capacity to monitor and evaluate translation products for adequacy and appropriateness’. As such, self-concept has been explicitly or implicitly regarded as a central element in numerous models of translation competence (e.g. Göpferich 2009; Kiraly 1995; PACTE 2003). The term is extensible, as demonstrated by Catherine Way’s (this volume) broader definition of it, itself based on Bolaños-Medina’s (2016) recent perspective, as the way in which translators see themselves in society and how they engage with other agents when providing translation services.

28. Statistical MT

Statistical machine translation (SMT) generates output based on probabilities derived from both bilingual (i.e. aligned) and monolingual content (see A. Way or Carl and Planas, this volume). Parallel corpora, such as translation memories and bi-/multilingual glossaries, train SMT systems to learn language patterns and to optimize proposals. Monolingual data, derived, for example, from comparable corpora in the target language, are used to improve fluency. Unlike RBMT (see Section 26 in this chapter), SMT does not rely on controlled authoring and/or strict linguistic rules and can therefore produce more usable results in instances of ill-formed language use. SMT generally calls for vast amounts of data for purposes of training, which is facilitated by ready access to multilingual content on the internet, as created by companies and private users.
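The probability estimation at the heart of SMT can be caricatured with simple co-occurrence counts over a toy parallel corpus. The data are invented and the estimate is deliberately naive; real systems use far more sophisticated word alignment and language modelling.

```python
from collections import Counter

# Toy parallel corpus of aligned English-Spanish segment pairs.
parallel = [
    ("the house", "la casa"),
    ("the green house", "la casa verde"),
    ("the book", "el libro"),
]

# Count how often each source word co-occurs with each target word.
cooc = Counter()
src_totals = Counter()
for src, tgt in parallel:
    for s in src.split():
        src_totals[s] += len(tgt.split())
        for t in tgt.split():
            cooc[(s, t)] += 1

def translation_prob(s, t):
    """Naive estimate of P(t | s) from segment-level co-occurrence counts."""
    return cooc[(s, t)] / src_totals[s]

# 'house' co-occurs with 'casa' in both segments containing it, never with 'libro'.
print(translation_prob("house", "casa"))
print(translation_prob("house", "libro"))
```

With enough parallel data, counts like these separate plausible from implausible word translations, which is the statistical pattern-learning the entry describes.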

29. Transcreation

Most closely associated with the domains of marketing and advertising, transcreation is a practice in which a translator is expected to ensure that the message in the target language has the right cultural references to recreate the impact of the original. Many practitioners and translation studies scholars have objected to the use of this relatively new portmanteau of ‘translation’ and ‘creation’ for a phenomenon that fits comfortably within their understanding of translation, adaptation and/or localization. For others, however, transcreation involves a much more creative process of writing content from scratch with much less reliance on the linguistic realizations in the source text. The concept has been taken up by scholars interested in textual and visual genres beyond commercial translation (see Spinzi, Rizzi and Zummo 2018).

30. Translation cycle

This term is synonymous with the more common terms ‘translation process’ and ‘translation workflow’ as used in the European and international translation quality standards EN 15038 (CEN 2006) and ISO 17100 (ISO 2015). It designates the production cycle of a translated document as it passes from the translation stage through the sequence of self-checking by the translator, revision by another translator or professional reviser, review by a domain expert, proofreading, final verification and release for publication (Thelen 2019: 9). Guerberof Arenas (this volume) reports on research showing that the increasingly common integration of machine translation and post-editing of machine-translated output into the translation cycle has boosted productivity without compromising quality.
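As a purely illustrative sketch (not prescribed by the standards themselves), the sequence of stages named in this entry could be modelled as an ordered pipeline; the list contents and helper function below are assumptions for the example:

```python
# Illustrative only: the stage names follow the sequence described in the
# translation cycle entry, from translation through to release.
STAGES = [
    "translation",
    "self-checking",
    "revision",
    "review",
    "proofreading",
    "final verification",
    "release",
]

def next_stage(current):
    """Return the stage that follows `current`, or None once released."""
    i = STAGES.index(current)
    return STAGES[i + 1] if i + 1 < len(STAGES) else None

print(next_stage("revision"))  # prints review
```

Modelling the cycle as an ordered list makes the sequential hand-offs explicit: each document artefact moves to exactly one next stage until release.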

31. Usability

Nielsen (2012) defines usability as ‘a qualitative attribute’ involving ‘methods for improving ease-of-use during the design process’. In the context of language service provision, usable content is fit for purpose and is quickly and accurately processed by the designated end user. User-centred translation, in which end users provide feedback during iterative stages of the process rather than only as a form of end-of-process quality assessment, is a model geared towards enhancing usability (see Suojanen, Koskinen and Tuominen 2015). Usability ideally involves both the client and the end user throughout the lifecycle of the project. Methods for assessing usability in the language industry include surveys, interviews and eye-tracking.

Notes

1 https://www.un.org/development/desa/disabilities/convention-on-the-rights-of-persons-with-disabilities.html
2 International Ergonomics Association, https://www.iea.cc/whats/index.html
3 http://www.mediation-time.eu/images/TIME_O1_Research_Report_v.2016.pdf
4 The Globalization and Localization Association (GALA), https://www.gala-global.org/language-industry-stakeholders
5 https://www.coursera.org/lecture/machine-learning/what-is-machine-learning-Ujm7v

References

Albl-Mikasa, M. and M. Ehrensberger-Dow (2019/forthcoming), ‘ITELF – (E)merging Interests in Interpreting and Translation Studies’, in E. dal Fovo and P. Gentile (eds), Convergence, Contact, Interaction in Translation and Interpreting Studies. An Outlook on Current and Future Developments, Amsterdam: Peter Lang.
Bolaños-Medina, A. (2016), ‘Translation Psychology within the Framework of Translator Studies: New Research Perspectives’, in C. Martín de León and V. González-Ruíz (eds), From the Lab to the Classroom and Back Again. Perspectives on Translation and Interpreting Training, 59–100, Frankfurt: Peter Lang.
CEN (2006), Translation Services: Service Requirements. EN 15038, Brussels: CEN.
Díaz-Cintas, J. and P. Muñoz-Sánchez (2006), ‘Fansubs: Audiovisual Translation in an Amateur Environment’, Journal of Specialised Translation, 6: 37–52.
Ehrensberger-Dow, M. (2017), ‘An Ergonomic Perspective of Translation’, in J. Schwieter and A. Ferreira (eds), The Handbook of Translation and Cognition, 332–49, London: Wiley-Blackwell.
EMT Board (2017), European Master’s in Translation: Competence Framework 2017. Available online: https://ec.europa.eu/info/sites/info/files/emt_competence_fwk_2017_en_web.pdf (accessed 30 December 2018).
Ericsson, K. A. and N. Charness (1997), ‘Cognitive and Developmental Factors in Expert Performance’, in P. Feltovich, K. M. Ford and R. R. Hoffman (eds), Expertise in Context: Human and Machine, 3–41, Cambridge, MA: MIT Press.
Ericsson, K. A., R. Krampe and C. Tesch-Roemer (1993), ‘The Role of Deliberate Practice in the Acquisition of Expert Performance’, Psychological Review, 100: 363–406.
Forcada, M. (2017), ‘Making Sense of Neural Machine Translation’, Translation Spaces, 6 (2): 291–309.
Göpferich, S. (2009), ‘Towards a Model of Translational Competence and Its Acquisition: The Longitudinal Study TransComp’, in S. Göpferich, A. L. Jakobsen and I. M. Mees (eds), Behind the Mind. Methods, Models and Results in Translation Process Research, 11–37, Copenhagen: Samfundslitteratur.
Hatano, G. and K. Inagaki (1986), ‘Two Courses of Expertise’, in H. Stevenson, H. Azuma and K. Hakuta (eds), Child Development and Education in Japan, 262–72, New York: Freeman.
Hurtado Albir, A., ed. (2017), Researching Translation Competence by PACTE Group, Amsterdam: John Benjamins.
ISO (2015), Translation Services: Requirements for Translation Services. ISO 17100:2015, Geneva: ISO.
Katan, D. (2013), ‘Intercultural Mediation’, in Y. Gambier and L. Van Doorslaer (eds), The Handbook of Translation Studies, 84–91, Amsterdam: John Benjamins.
Kiraly, D. (1995), Pathways to Translation. Pedagogy and Process, Kent, OH: Kent State University Press.
Lambert, J. (2018), ‘How Ethical Are Codes of Ethics? Using Illusions of Neutrality to Sell Translations’, Journal of Specialised Translation, 30: 269–90.
Lambourne, A. (2006), ‘Subtitle Respeaking. A New Skill for a New Age’, InTRAlinea. Online Translation Journal. Special Issue: Respeaking. Available online: http://www.intralinea.org/specials/article/Subtitle_respeaking (accessed 20 September 2018).
McDonough Dolmaya, J. (2011), ‘Moral Ambiguity: Some Shortcomings of Professional Codes of Ethics for Translators’, Journal of Specialised Translation, 15: 28–49.
Nielsen, J. (2012), ‘Usability 101: Introduction to Usability’, Nielsen Norman Group. Available online: https://www.nngroup.com/articles/usability-101-introduction-to-usability/ (accessed 31 December 2018).
O’Brien, S. (2012), ‘Translation as Human-Computer Interaction’, Translation Spaces, 1: 101–22.
Orrego-Carmona, D. (2016), ‘A Reception Study on Non-professional Subtitling: Do Audiences Notice Any Difference?’, Across Languages and Cultures, 17 (2): 163–81.
PACTE (2003), ‘Building a Translation Competence Model’, in F. Alves (ed.), Triangulating Translation: Perspectives in Process Oriented Research, 43–66, Amsterdam: John Benjamins.
Paletz, S., K. Kim, C. Schunn, I. Tollinger and A. Vera (2013), ‘Reuse and Recycle: The Development of Adaptive Expertise, Routine Expertise, and Novelty in a Large Research Team’, Applied Cognitive Psychology, 27 (4): 415–28.
Romero-Fresco, P. (2011), Subtitling through Speech Recognition: Respeaking, London: Routledge.
Schäffner, C. (2012), ‘Rethinking Transediting’, Meta, 57 (4): 865–83.
Seidlhofer, B. (2011), Understanding English as a Lingua Franca, Oxford: Oxford University Press.
Spinzi, C., A. Rizzo and M. L. Zummo, eds (2018), Translation or Transcreation? Discourses, Texts and Visuals, Newcastle upon Tyne: Cambridge Scholars Publishing.
Suojanen, T., K. Koskinen and T. Tuominen (2015), User-Centered Translation. Translation Practices Explained, London: Routledge.
Thelen, M. (2019), ‘Quality and Quality Assessment: Paradigms in Perspective’, in E. Huertas Barros, S. Vandepitte and E. Iglesias Fernández (eds), Quality Assurance and Assessment Practices in Translation and Interpreting. Advances in Linguistics and Communication Studies Series, 1–25, Hershey: IGI Global.
Way, A. (2018), ‘Quality Expectations of Machine Translation’, in J. Moorkens, S. Castilho, F. Gaspari and S. Doherty (eds), Translation Quality Assessment: From Principles to Practice, 159–78, Berlin: Springer.

Index

accessibility  5, 9–12, 17, 30, 79, 222–3, 231–47, 388
accuracy  236, 242, 266, 291, 299, 304, 318, 323, 338, 396–7
AD, see audio description
adaptive expertise  158–61, 163–7, 172, 387
adaptive experts  2, 159, 165
adaptive translator  164
adequacy  186, 318, 338, 376, 398
agency  6–7, 37, 41, 80, 91, 93–4, 102, 106, 124
agency translators  75, 368
APE, see automatic post-editing
assessment  4, 18, 94, 117, 155, 162, 168–71, 179–80, 184–5, 189, 196, 304, 306, 375–7, 395
assessment metrics  8, 155, 161–2, 320
audio description  9, 79, 81, 210–11, 232–3, 240–1, 245, 388
audiovisual media  209, 211, 213, 216, 218
audiovisual media accessibility  9, 10, 231–46
audiovisual translation  9, 16, 65, 79, 143, 209–25, 231, 239–40, 243–6, 388, 394
automatic evaluation  305, 317–18
automatic evaluation metrics  312, 319, 325, 343, 387
automatic PE, see automatic post-editing
automatic post-editing  318, 334–5
automatic term extraction  10, 268, 273, 277–8
automatic term extractor, see automatic term extraction
AVT, see audiovisual translation
barrier-free  5, 238, 388
barrier-less, see barrier-free
CAT tool  ix, 3, 9, 48, 67, 80–1, 147, 225, 312, 346, 349, 362, 371, 374, 388, 391
code of professional ethics  104
codes of conduct  6, 65–6
codes of ethics  7, 68, 91, 95–6, 102, 104, 128
codes of practice  396
cognitive effort  8, 12, 189, 223, 240, 340, 342, 347–8, 377–8
cognitive ergonomics  20, 42, 390–1
cognitive load  91, 109, 305, 368, 391
community interpreting  7, 49, 73, 91–4, 97, 99–109, 119, 128, 168
community translation  73, 289, 291–2
competence  6–9, 47, 68–70, 72, 75–6, 79, 82, 91, 95–7, 99, 102, 106, 109, 128, 141, 144, 149, 155, 163, 166, 182, 184, 187, 189–91, 196, 242, 262, 389, 396
competence model  6, 9, 68–70, 119, 126
computer-aided translation  269, 342, 361, 388
controlled authoring  265, 269, 274, 278, 290, 292, 342, 398
controlled language  292, 333–4, 345, 348, 365, 389
creativity  46, 161, 363
crowd-based interactive MT  272, 366, 368, 373
curriculum  70, 125–6, 129, 179–80, 182, 219
data ownership  70, 301
data security  28, 70
decision-making  39, 46, 70, 74, 95–7, 102, 191, 267, 277, 390
decoder  316–17, 323, 372, 394
deep learning  63, 304, 323–5, 394
deliberate practice  8, 154, 156–8, 160–1, 167, 170–6, 387, 390, 397
dubbing  9, 18, 210–11, 213–15, 218, 220, 222–3, 234–5, 263, 391
ELF, see English as a Lingua Franca
embeddedness  40, 273
encoder  316–17, 323, 394
English as a Lingua Franca  7, 93, 106–7, 390–1
ergonomics  6, 19–20, 38, 41, 48, 53, 78, 80, 189, 195–6, 390–1
error coding  376
ethical codes  70, 95
ethical guidelines  92
ethical norms  109
ethical standards  102, 396
ethics  7, 25–6, 67, 72, 91–2, 95–6, 100, 102, 104, 108, 118, 128, 396
ethnographic research  22, 24, 43, 147–8
ethnography, see ethnographic research
evaluation metrics  11, 312, 319, 325, 343, 387
expertise  8, 20, 47, 64–5, 75, 79, 96–7, 117–18, 126, 128, 144, 153–74, 187, 189–91, 193, 245–6, 271, 374, 387, 390, 396–7
extended cognition  39–40, 189
eye tracking  23, 74, 143, 148, 188–9, 222–4, 243, 338–40, 395, 400
faithful, see faithfulness
faithfulness  66, 70–1, 73, 100
fandubbing  9, 213, 223, 391
fansubbing  9, 188, 213, 223, 291, 391
feedback  11, 20, 98, 143–4, 157–8, 162, 165–7, 172–3, 271, 274, 323, 337, 363, 366, 369–70, 372, 379, 390, 394, 396, 400
fidelity  66, 100, 318, 347, 396
freelance  6, 45–6, 48, 64, 66–7, 75, 102, 117, 128, 154, 162, 166–7, 172, 174, 194, 261, 286, 345, 362
good practice  76, 78, 93, 109
guidelines  5, 42, 65–6, 92, 104, 109, 117, 130, 195, 217, 244–5, 275–6, 333–7, 342–3, 345–6, 348
HCI, see human-computer interaction
human-computer interaction  19, 30, 42, 274, 366, 372, 391
human evaluation  311–12, 317–18, 337, 349, 376, 388
IMT, see interactive machine translation
interactive machine translation  11–12, 363–70, 373–4, 378
interactive MT, see interactive machine translation
intercultural mediation  66, 391–2
international English  106
interpreter training  92, 150
ITI code  65, 68
journalation  8, 141, 392
language broker  116, 121, 392
language brokering  121, 129–30, 392
language service provider  ix, 1–2, 8, 16, 27, 37, 77–8, 139, 141–3, 147–50, 162, 165, 169, 172, 174, 193, 262–3, 286–7, 300, 305, 319–20, 338, 343–5, 372–3, 393
language service provision  7–8, 45, 118, 131, 147, 184, 393, 399
language technology  76, 145–6, 185, 391
lifecycle  1, 4, 15, 18, 396, 400
localization  ix, 1, 10, 15–17, 27, 30, 63–4, 76–7, 81, 107, 143–4, 154, 160, 169, 183, 188, 194, 209, 213, 263, 271, 274, 277, 286–95, 306, 333, 336, 338, 343–5, 393, 399
localized content  277
localizer  2, 154, 290
loyalty  71, 94, 98, 291
LSP, see language service provider
machine learning  11, 27, 303–5, 307, 321–2, 375–6, 393–4
machine translation  ix, x, 1, 3, 4, 8, 11, 15, 16, 20, 27, 48, 118, 139, 141, 143, 221, 245, 274, 278, 285, 293, 311–26, 333, 361–3, 366, 375, 377–8, 387, 394, 399
mediation  3, 39, 46, 66, 105, 118, 121, 125, 131, 166, 391–2
mentoring  160, 162, 183, 194
ML, see machine learning
MT, see machine translation
multimodal, see multimodality
multimodality  80, 187, 196, 214–15, 218, 240, 245, 393–4
multimodal translation  79, 209, 394
neural machine translation  x, 3, 11, 63, 80, 289, 304, 307, 311–13, 316–21, 323–5, 344–5, 348–9, 363, 369, 372, 375, 394
neural MT, see neural machine translation
NMT, see neural machine translation
non-professional  ix, 2, 5, 9, 115–31, 223–4, 291, 391–2
non-professional interpreting and translating  ix, 7, 115–31, 392
NPIT, see non-professional interpreting and translating
observational methods  6, 22, 43–4, 395
organizational ergonomics  6, 40, 390
participant-oriented  23, 26
PE, see post-editing
performance assessment  7, 8, 153–7, 169–71
physical ergonomics  6, 38, 390
post-editing  x, 3–4, 11, 48, 70, 76, 80, 117, 128, 154, 161, 188–9, 195, 221, 293–5, 302, 317–18, 320–1, 333–49, 362–3, 365–7, 370, 373–4, 376–8, 391, 399
post-editor  2, 154, 335, 337, 339, 341–4, 346–8, 363, 373, 377
pre-editing  11, 70, 333–49, 362, 365–6
pre-editor  334
process-oriented  16, 22–3, 26, 94, 155, 185, 188, 195, 274, 395–6
process-oriented training  9
production cycle  143, 149
product-oriented  23, 145, 185, 188, 223, 395, 399
professional competence  65, 68, 72, 141, 191
professional ethics  100, 104, 396
professional standards  92, 104
project management  1, 4, 15, 19, 39–40, 107, 142, 147, 154, 165, 172, 218, 291–2, 294, 299, 393
project manager  1, 46–8, 53, 64, 67, 71, 79, 108, 145, 147, 154, 165, 173, 287, 303, 396
public-service interpreting  46, 73, 117, 193
QA, see quality assessment
quality assessment  140, 179, 184–5, 196, 220, 242, 274, 295, 299, 319–20, 375, 400
quality assurance  1–2, 4, 18, 66, 71, 274, 290, 295
quality evaluation  194, 303, 338
RBMT, see rule-based machine translation
reception  9, 73, 79, 142–3, 217, 221–5, 240–3, 245, 277, 365, 391
research methodology, see research methods
research methods  6, 16, 18, 22–3, 27, 30, 42, 54, 82, 125, 333, 339, 395
respeaker  213, 236, 395
respeaking  5, 212–13, 220, 236, 242, 397
routine  37, 42, 50, 68, 147, 158–62, 164–6, 171, 223, 295, 390, 397
rule-based IMT, see interactive machine translation
rule-based machine translation  11, 302, 313–14, 316, 362, 366–7, 369, 397–8
rule-based MT, see rule-based machine translation
S2S, see speech to speech
self-concept  107–8, 179–80, 186, 398
situated cognition  93–4, 108, 219
SMT, see statistical machine translation
sociological approach  37, 40
speech-to-speech  20, 27, 63, 295–6, 307
staff  6, 11, 45, 75, 77, 115, 117, 128, 162, 192–3, 297–8, 301, 321–2
standards of good practice  7, 92, 96, 104
statistical machine translation  221, 270, 295, 304, 311–13, 315–21, 323, 325, 348–9, 362–3, 367–9, 371–3, 375, 394, 398
statistical MT, see statistical machine translation
student  x, 2, 8, 20, 51–3, 68, 77–8, 80, 82, 125–7, 148, 180–4, 189, 191–4, 262, 311
sub-competence  68, 190–1, 389
subtitler  212, 219–20, 235, 243, 291
subtitles  11, 65, 79, 211–13, 217, 219–24, 232–6, 240, 242–3, 247, 290–1, 388, 397
subtitling  9, 18, 65, 79, 212–13, 215, 218–24, 232, 235, 240–2, 263, 289, 291, 388, 391
team adaptive expertise  160
termbase  261, 263, 268, 272–7, 296–7, 371
terminology management  ix, 4, 10, 15, 261–78, 296
terminology management system  66, 263
TM, see translation memory
TMS, see translation management system
transcreation  8, 63–4, 76, 81, 141, 188, 399
transcreator  2, 75, 81, 108
translation agency  45, 48, 51, 53, 117, 139, 148, 303, 368
translation competence  68–9, 76, 155, 161, 179, 183–4, 196, 262, 389, 396, 398
translation cycle  342, 344, 399
translation editing  366, 371
translation effort  306, 343, 364, 374, 377
translation management system  290, 292, 297–8, 301
translation memory  4, 39, 66–7, 80, 117, 140, 263, 267, 273–4, 278, 286–7, 290, 293–6, 298–302, 306, 312, 320, 334, 343, 346–7, 362, 365–6, 371, 373, 388–9, 396, 398
translation process  6, 17–18, 20, 27, 47, 51, 67, 73–4, 80, 139, 145, 185–6, 188–9, 219, 264, 299, 304, 334, 339–41, 345, 362, 364, 366–7, 369–70, 378, 388, 396, 399
translation services  8, 22, 45, 79, 117, 139–50, 186, 286, 305, 324, 372, 398
translation technology  10, 12, 25, 63, 149, 157, 163–4, 169, 285–308, 361–80, 396
translation tool  48, 67, 76, 286, 342, 363, 373–4
translation trainee  191
translation trainer  144
translator competence  68–70, 179–80, 183–4
translator education  8, 49, 81, 179–82, 184, 187–8, 191, 197, 262
translator training  8–9, 52, 70, 76, 146, 149, 179–80, 182–3, 186–92, 196–7, 267
UCT, see user-centred translation
usability  4, 20, 48, 80, 143–4, 146–8, 340, 343, 348, 399–400
user-centred translation  139, 143–4, 146–9
user experience  10, 143, 145, 149, 218, 274, 391
users  4, 11, 104, 108, 116, 118, 128, 131, 139–50, 184, 220, 222, 233, 237–8, 240, 242–3, 246, 270–1, 274, 277, 288–9, 292–4, 297, 299–301, 305, 322, 335, 338–9, 342, 344–5, 347–9, 370, 390, 393, 399–400
visibility  ix, 4, 10, 23, 30, 49, 75, 91, 94, 99, 103, 189, 214, 273
workflow  3–4, 6–7, 12, 18–19, 66, 78–80, 149, 157, 161, 167, 216, 218–19, 245, 274, 287, 290, 292–3, 298, 320, 333, 338, 342–3, 349, 362, 373, 396, 399
workload  40, 49, 292, 299, 364, 377
workplace  6–7, 10–11, 37–54, 73, 76, 78, 80–2, 120–1, 125, 127, 129, 158, 166, 170–1, 173, 189, 194–6, 219, 272, 390–2, 395