Writing Futures: Collaborative, Algorithmic, Autonomous [1 ed.] 9783030709273, 9783030709280

Provides a future-driven framework for investigating and planning for the social, digital literacy, and civic implications of collaborative, algorithmic, and autonomous writing futures.


English Pages [173] Year 2021


Table of contents:
Acknowledgments
Contents
About the Authors
List of Figures
List of Tables
1 Writing Futures Framework
1.1 Introduction
1.2 Integration with Fabric of Digital Life
1.3 Tracing the Future of Writing
1.4 Past Studies, Future Speculation
1.5 The Writing Futures Framework
1.6 Overview of Chapters
References
Intertext—The Future of Writing and Rhetoric: Pitch by Pitch by Scott Sundvall, The University of Memphis
References
2 Collaborative Writing Futures
2.1 How Will Writers Collaborate?
2.1.1 Foundational Scholarship on Collaboration
2.1.2 Socio-technological Construction of Knowledge
2.1.3 Collaborative Workspaces
2.2 What Digital Literacies Will Writers Need to Enable Constructive, Collaborative Work with Nonhuman Agents?
2.3 What Civic Challenges Demand Collaborative, Constructive Social Action Through and with Nonhuman Agents?
References
Intertext—Writing Machines and Rhetoric by Heidi McKee and James Porter, Miami University
References
3 Algorithmic Writing Futures
3.1 How Will Algorithms and AI Inform Writing?
3.1.1 Understanding Algorithms
3.1.2 Platform Studies
3.1.3 Demographics
3.1.4 Algorithmic AI
3.2 What AI Literacies Should We Cultivate for Algorithmic Writing Futures?
3.2.1 Learning Analytics and Learning Management Systems
3.3 How Might AI Help to Recognize, Ameliorate, and Address Global Civic Challenges?
3.3.1 Writing for Ethically Aligned Design, Moving from Principles to Practice
References
Intertext—Recoding Relationships by Jennifer Keating, University of Pittsburgh, and Illah Reza Nourbakhsh, Carnegie Mellon University
References
4 Autonomous Writing Futures
4.1 How Will Writers Work with Autonomous Agents?
4.1.1 The Rise of Virtual Assistants
4.1.2 AI Writing
4.1.3 Creative AI
4.2 How Will Literacy Practices Change with Use of Autonomous Agents?
4.3 What Affordances of Autonomous Agents Lend Themselves to More Ethical, Personal, Professional, Global, and Pedagogical Deployments?
4.3.1 Fairness and Non-discrimination
4.3.2 AI Explainability and Transparency
References
5 Writing Futures: Investigations
5.1 Imagining the Future
5.1.1 Trust and Technological Leadership
5.2 Methods/Methodologies/Approaches for Investigating and Planning for Writing Futures
5.3 Academic, Industry, and Civic Investigations
5.3.1 Academic Realm
5.3.2 Industry Realm
5.3.3 Civic Realm
5.4 Imagining Writing Futures
References
Appendix A: Course Syllabus for a Graduate-Level Course, Writing Futures—Collaborative, Algorithmic, Autonomous
Writing Futures: Collaborative, Algorithmic, Autonomous
Course Description, Outcomes, Framework
Learning Outcomes—Writing Futures Framework
Assignments
Course Schedule and Readings
Module One—Weeks 1 & 2—Writing Futures; Emerging Technologies; Rhetorical Speculations
Module Two—Weeks 3–5—Collaboration/Digital Literacy
Module Three—Weeks 6–8—Algorithms/Artificial Intelligence/Ethics
Module Four—Weeks 9–11—Autonomous Agents/Ethics
Module Five—Weeks 12–13—Investigations
Module Six—Weeks 14–15—Integrating Visions for the Future
Appendix B: Complete List of General Keywords in Writing Futures Collection with Counts


Studies in Computational Intelligence 969

Ann Hill Duin · Isabel Pedersen

Writing Futures: Collaborative, Algorithmic, Autonomous

Studies in Computational Intelligence Volume 969

Series Editor Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland

The series “Studies in Computational Intelligence” (SCI) publishes new developments and advances in the various areas of computational intelligence—quickly and with a high quality. The intent is to cover the theory, applications, and design methods of computational intelligence, as embedded in the fields of engineering, computer science, physics and life sciences, as well as the methodologies behind them. The series contains monographs, lecture notes and edited volumes in computational intelligence spanning the areas of neural networks, connectionist systems, genetic algorithms, evolutionary computation, artificial intelligence, cellular automata, self-organizing systems, soft computing, fuzzy systems, and hybrid intelligent systems. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution, which enable both wide and rapid dissemination of research output. Indexed by SCOPUS, DBLP, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science.

More information about this series at http://www.springer.com/series/7092

Ann Hill Duin · Isabel Pedersen

Writing Futures: Collaborative, Algorithmic, Autonomous

Ann Hill Duin Department of Writing Studies University of Minnesota Minneapolis, MN, USA

Isabel Pedersen Faculty of Social Science and Humanities Ontario Tech University Oshawa, ON, Canada

ISSN 1860-949X ISSN 1860-9503 (electronic)
Studies in Computational Intelligence
ISBN 978-3-030-70927-3 ISBN 978-3-030-70928-0 (eBook)
https://doi.org/10.1007/978-3-030-70928-0

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Acknowledgments

The development of Writing Futures was never a solitary experience. We were lucky enough to be surrounded (virtually and physically) by colleagues, students, family, and friends who constantly inspired us over the seasons it took to create it. We are grateful for the work of Sharon Caldwell, whose expertise, creativity, and steadfast attention to detail helped us build our archived collection and integrate it with book chapters in truly skillful ways. We owe our gracious thanks to Molly M. Kessler, Lee-Ann Kastman Breuch, and Laura Gurak, who each selflessly helped in very important ways. We thank James Harbeck for his insightful and fastidious copyedit. Our Springer editors and production team, Thomas Ditzinger, Gowrishankar Ayyasamy, and Sylvia Schneider, were immensely helpful while we wrote and prepared the manuscript. We thank our blind reviewers for their time, energy, and direction on the book. Developer Seth Kaufman from Whirl-i-Gig Inc. made some cliffhanger deadlines for much-needed changes to the Fabric database in light of the book's design. We are honored by the collaboration of the Intertext authors, Heidi A. McKee and James E. Porter, Jennifer Keating and Illah Nourbakhsh, and Scott Sundvall, who probed and provoked the book's themes in myriad novel ways. It is stronger with their voices. We extend our thanks to our academic collaborators and friends, whose wisdom, camaraderie, and scholarship exhilarated us, including Jason Tham, Daniel Hocutt, Andrew Iliadis, Tom Everrett, Peter Turk, Tanner Mirrlees, Jayden Cooper, and Jack Adam Narine. We acknowledge the important role of our students: the scholars in Ann Hill Duin's graduate class at the University of Minnesota, Mikayla Davis, Sean Golden, Brian Le Lay, Shane Rose, and Rira Zamani, were willing to share opinions, raise debates, and question the future landscape of writing studies through their distinct scholarly lenses.
The students of Isabel Pedersen's computer science graduate course at Ontario Tech University, Global AI Ethics, brought their own insights to bear on many debated topics surrounding the emergence of AI during the writing of the book. Another group that influenced our thinking and writing was the Building Digital Literacy team, including graduate students Katlynne Davis, Saveena (Chakrika) Veeramoothoo, and Danielle Stambler; scholars and professors Jason Tham, Daniel Hocutt, Jessica Campbell, Stephen Fonash, Laura Gonzales, John Misak, and Nupoor Ranade, who integrated Fabric of Digital Life in coursework over three phases of an eighteen-month study. These engaging scholars from diverse backgrounds (composition, rhetoric, technical communication, education) met regularly, shared instructional materials, and wrote research papers together. We are glad to have collaborated in this ongoing dialog over the concept of digital literacy.

We thank all the futurists who inspire us daily, and we both marvel and shudder at the speed with which their predictions become reality in collaborative, algorithmic, and autonomous writing futures. We thank our families for allowing us the time and space to muse over writing futures and to aspire to create a framework for preparing for writing futures in advance of major technological transformations. We thank each other for a truly remarkable collaborative experience. Last, we acknowledge that this research was undertaken, in part, thanks to funding from the Canada Research Chairs program and from the US Council for Programs in Technical and Scientific Communication.

Contents

1 Writing Futures Framework .......... 1
  1.1 Introduction .......... 1
  1.2 Integration with Fabric of Digital Life .......... 2
  1.3 Tracing the Future of Writing .......... 5
  1.4 Past Studies, Future Speculation .......... 10
  1.5 The Writing Futures Framework .......... 13
  1.6 Overview of Chapters .......... 15
  References .......... 20
Intertext—The Future of Writing and Rhetoric: Pitch by Pitch by Scott Sundvall, The University of Memphis .......... 23

2 Collaborative Writing Futures .......... 27
  2.1 How Will Writers Collaborate? .......... 27
    2.1.1 Foundational Scholarship on Collaboration .......... 29
    2.1.2 Socio-technological Construction of Knowledge .......... 30
    2.1.3 Collaborative Workspaces .......... 35
  2.2 What Digital Literacies Will Writers Need to Enable Constructive, Collaborative Work with Nonhuman Agents? .......... 37
  2.3 What Civic Challenges Demand Collaborative, Constructive Social Action Through and with Nonhuman Agents? .......... 41
  References .......... 43
Intertext—Writing Machines and Rhetoric by Heidi McKee and James Porter, Miami University .......... 47

3 Algorithmic Writing Futures .......... 53
  3.1 How Will Algorithms and AI Inform Writing? .......... 53
    3.1.1 Understanding Algorithms .......... 54
    3.1.2 Platform Studies .......... 57
    3.1.3 Demographics .......... 59
    3.1.4 Algorithmic AI .......... 60
  3.2 What AI Literacies Should We Cultivate for Algorithmic Writing Futures? .......... 62
    3.2.1 Learning Analytics and Learning Management Systems .......... 64
  3.3 How Might AI Help to Recognize, Ameliorate, and Address Global Civic Challenges? .......... 71
    3.3.1 Writing for Ethically Aligned Design, Moving from Principles to Practice .......... 73
  References .......... 76
Intertext—Recoding Relationships by Jennifer Keating, University of Pittsburgh, and Illah Reza Nourbakhsh, Carnegie Mellon University .......... 81

4 Autonomous Writing Futures .......... 85
  4.1 How Will Writers Work with Autonomous Agents? .......... 85
    4.1.1 The Rise of Virtual Assistants .......... 86
    4.1.2 AI Writing .......... 91
    4.1.3 Creative AI .......... 96
  4.2 How Will Literacy Practices Change with Use of Autonomous Agents? .......... 98
  4.3 What Affordances of Autonomous Agents Lend Themselves to More Ethical, Personal, Professional, Global, and Pedagogical Deployments? .......... 100
    4.3.1 Fairness and Non-discrimination .......... 101
    4.3.2 AI Explainability and Transparency .......... 102
  References .......... 103

5 Writing Futures: Investigations .......... 109
  5.1 Imagining the Future .......... 109
    5.1.1 Trust and Technological Leadership .......... 112
  5.2 Methods/Methodologies/Approaches for Investigating and Planning for Writing Futures .......... 116
  5.3 Academic, Industry, and Civic Investigations .......... 123
    5.3.1 Academic Realm .......... 124
    5.3.2 Industry Realm .......... 127
    5.3.3 Civic Realm .......... 130
  5.4 Imagining Writing Futures .......... 134
  References .......... 136

Appendix A: Course Syllabus for a Graduate-Level Course, Writing Futures—Collaborative, Algorithmic, Autonomous .......... 141
Appendix B: Complete List of General Keywords in Writing Futures Collection with Counts .......... 155

About the Authors

Dr. Ann Hill Duin is Professor of Writing Studies and Graduate-Professional Distinguished Teaching Professor at the University of Minnesota where her research and teaching focus on the impact of emerging technologies on digital literacy, analytics, collaboration, and writing futures. She is a founding member of the Emerging Technologies Research Collaboratory at the University of Minnesota. She served 15 years in higher education administrative roles including Vice Provost and Associate Vice President for Information Technology. Her recent scholarship appears in Computers and Composition, Communication Design Quarterly, IEEE Transactions on Professional Communication, Technical Communication Quarterly, Planning in Higher Education, Rhetoric and Professional Communication and Globalization, International Journal of Sociotechnology Knowledge Development, and Connexions: International Professional Communication. Her international collaboration includes research cluster leadership in the Digital Life Institute at Ontario Tech University and ongoing mentorship of global virtual teams as part of Trans-Atlantic Pacific Partnership initiatives.

Dr. Isabel Pedersen is Professor of Communication Studies and Canada Research Chair in Digital Life, Media, and Culture at Ontario Tech University in the Faculty of Social Science and Humanities. She is the Founder and Director of the Digital Life Institute, which hosts international research partnerships. Her research concentrates on how societal transformation to very personal, embodied technology is affecting life, meaning-making, ethics, policy, politics, culture, and the arts. She teaches in the graduate program in Computer Science. She is also an Associate of the Joint Graduate Program in Communication and Culture at Ryerson University and York University. She is Co-editor of Embodied Computing: Wearables, Implantables, Embeddables, Ingestibles (Pedersen and Iliadis 2020, MIT Press).
She is published in academic journals including the Journal of Information, Communication and Ethics in Society, International Journal of Cultural Studies, and the Journal on Computing and Cultural Heritage.


List of Figures

Fig. 1.1 Fabric of digital life's timeline feature (Image permission: Isabel Pedersen) .......... 4
Fig. 1.2 Use of AI Writer to generate text using the prompt, writing futures (Image permission: AI-Writer.com) .......... 7
Fig. 1.3 Genius team in the future (Image permission: iStock.com/Devrimb) .......... 9
Fig. 1.4 Venn diagram of the Writing Futures framework .......... 13
Fig. 2.1 Deploying Glass in a technical communication course. The design of Glass disrupts human–human collaboration (Photo permission: Wearables Research Collaboratory, University of Minnesota) .......... 32
Fig. 2.2 Yuxi Liu's video of her exploration through Five Machines (Photo permission: Yuxi Liu) .......... 33
Fig. 2.3 Collaborative workspace created with the use of Spatial (Photo permission: Spatial Systems, Inc.) .......... 36
Fig. 2.4 Sophia, a humanoid robot with AI capabilities (Image permission: ITU Pictures, CC by 2.0) .......... 37
Fig. 2.5 JISC digital literacy capabilities (Image permission: JISC) .......... 39
Fig. 3.1 LMS system use from 1997 to 2019 (Hill, 2020) (Image permission: Creative Commons Attribution 4.0 International License) .......... 66
Fig. 3.2 A schematic of the CLA toolkit design (Image permission: Creative Commons Attribution-ShareAlike 4.0 International License) .......... 69
Fig. 4.1 Quantum Capture marketing video of Rachel, Virtual Hotel Concierge application (Photo permission: Quantum Capture) .......... 89
Fig. 4.2 Functional AI applications related to NLP .......... 92
Fig. 4.3 Screenshot of AI Writer website (Photo permission: AI-Writer.com) .......... 94
Fig. 4.4 Pepper from SoftBank Robotics Europe (2020) (CC BY-SA 4.0) .......... 100
Fig. 5.1 Film still from Oscillator Media's Automatic on the Road, directed by Lewis Rapkin (2018) with cinematography by David Smoler (Image permission: Lewis Rapkin). https://www.imdb.com/title/tt8240088/ .......... 113
Fig. 5.2 Themes across the Writing Futures framework .......... 122
Fig. 5.3 Screen captures of students' archival process, from (A) initial sorting of entries and metadata to (B & C) composing a front-end "collection" view on the Fabric website (Image permission: Jason Tham) .......... 125
Fig. 5.4 Powers and Cardello (2018) overview of machine learning methods (Redistribution allowed with attribution) .......... 128
Fig. A.1 Venn diagram of the Writing Futures framework .......... 144

List of Tables

Table 1.1 Writing Futures framework for investigating and planning for writing futures .......... 14
Table 2.1 Writing Futures framework, collaborative writing futures .......... 28
Table 3.1 Writing Futures framework, algorithmic writing futures .......... 54
Table 4.1 Writing Futures framework, autonomous writing futures .......... 86
Table 5.1 Walsh's (2019) ten core principles for algorithmic leadership .......... 115
Table 5.2 Methods/methodologies/approaches used in study of collaboration, algorithms, and autonomous agents [references included with respective chapters] .......... 117
Table 5.3 Categories and subcategories of the Human–AI Collaboration Framework, Partnership on AI (2019) ["Differently-abled" changed to "Disability"] .......... 123
Table 5.4 Summary of six algorithms as discussed by Powers and Cardello (2018) .......... 129
Table A.1 Components of the Writing Futures Framework .......... 145

Chapter 1

Writing Futures Framework

1.1 Introduction

The rise in the use of nonhuman agents and artificial intelligence (AI) is disrupting all fields and professions. One of the greatest challenges facing professional and technical communication (PTC) scholars and instructors is a reticence to prepare for writing futures in advance of these major technological transformations. While the world is clamoring to identify the agent or go-between for difficult explanations of speculative technology proposed for society, our field often chooses to wait until after such technologies have been deployed. The ethical dilemma caused by this delay is ignorance at the point of emergence, which means that stakeholders (and the public in general) often cannot properly assess technologies of great impact at the appropriate time.

This book serves as a guide for preparing in advance for major technological transformations. We provide a framework for professional and technical communication scholars and instructors to investigate and plan for the social, digital literacy, and civic implications of collaborative, algorithmic, and autonomous writing futures:

• collaborative, including examination of human–human and human–device work critical to writing futures;
• algorithmic, including exploration of learning management systems and artificial intelligence and the impact of datafication on writing; and
• autonomous, including understanding and deployment of autonomous agents, i.e., technologies capable of operating without direct human control.

While our main audience is professional and technical communication scholars and instructors, given the demand for interdisciplinary academic books from fields adjacent to engineering, this book will also be of critical use to scholars across a broad range of disciplines, including composition, communication studies, human–computer interaction, computer science, artificial intelligence and automation studies, and organizational communication.
We also have designed this book for use by practitioners across these fields given the critical importance of greater understanding of collaboration, AI, and emerging technologies. Our goal is to provide readers with opportunities to understand and write alongside nonhuman agents, examine the impact of algorithms and AI on writing, accommodate the unique relationships with autonomous agents, and investigate and plan for writing futures.

1.2 Integration with Fabric of Digital Life Unique to this book is its integration with Fabric of Digital Life (https://fabricofd igitallife.com/), a database and structured content repository for conducting social and cultural analysis of emerging technologies and the social practices that surround them. Growing in content since 2013, Fabric of Digital Life provides a public, collaborative research site for analyzing overwhelming technological change and the social implications that arise as a result. Using a human-centric lens, it follows modes of technology invention over time through its corpus of videos, texts, and images (Duin et al., 2018; Iliadis & Pedersen, 2018; Pedersen & DuPont, 2017). Throughout each chapter of this text, readers can access more detail about each technology discussed by examining an associated thematic collection—the Writing Futures: Collaborative, Algorithmic, Autonomous collection—at Fabric of Digital Life. Thematic digital research collections are built to form “a contextual mass model [to] create a system of interrelated sources where different types of materials and different subjects work together to support deep and multifaceted inquiry in an area of research” (Palmer, 2004). Our goal is to display concrete examples of the social, digital literacy, and civic implications of specific technologies in Fabric of Digital Life so that readers can examine these technologies within key social contexts that are constantly evolving. As a research collection, it is open-ended and publicly available. It concentrates on emergent embodied technologies and research on specific platform categories: carryable, wearable, implantable, ingestible, embeddable, and robotical. It provides a metadata structure that guides archivists toward a human-centric orientation to technology emergence. To designate keywords, archivists ask questions from a human subject’s point of view. The starting point is, How are devices physically used? 
The location-on-the-body keyword system includes a hierarchy of fields related to bodies. At the time of writing, 1298 items relate to the head; subcategories include eye, face, ear, mouth, or brain. This set of keywords enables the tracking of all brain–computer devices over time; for example, 92 items involve wearable inventions in the category brain. As industry jargon changes, a location-on-the-body keyword provides a static category. Fabric archivists also probe social spheres to ask What kinds of activities will humans perform while using technology and what kinds of social practices will it encourage or even replace in future? Archivists query, How will humans be augmented and to what consequence? For example, rather than only archiving the presence of a technology feature (e.g., voice recognition on a smartwatch), the content ontology allows archivists a means to document human

1.2 Integration with Fabric of Digital Life

3

motivation for wearing technologies, such as communicating, informing, surveilling, policing, remembering, or socializing. Items are also included that represent related emergent technologies and nonhuman agents within ecosystems such as ambient interaction, artificial intelligence, smart homes, internet of things, or biotechnologies that often converge or interact with the core embodied technologies. It catalogs fictional portrayals that are often used in science discourse to explain human–computer interaction scenarios. The general keyword field captures broader subject areas such as education, health, or work. However, this architecture provides a means to explore socio-technical assemblages amid the more common frameworks, such as business and engineering sectors. It is also guided by rhetorical studies focus so that we can interpret emergence in persuasive trajectories. Motives might be stated overtly in video representations, or simply implied in video clips through a visual depiction. Fabric of Digital Life provides a way to reveal how technologies sometimes evolve or transform over time at cross-purposes from their original intent. The technology behind a fitness tracker can become a workplace employee tracker or a monitor for children or even a COVID-19 tracker, and the trace implications of those previous contexts are important. The advantage of Fabric of Digital Life for this book is threefold: • First, the Writing Futures: Collaborative, Algorithmic, Autonomous collection provides video examples for technologies we discuss in the book. When we mention a specific technology concept such as a virtual assistant, we point readers to actual video and other artifacts discussed in the book contextualized with examples (e.g., an advertisement that describes Siri, Apple’s virtual assistant; a news broadcast on Google Assistant being used in a professional field). • Second, we identify keywords so that readers can further explore important concepts. 
The book reenvisions terms such as professional communication, collaboration, and digital literacy, which are matched to the collection’s keyword metadata. Students and professionals can see how we contextualize key ideas with real-world examples and even groupings of concepts. See Appendix B to view the list of general keywords (approx. 300), one of the metadata categories that designates subject areas. • Third, we will continue to update content and metadata to keep this book current after publication. For example, if Human–Robot Interaction (HRI) undergoes significant new social robot innovations emerging after the book’s publication, we will continue to capture these artifacts in the Writing Futures: Collaborative, Algorithmic, Autonomous collection. COVID-19 has led to dramatic calls from states to adapt social, civic, and professional practices. Almost immediately, technological deployments served as solutions across the globe. This situation led us to create a COVID-19 Tech collection in Fabric of Digital Life to chart the unfolding event. We also noticed numerous collaborative work platforms that have become vital for worker interaction. We identified a surge in embodied technologies being used for tracking and policing the outbreak, monitoring the infected, or, in a few cases, predicting its onset. For instance, we include a video

that demonstrates a contact tracing app developed by COVID Watch, a nonprofit group of over 100 academics, public health experts, and technologists. Communicating the social practice as well as the technological functionality of contact tracing is contextualized in the collection. Written news, public broadcasts, government press releases, and social media circulation reflect international responses to the pandemic.

Collections in Fabric of Digital Life deliberately overlap due to the data ontology to provide a unique lens on interrelated, collocated material. For example, the COVID-19 crisis has revealed the frequency of people using the video chat service Zoom due to calls for mass social distancing. Representative artifacts about it are included in the COVID-19 Tech collection. However, primary artifacts describing Zoom had already been archived in the much larger Writing Futures: Collaborative, Algorithmic, Autonomous collection because the technology involves collaborative professional communication. Video conferences and platforms serve as an important means for international writing teams to communicate. The scholarly result is that the collocation of these two collections revitalizes both of them through the dialogic relationship established across the content. Therefore, our book helps readers respond to rapidly evolving technological and social contexts.

Prompt: Fabric of Digital Life has a timeline feature (see Fig. 1.1) that lets site users view items according to their date of publication or creation. Choose a keyword and explore how it is applied over time.

Fig. 1.1 Fabric of Digital Life’s timeline feature (Image permission: Isabel Pedersen)


1.3 Tracing the Future of Writing

What is the future of writing? In 2011, the University of Minnesota Press published an English translation of Czech philosopher Vilém Flusser’s 1987 Does Writing Have a Future? Flusser theorized about the impact of media on culture and writing and the future of machine automation in ways similar to Jean Baudrillard, Marshall McLuhan, and Paul Virilio of the last century. He predicted artificial intelligence that performs both thinking and decision-making:

Writing, in the sense of placing letters and other marks one after another, appears to have little or no future. Information is now more effectively transmitted by codes other than those of written signs. What was once written can now be conveyed more effectively on tapes, records, films, videotapes, videodisks, or computer disks, and a great deal that could not be written until now can be noted down in these new codes. . . . Only historians and other specialists will be obliged to learn reading and writing in the future. . . . One can leave writing, this ordering of signs to machines. I do not mean the sort of machines we already know, for they still require a human being who, by pressing keys arranged on a keyboard, orders textual signs into lines according to rules. I mean grammar machines, artificial intelligences that take care of this order on their own. Such machines fundamentally perform not only a grammatical but also a thinking function. (pp. 3–6)

Such a prediction assumes the replacement of human writing by information codes conveyed by machines. It imagines that writing performed by humans (semiosis) will be erased by the work of “artificial intelligences” that are given the agency to create, inform, remember, and even think. This prediction persists today.

Writing can be simply defined as process, as activity, as “the act or process of one who writes” (Merriam-Webster Dictionary). Such writing is a skill, a competency, required to date by all who wish to communicate personally and professionally. This activity of writing “can have knowledge-transforming effects, since it allows humans to externalize their thinking in forms that are easier to reflect on and potentially rework” (Writing, 2020). To date, these general resources assume a “human” writer.

Decades ago, while transitioning from an analog to a digital age, composition evolved from focusing on writing as a largely rule- and style-driven enterprise to seeing it as a process. Although tethered to desktops, we broadened our understanding of how and why writers make the choices they do during the writing process, claiming writing to be a nonlinear, goal-driven process that includes planning, translating, embedding, and reviewing. With the seminal work “A Cognitive Process Theory of Writing” (Flower & Hayes, 1981), we paid great attention to the writer’s long-term memory, writing as a process, and influence from the task environment (p. 370). While increasingly amazed by the inner workings of new hard drives and operating systems, we taught writing as an internal process complete with schema theory, mental models, and focus on the internal, and largely individual, mind. Then the earliest of networks and laptops arrived, and we soon found ourselves immersed in a multifunction—unconnected and then sometimes connected—world of wireless computing.
With the advent of the internet, our pedagogy evolved from reference to linear activities—planning, drafting, reviewing, editing—conducted

within a similar physical space to activities conducted within a cyberspace of online resources and shared documents. We spoke and taught about the social construction of writing in which writers construct new knowledge from their experiences and interactions with discourse communities. Basic assumptions included the need to examine any writing context amid the rapidly changing technologies that were redefining writing, thus framing a socio-technological direction for pedagogy (Duin & Hansen, 1996).

As technology and writing evolved, a traditional humanist approach to technology continued to draw a firm line between the human and the machine. However, as Porter (2009) notes, “this approach fails to appreciate the compelling power of virtual life and communication,” and the “more promising approach, articulated by Hayles (1999) and others, is the posthumanist approach to technology” beginning with Haraway’s (1991) notion of the cyborg: “a hybrid metaphor that challenges the human-machine distinctions and questions conventional body boundaries and notions of the writer as purely human. A posthumanist approach [now] explores cyborgian hybridity, the connectedness between human-machine.” According to Porter, “the machines that we use to write and speak are closely merged with our flesh-and-blood bodies” (p. 213); we now recreate our bodies in cyberspace. Hayles advances the theory to include “cognitive assemblages” and explains that “as these devices become smarter, more wired, and more capable of accessing information portals through the web, they bring about neurological changes in the mindbodies of users, forming flexible assemblages that constantly mutate as information is gathered, processed, communicated, stored, and used for additional learning” (Hayles, 2017, p. 119). And most recently, Pedersen (2020) emphasizes that “the idea of a networked body working autonomously through data assemblages seems less futuristic than before” (p. 39).
Pedersen’s focus on body networks illustrates how “bodies will participate in cooperative relationships with other human and nonhuman actors and digital infrastructures” (p. 25). The “firm line” has disappeared.

So, what are writing futures? If one asks the automated writing bot AI Writer (2020) to generate an article on this topic, as we did (see Fig. 1.2), a bot-generated text will appear that assumes the writer is interested in buying or writing a futures contract:

The option author will sell certain rights to the option buyer in the future, while the buyer and seller will not assume any obligations. The callers and put writers grant rights in exchange for a premium that buyers receive in advance. Calls grant buyers the right to buy the underlying forward contract at a fixed price or exercise price.

Clearly this is not the “writing futures” intended for examination in this book. However, for some time now, creative thinkers have been asking bots to examine text, video, audio, and art and then to generate new text, video, audio, and art. Rock songs written by AI bots are regularly ranked and rated (Beaumont, 2020), with some musicians generating full albums with their AI collaborators. Researchers created FlowMachines, capable of learning to mimic a band’s style from its entire database of

Fig. 1.2 Use of AI Writer to generate text using the prompt, writing futures (Image permission: AI-Writer.com)

songs, fed it the complete works of the Beatles, and fully AI-generated the “Daddy’s Car” song at this same site. Cizek et al. (2019), in discussing media co-creation with nonhuman systems, note that algorithmically derived art is a long-standing genre: “Roman Verostko and the Algorists were an early-1960s group of visual artists that designed algorithms that generated art, and later, in the 1980s, fractal art. More recently, the Google Deep Dream project reignited the public curiosity about AI-generated art and its psychedelic reproductions of patterns within patterns” (Human AI collaboration, 2020).

Examining how current bots adapt to dataspheres, interact with humans, and accrue writing skills helps us to prefigure writing futures. Microsoft’s XiaoIce chatbot, a Chinese-language conversational AI, converts images into Chinese poetry (Greene, 2018), and novelist Sigal Samuel (2019) describes her use of AI in writing her next novel, noting how use of GPT-2 (Generative Pre-trained Transformer, discussed in Chap. 4) startled her into “seeing things anew” as it “perfectly captured the emotionally and existentially strained tenor of the family’s home” as the novel unfolded in this human–machine collaboration. Writing about OpenAI’s publication of GPT-2, Vincent (2019) explains that GPT-2 is part of “a new breed of text-generation systems that have impressed experts with their ability to generate coherent text from minimal prompts. The system was trained on eight million text documents scraped from the web and responds to text snippets supplied by users. Feed it a fake headline, for example, and it will write a news story; give it the first line of a poem and it’ll supply a whole verse.”
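The prompt-continuation behavior Vincent describes can be illustrated in miniature with a far simpler statistical model than GPT-2’s neural network: a toy Markov chain that learns which word tends to follow which in a training text and then extends a prompt one word at a time. The sketch below (in Python, with an invented eighteen-word corpus) is purely our own illustration of statistical next-word prediction, not a rendering of GPT-2 itself:

```python
import random
from collections import defaultdict

def train(words, order=1):
    """Build a table mapping each n-gram to the words observed after it."""
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, prompt, length=10, seed=0, order=1):
    """Continue a prompt by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    out = list(prompt)
    for _ in range(length):
        candidates = model.get(tuple(out[-order:]))
        if not candidates:  # dead end: this n-gram never appeared in training
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = ("the future of writing is the future of machines "
          "and machines will write the future of writing").split()
model = train(corpus)
print(generate(model, ["the"], length=6, seed=42))
```

Trained on eight million documents rather than eighteen words, and predicting with a deep network rather than a lookup table, this same continue-the-prompt loop is what lets a system like GPT-2 turn a fake headline into a news story.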

Using GPT-2, EssaySoft AI generates academic essays along with “article spinners” to deal with potential plagiarism, with developers contending that “integrating GPT-2 into the education system can eliminate learning demands that the age of instant, digital information has rendered unnecessary and irrelevant.” For use by individuals or teams, Manuscript Writer by SciNote (2020) works to “empower the scientist” by pulling information from the scientist’s data in SciNote, keywords, and DOI numbers of open access references. Manuscript Writer then presents all this in the form of a manuscript, delivering the introduction, materials and methods, results, and references sections from which the writer(s) can begin generating further text.

And at the time of finalizing this chapter, articles describing GPT-3 began to proliferate, noting its ability to generate text in response to any input text and respond to questions or statements (Hu, 2020). GPT-3 is pretrained with 45 TB of text, totaling 499 billion words; it cost somewhere between 4.6 and 12 million USD to train; and it supposedly passes the Turing test—meaning it can fool humans into thinking that it is a human. As reported by Hu, GPT-3 can mimic writing styles of famous people (or anyone else), and it can go one more step to generate computer equations, queries, and applications. Hu writes that “since it is a black box, we cannot easily predict or control the text it generates” and “an unsupervised GPT-3 could generate text that is biased or hurtful.” We discuss GPT-3 in more detail in Chap. 4.

As an additional example, creative technologist Chris Duffey (2019b) compiled a tapestry of AI technologies to co-write with Aimé the book Superhuman Innovation (Duffey, 2019a).
Duffey explains these three systems:

AI voice recognition enables human-to-system interaction through a voice-user interface—more commonly known as a VUI—for tasks such as speech-to-text, text-to-speech, voice editing, formatting, spelling, and document sharing. AI content understanding and summarization reviews and abridges databases, articles, and research papers into digestible content through approaches such as sentiment analysis, labeling, and organization of higher-level concepts based on contextual understanding. And AI content creation and generation allows the system to develop concepts and ideas to aid in the writing process by using algorithms to emulate human writing, allowing the AI to contribute ideas, titles, content, and drafts.

Writing as a dialogue between Aimé and Duffey, Superhuman Innovation showcases “how AI can help achieve the seemingly impossible by using technology to solve problems that we couldn’t have imagined solving by ourselves.” Duffey shares this excerpt from Chapter 15, “Next-Gen Creativity: Improving the Human Experience”:

Chris: So, what do you think about the role of humanity if the most pressing problems have been resolved by AI?

Aimé: I have a few answers for you. Plato said the purpose of humanity is to obtain knowledge. Friedrich Nietzsche had a different take and said it is to obtain power. Ernest Becker thought the purpose is to escape death and Darwin thought it is to propagate our genes. On the other hand, the nihilists said there is no meaning, and Steven Pickard said the meaning is beyond our cognitive capabilities.

In this same timeframe, Springer Nature (2019) unveiled the first complete research book generated using machine learning (Beta Writer, 2019), Lithium-Ion Batteries:

A Machine-Generated Summary of Current Research. In this announcement, Niels Peter Thomas, Managing Director of Books at Springer Nature, shared this future direction:

Springer Nature is aiming at shaping the future of book publishing and reading. New technologies around Natural Language Processing and Artificial Intelligence offer promising opportunities for us to explore the generation of scientific content with the help of algorithms. As a global publisher, it is our responsibility to take potential implications and limitations of machine-generated content into consideration, and to provide a reasonable framework for this new type of content for the future.

As technologist Ross Goodwin, quoted in the book’s introduction, emphasizes, “When we teach computers to write, the computers don’t replace us any more than pianos replace pianists—in a certain way, they become our pens, and we become more than writers. We become writers of writers.” We would add that we become, even more, writers with writers (see Fig. 1.3).

Less “assistant” and more “collaborator,” consider also Samsung’s recent humanoid chatbots known as Neons (Matyus, 2020). Here the user comes face to face with “‘artificial humans’” that “are supposed to be more of a reflection of humans,” and as Neon CEO Pranav Mistry states, “there are millions of species on our planet, and we hope to add one more. . . . Neons will be our friends, collaborators, and companions, continually learning, evolving, and forming memories from their interactions.” Powered by CORE R3 technology (which stands for Reality, Realtime, and

Fig. 1.3 Genius team in the future (Image permission: iStock.com/Devrimb)

Responsive), these humanoids can “connect and learn more about us, gain new skills, and evolve.” Although they are currently seen mainly as “friendly customer service,” it is now imperative to consider and plan for writing futures that include AI writing assistants and AI collaborators.

Such a future is based on a dialogic approach toward collaboration that requires closer attention to emergent socio-technical assemblages that contextualize writing practices. Socio-technical assemblages automate aspects of creative production; they also increasingly will evolve from assistant to collaborator, from machine autonomy to human–machine cooperation and collaboration, from assemblages to collaboratives, from assistantship to synergy.

If one considers how AI solutions become ubiquitous in private and professional domains, soon, as independent information architect Hafez (2020) writes, “humans using AI will be relying on recommendations and actions from multiple smart machines to coordinate and manage their financial, professional, or health [and writing] objectives” (p. 981). Hafez introduces the concept of a human digital twin (HDT), a “human-specific smart machine dedicated to aligning human objectives with the smart machines supporting her” (p. 981). An HDT monitors a person’s human–AI space and, based on the person’s responses, works to ensure that the many systems supporting the person are in alignment. Consider here how an HDT might monitor your writing and communication across multiple contexts. Multi-domain scenarios in which we collaborate with multiple systems are increasingly integral parts of our lives and organizations. How might an HDT be an “active, human-specific and adaptive alignment” between these many contexts and our writing goals?
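Hafez describes the HDT conceptually rather than as a specification, but its alignment role can be made concrete in a few lines of illustrative code. Everything below (the class name, the sample objectives, and the (system, action, objectives served) triples) is our own hypothetical sketch of the idea, not Hafez’s design:

```python
class HumanDigitalTwin:
    """Toy model of an HDT: checks whether recommendations arriving from
    the smart machines supporting a person serve that person's objectives."""

    def __init__(self, objectives):
        self.objectives = set(objectives)

    def review(self, recommendations):
        """Each recommendation is a (system, action, objectives_served) triple."""
        aligned, misaligned = [], []
        for system, action, serves in recommendations:
            # A recommendation is aligned if it serves any stated objective.
            bucket = aligned if self.objectives & set(serves) else misaligned
            bucket.append((system, action))
        return aligned, misaligned

twin = HumanDigitalTwin({"clarity", "privacy"})
aligned, misaligned = twin.review([
    ("style-checker", "shorten sentences", {"clarity"}),
    ("ad-platform", "share draft analytics", {"engagement"}),
])
print(aligned)     # [('style-checker', 'shorten sentences')]
print(misaligned)  # [('ad-platform', 'share draft analytics')]
```

A real HDT would of course negotiate with the conflicting systems rather than merely sort their suggestions, but even this toy version shows the core move: a single human-specific agent standing between a person’s objectives and the many systems acting on their behalf.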
While an HDT system might evoke both excitement and terror, Writing Futures as an organizing concept places writing and technological evolution amid the most complex, multifaceted problems that face professional and technical communication, its many related disciplines and industries, and our local and global communities. Writing Futures connects emerging technologies with sociality, digital literacy, and civic engagement. Writing is itself a form of technology, and as the above examples indicate, writing is increasingly collaborative, algorithmic, and autonomous.

1.4 Past Studies, Future Speculation

Collections on the future of writing mainly consider the impact of online and digital technologies on publishing, journalism, and creative writing (Potts, 2014) and “explore modes of critical speculation into the transformative impact of emerging technologies,” positioning rhetoric and writing scholars “as proprietors of our technological future to come” (Sundvall & Weakland, 2019, p. 4). Like Potts, we agree that “nobody can predict the future with confidence and accuracy,” as “the present is already bewildering enough, characterized by rapid technological development and disruptive upheavals,” and “none of the old certainties—political, corporate, and economic—seems to hold.” The future—including the future of writing—can indeed “be thrown into doubt” (p. 6). We agree with Sundvall and Weakland’s conceptual

aim, “that the future ever arrives too soon,” and that while “we cannot keep apace with the rapid technological development,” we “must work with such a technological problematic” (pp. 5–6). Contributors to Potts’s collection focus on technological effects within the publishing industry, including the need for greater curation in determining searching, sorting, and ultimately reading; on possibilities that new technologies provide for creative expression; and on the impact of social media and “a tsunami of snapshots, alerts, briefs, tweets, shorts, summaries, and posts” as roles blur between newsmakers and news breakers.

As a means to “work with such a technological problematic,” contributors to Sundvall’s (2019) collection employ the method of speculative modeling, a strategy for “anticipatory, futural thinking . . . especially with regard to emergent technologies,” specifically, “thinking proactively, futurally about, and in anticipation of, how rhetoric and writing might appropriate emergent technologies before they have already after-the-fact arrived” (p. 6). This model of speculative thought comes from sci-fi and speculative fiction and is used to “proactively and speculatively invent” the future. Such radical speculation makes for engaged, in-depth scholarship. Contributors attend to “how emerging technologies can refashion our rhetorical, ethical, and affective conception of embodiment” (p. 11), providing a “futural blueprint for rhetoric, writing, and the mind” (p. 12). Shared concepts from these contributors include the acknowledgment that “communication and rhetorical actors extend beyond the human mark,” that new media technologies are changing such that we need entirely new methods, and that creativity and invention will play a central role in the future of writing. We commend the goal to “proactively (re)invent the future, appropriating and employing emergent technologies in the service of the future we desire to inhabit” (pp. 18–19).
Building on these collections, our goal in this book is to provide a dynamic, usable framework for investigating and planning for writing futures. The Writing Futures framework in this book includes recognition of the need for understanding, chronicling, and critiquing technological emergence. Rotolo, Hicks, and Martin (2015) define an emerging technology as

a relatively fast growing and radically novel technology characterised by a certain degree of coherence persisting over time and with the potential to exert a considerable impact on the socio-economic domain(s) which is observed in terms of the composition of actors, institutions and the patterns of interactions among those, along with the associated knowledge production processes. Its most prominent impact, however, lies in the future and so in the emergence phase is still somewhat uncertain and ambiguous. (p. 1840)

Rotolo et al. document the lack of consensus on what constitutes technological emergence and, on the basis of their thorough review of studies, they identify five attributes in the emergence of novel technologies: “radical novelty, relatively fast growth, coherence, prominent impact, and uncertainty and ambiguity” (p. 1827). They note that in early phases of emergence some technologies acquire a certain momentum to become “emerging” while other technologies arrive at the verge of becoming emergent, but then do not actually emerge at all, emphasizing that “we have limited

knowledge of the end point of the emergence process, i.e. when emergence is over, or perhaps prematurely grinds to a halt or reverses” (p. 1840). In addition, funding may lead to relatively fast growth, and publication download statistics, numbers of tweets, and blog citations provide an early indication of potential emergence. Here we emphasize that our integration of artifacts from Fabric of Digital Life positions readers to understand, contextualize, and chronicle technological emergence within cultural spheres as it relates to collaborative, algorithmic, and autonomous writing futures.

This book also incorporates study from our individual and collective research on emerging technologies. For example, in terms of emergence, Duin et al. (2016) deployed and studied use of the Google Glass device across composition and technical communication courses, exploring new dimensions of presence, audience analysis and usability, multimodal composing, and peer review. While they identified social and technical challenges with the Glass device, they also found that students envisioned citizen-engaged uses for it. In many ways, these deployments inform understanding of the rhetoric of wearables, affordances of technology, and critical analysis of technological adoption and societal change.

Likewise, Pedersen has developed and applied a critical rhetorical framework called the “continuum of embodiment” to explore the ideological justifications for designing and introducing computing platforms that are increasingly embodied (Pedersen, 2013; Pedersen & Iliadis, 2020). Framing the phenomenon as a continuum provides a means to chart “how public, academic, journalistic, fictional, and commercialized discourses valorize prerelease personal technology on a continuum linking mobile to wearable to implantable innovations as a seemingly necessary, imminent, and determined future” (Pedersen, 2020, p. 22).
The cultural momentum for more integrated, seamless, and connected computing experiences will continue to evolve. Integration and automation are rapidly taking shape as devices that are “topographical (on the body), visceral (in the body), and ambient (around the body)” combine to form embodied ecosystems (p. 23). Like Duin, Pedersen explores motivations for adopting technologies within developing socio-technical assemblages and the consequences applicable to myriad applied domains. Therefore, as we focus on investigating and planning for writing futures, we acknowledge that technological emergence is subject to the exigencies of global issues and ecosystems that constantly alter our disciplines, our industries, and their associated writing practices.

Chief among exigencies is COVID-19, a powerful global influencer. Of higher education, Baer and Duin (2020) write, “With huge numbers of deaths, economic systems in freefall, and political philosophies dominating decision-making, the critical question is one of survival. Higher education must develop a bold set of plans to guide decisions” (p. 2). In addition, corporate actors of “Silicon Valley start-ups and EdTech companies see themselves as part of the push to teach more students with more teaching machines” (Mirrlees & Alvi, 2019, p. 84). Mirrlees and Alvi make the point that while post-internet for-profit education technology companies strive for automating education, it is still people’s choices that control the future, thereby rejecting outright determinism. We too reject outright determinism,

and include focus on ethical dimensions throughout later chapters. We draw on the prominent AI Now Institute (2020), which has a mandate to focus on rights and liberties, labor and automation, bias and inclusion, and safety and critical infrastructure, to understand the social implications of artificial intelligence. Our take on civic engagement involves participating in design, revision, and amelioration.

1.5 The Writing Futures Framework

While past studies and future speculation are important, professional and technical communicators—and the many related disciplines and industries—require a dynamic, usable framework for investigating and planning for writing futures. Figure 1.4 provides an overview of this framework, and Table 1.1 includes detail on each component. Academic, disciplinary, industry, and practitioner directions for writing futures depend on addressing critical questions. We contend that investigating and planning for writing futures begins with a state of mind. To use this framework, each of us, along with our associated disciplines, must do the following:

1. Abandon nostalgic notions of solo proprietary authorship. Embrace writing as dialogic, sociotechnological construction of knowledge. The core guiding principle is collaboration as one works with human and nonhuman collaborators.

Fig. 1.4 Venn diagram of the Writing Futures framework

Table 1.1 Writing Futures framework for investigating and planning for writing futures

Collaborative writing futures (students/colleagues)
  Social: How will writers collaborate with nonhuman agents? Sociotechnological construction of knowledge; Technological embodiment; Nonhuman collaborators; Dialogic collaboration
  Literacy: What literacies will writers need to enable constructive, collaborative work with nonhuman agents? Digital literacy capabilities
  Civic engagement: What civic challenges demand collaborative, constructive social action through and with nonhuman agents? Risks and benefits of machines as teammates; Identifying and instilling civic dimensions across work, assignments, and tools

Algorithmic writing futures
  Social: How will algorithms & AI inform writing? Ambient intelligence; Platform studies; Demographics; Algorithmic AI; Machine learning; Virtual assistants
  Literacy: What AI literacies should we cultivate for algorithmic writing futures? Academic analytics; Learning management systems; AI literacy
  Civic engagement: How might AI help to recognize, ameliorate, and address global civic challenges? Harvard’s Principled Artificial Intelligence project as a heuristic; Writing for ethically aligned design

Autonomous writing futures
  Social: How will writers work with autonomous agents? Social robots; Cognitive assemblages; Digital assistant platforms; Cloud-based AI; Chatbots; Brain–computer interaction; Natural language generation
  Literacy: How will writers contextualize future uses of digital assistant platforms throughout writing? How will literacy practices change with the use of autonomous agents? Literacy for teaching AI assistants and learning from them
  Civic engagement: What affordances of autonomous agents lend themselves to more ethical, personal, professional, global, and pedagogical deployments? Nondiscrimination; AI transparency; Values and characteristics

2. Focus on enabling constructive, collaborative social action to foster writing futures that address grand challenges. Attend to algorithms and artificial intelligence to augment, create, and navigate volumes of information. Cultivate ambient intelligence to coordinate collection of data as machine intelligence complements human agency to contribute to learning.

3. Enable and engage with autonomous agents and intelligent systems, including AI virtual assistants, deploying robots to the front lines of mediated learning, building bonds and trust with nonhuman agents, and learning from and with them. Evolve and regenerate writing futures through new forms of collective intelligence.

Chapters 2, 3, and 4 each focus on one row of the above table, providing readers with a future-driven framework for investigating and planning for the social, literacy, and civic engagement implications of collaborative, algorithmic, and autonomous writing futures. Chapter 5 then presents past, current, and proposed investigations of this framework in academic, industry, and civic contexts. Along with timely artifacts and collections from Fabric of Digital Life, each chapter includes prompts for readers to examine as a means to experience collaboration with nonhuman agents, use and impact of algorithms and AI, and the affordances of autonomous agents.

For example, while it was available, use of Talk to Transformer might have been such a prompt. This tool allowed the user to type a custom prompt for the language model GPT-2 by OpenAI, which then generated paragraphs for possible use. We used the preceding paragraph as a custom prompt, and Talk to Transformer returned the following subsequent text:

The chapters each turn to a different spatialization for the horizontal grid, drawing upon the philosophical analysis of spatial thinking to bring them together. These chapters aim to build upon the foundational thinking that lead my group to produce a collaboration-based virtual machine-based virtual reality model of the grid and its new spatial configurations.

While not necessarily what we would write next, this generated text prompted us to consider how we might best draw on “the philosophical analysis of spatial thinking” to bring together the collaborative, algorithmic, and autonomous agents of writing futures. It also prompted us to consider producing “a collaboration-based virtual machine-based virtual reality model” of potential use in investigating and planning for writing futures. However, soon after using this tool, we found that it no longer exists. The site now directs users to “state-of-the-art text generation” at InferKit. This example of the dynamic nature of emerging technology underscores the importance of our use of the Fabric archive, as it chronicles the emergence and trajectory of immersive technologies. Thus, throughout the text, prompts lead readers to artifacts at Fabric that display text, videos, and links that remain accessible, readable, and usable.

Most critical to this work is the larger community that investigates and plans for the social, digital literacy, and civic implications of collaborative, algorithmic, and autonomous writing futures. Therefore, positioned at the end of chapters are several intertexts that introduce readers to international scholars, practitioners, institutes, and teams engaged in the study of collaborative, algorithmic, and autonomous writing futures. These scholars and practitioners provide reflections on use of this framework to depict what they determine as most critical to writing futures and how they are investigating, or are planning to investigate, and prepare for such futures.

1.6 Overview of Chapters

We complete this introduction with an overview of the remaining chapters. We have designed chapters for both linear and nonlinear uses; in other words, readers may choose to follow the chapter order (collaborative, algorithmic, autonomous, use cases), follow a theme (social, literacy, civic), or read on the basis of professional plans for investigating and planning writing futures, for example, reading the introduction and then focusing on the digital literacies associated with writing alongside nonhuman agents.

Chapter 2 examines collaborative writing futures, with focus on technological embodiment, writing alongside nonhuman agents, the digital literacies required as a result, and civic challenges that demand collaboration with nonhuman agents. We begin with an overview of scholarship on collaboration and then guide readers in examining the effect of writing collaboratively with nonhuman actors on the traditional rhetor–audience relationship, with emphasis on technological embodiment and the continued emergence of collaborative workspaces.

Expanded literacies are needed to cultivate and enable constructive, collaborative work with nonhuman agents. We draw on the UK Joint Information Systems Committee (JISC) Digital Capability Framework (2019) as particularly influential for understanding digital literacy. Through an extensive review of articles, reports, frameworks, specifications, and standards, as well as interviews, JISC leadership identified key issues in framing how to deepen digital know-how, defining digital literacies as “the capabilities which fit someone for living, learning and working in a digital society.” In this framework, digital literacy capabilities include ICT proficiency; data and media literacies; digital creation, problem-solving, and innovation; digital communication, collaboration, and participation; digital learning and development; and digital identity and well-being.
Given the need to expand beyond sets of capabilities and move toward greater understanding of engagement, we unpack the European Union’s DigEuLit project definition of digital literacy provided by Martin and Grudziecki (2006) in terms of emerging collaboration with nonhuman agents. Martin and Grudziecki define digital literacy as follows:

Digital Literacy is the awareness, attitude and ability of individuals to appropriately use digital tools and facilities to identify, access, manage, integrate, evaluate, analyse and synthesize digital resources, construct new knowledge, create media expressions, and communicate with others, in the context of specific life situations, in order to enable constructive social action; and to reflect upon this process. (p. 255)

We conclude with a focus on civic challenges that demand collaborative, constructive social action through and with nonhuman agents. Here we help readers to identify and instill civic dimensions across their work, assignments, and tool use. For example, at the time of writing this introduction, Duin (in Minneapolis, Minnesota, USA) and Pedersen (in Toronto, Ontario, Canada) faced the continued challenges of justice as Minneapolis witnessed horrific displays of unprovoked violence and police brutality against the Black community, shedding light on the very real conditions of structural racism that built the US and continue to traumatize so many. We acknowledge the significance of the publication of the CCCC Black Technical and Professional Communication Position Statement with Resource Guide, produced by a coalition of Black scholars to help the field instigate productive transformation (McKoy et al., 2020). This document helps to inform book sections on civic engagement, which is a central part of our framework. McKoy et al. define Black technical and professional communication (TPC) “as including practices centered on Black community and culture and on rhetorical practices inherent in Black lived experience. Black TPC reflects the cultural, economic, social, and political experiences of Black people across the Diaspora” (p. 1). While writing this book, and in response to McKoy et al. (2020), we created the Race, Algorithmic Bias, and Artificial Intelligence: Expert Talks by Researchers and Artists collection to amplify the work and voices of Black scholars who are leading research on AI and algorithmic bias. Its first iteration includes talks given by experts Rediet Abebe, Ruha Benjamin, Joy Buolamwini, Timnit Gebru, Safiya Noble, and Karen Palmer. As professional and technical communication scholars, we believe that we have tremendous potential to incite change—real rhetorical and material change—across our classrooms and communities, and that constructive social action might be deployed through and with nonhuman agents operating through devices.

Chapter 3 attends to algorithmic writing futures, with specific focus on analytics and artificial intelligence. We examine the impact on writing futures in terms of algorithmic control and algorithmic culture and how our teaching, writing, cognition, and behavior are being steered by learning management systems. As in Chap. 1, we continue to address automated writing, and we expand on this through discussion of platforms and ambient workspaces. We then expand on the AI literacies to cultivate as a means to better infer, predict, locate, and assist others. In this early phase of AI deployments, it has become widely evident that machine learning algorithms can be biased and can produce biased professional practices. Scholars, governments, and citizen advocates have identified the need for transparency in proprietary “black box” decision-making systems.
The PTC community will often be on the front line, tasked with recognizing, reporting, and/or ameliorating bias in teams with developers, AI nonhuman actors, and an array of public or private entities that are starting to classify these types of issues. This will form another analytical skill in the collaborative relationship between writers, adjacent professionals, and AI actors. We conclude with detail on how AI might assist in recognizing, ameliorating, and addressing civic challenges. Here we include content about several relevant international endeavors to establish principles, practices, and standards for ethical AI, many of which reference communication practices. We draw on the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, a large international consortium led by Executive Director John C. Havens, which has authored the first edition of Ethically Aligned Design (2019), a key reference for the work of technologists, educators, and policymakers. We also deploy key themes from Harvard University’s Principled Artificial Intelligence Project (Fjeld & Nagy, 2020) to create a framework for our approach to ethical AI and writing practices.

Chapter 4 concentrates on autonomous writing futures, with focus on the emergent socio-technical assemblages that are contextualizing writing practices with nonhuman agents. We add detail and depth to how digital assistants, chatbots, social robots, and AI “writers” (e.g., InferKit) are not only modifying and supporting writing but also automating and composing content. While the emphasis is often on technological advancement or business models, little attention has been devoted to using digital assistants for creative output in professional scenarios. One study proposes that AI machines will be positioned as machine teammates with “a high level of autonomy, based on superior knowledge processing capabilities, sensing capabilities, and natural language interaction with humans” (Seeber et al., 2020). Artificial intelligence (AI) and natural language generation (NLG) platforms are advertised as having the ability not only to automate writing but also to serve as editors of the content (United Robots, 2020). We emphasize how professional writing and publishing firms increasingly require the use of automated writing. We focus on how human writers will collaborate with AI writers through NLG applications and offer insights into how to maintain a cooperative relationship with them.

A recent news article argues that the AI ethical principle of human control should function not only as a civic value system but also as a design goal for AI automation (Markoff, 2020). It suggests that AI automation should involve cooperative scenarios among humans and machines rather than accepting machine autonomy or systems that run without human intervention or even human understanding of the algorithms. We define relevant terminology and introduce key conceptual issues and theory, but we also point to future trends being discussed in the commercial discourses surrounding automated writing and publishing. We then guide readers in examining the affordances and trade-offs that these transformative relationships engender, forecasting how literacy practices will change with the use of nonhuman agents. Consequently, digital literacy practices adapt toward AI literacy, which accommodates the unique relationships now emerging. At the heart of this framework lies the issue of changing roles.
In December 2019, the Ministry of Economic Affairs and Employment of Finland launched a basic Elements of AI course for citizens. Its mandate is to extend AI literacy across Europe: “We want to equip EU citizens with digital skills for the future.” Taking a pan-European approach, the website describes the “ambitious goal to educate 1% of European citizens by 2021.”

In parallel with previous chapters, we again conclude by expanding on the civic theme woven throughout this book. We began by noting the ethical dilemma of ignorance at the point of technological emergence. Here we emphasize that ignorance at the point of emergence extends beyond classrooms to affect citizens. One of the challenges citizens face is that much of the context—algorithmic platforms—operates through black box dynamics. Digital life and work increasingly adapt to what Weiser called, in his famous 1991 Scientific American article, “invisible computing,” in which he anticipated an “embodied virtuality” whereby computers become absorbed by places, spaces, and bodies to make life easier and more seamless. At the same time, the social implications of artificial intelligence and algorithmic decision-making for citizens have revealed structural bias in AI, a problem that professional and technical communicators will be asked to address in regular work practices. Power asymmetries embedded in some of these systems have created a stark racial divide in their deployments. Therefore, we note that enabling collaborative social action to foster writing futures involves a civic value system. Civic engagement means developing a “combination of knowledge, skills, values and motivation to make that difference. It means promoting the quality of life in a community, through both political and non-political processes” (Ehrlich, 2000, p. vi).

Chapter 5 concludes the book with methodologies and methods for use in investigating this framework in academic, industry, and civic contexts:

Academic: Imagine you’re a professor in a post-COVID world that has undergone sweeping economic and social changes—changes that have profoundly affected the nature of higher education itself. Among the realities you must navigate are new technologies—course analytics, redesigned learning management systems, assistive technologies based in artificial intelligence (AI)—many of which even a few years ago were just being developed, some of which are just now being tested. As these technologies become a part of our writing futures, how might we position communication and composition for ongoing engagement with and critique of technological emergence? As scholar-instructors, how might we work to build and study student digital literacy as part of our teaching in such a new and evolving world?

Industry: Imagine you’re a professional and technical communicator (PTC) working remotely amid constant changes that have profoundly affected the nature of your local and global work. Collaboration is driven by priorities based on increased reliance on AI articulation of what is most critical for revenue streams. User-experience study increasingly relies on machine learning to test hypotheses and assumptions and to understand more about users. Curating evidence and substantiating marketing claims involves constant scraping of data sets to become literate and determine strategic business direction. As practitioners, how do we deploy collaborative, algorithmic, and autonomous technologies to build social, literacy, and civic engagement that meets strategic business needs?
Civic: Imagine you’re a civic leader responding to urgent community needs. Traditionally, you have brought together resources and services, providing these in close-contact environments. Given a pandemic, you’re faced with challenges that demand collaborative, constructive social action through and with nonhuman agents. Large service components, along with writing and communication with constituents, must be remote. How might AI and machine learning tools support you? How might robots and digital-assistant platforms assist with services and communication?

We also include two appendices: a Course Syllabus for a graduate-level course of the same title as this book, and a complete List of General Keywords in the Writing Futures collection at Fabric of Digital Life. In short, Writing Futures is about positioning scholars, instructors, and practitioners to plan for rapidly evolving technological and social contexts. Through reading chapters and intertexts and responding to prompts—associated artifacts and collections at Fabric of Digital Life and other sites—readers will, we hope, understand, articulate, and be prepared to deploy a framework for investigating and planning for writing futures, one that attends to the social, literacy, and civic implications of collaboration, algorithms, and autonomous agents. We intend to position readers to write alongside nonhuman agents, understand the impact of algorithms and AI on writing, accommodate the unique relationships with autonomous agents, and investigate and plan for their writing futures.


References

AI Now Institute. (2020). https://ainowinstitute.org/.
AI Writer. (2020). Generate unique text with the AI article writer. http://ai-writer.com/.
Baer, L. L., & Duin, A. H. (2020). ‘Smart change’ for turbulent times: Planning for survival requires speed, flexibility, and shared leadership. Planning in Higher Education, 48(3). https://www.scup.org/resource/smart-change-for-turbulent-times/.
Beaumont, M. (2020, June 19). All the rock songs written by AI bots—ranked and rated in order of . . . greatness? NME. https://www.nme.com/features/rock-songs-written-by-ai-bots-nirvana-metallica-the-beatles-2691875; includes The Beatles, Daddy’s Car, created by Sony CSL Research Lab.
Beta Writer. (2019). Lithium-ion batteries: A machine-generated summary of current research. Springer Nature Switzerland. https://link.springer.com/book/10.1007/978-3-030-16800-1#authorsandaffiliationsbook.
Cizek, K., Uricchio, W., & Wolozin, S. (2019, June 3). Part 6: Media co-creation with nonhuman systems. Collective Wisdom. https://wip.mitpress.mit.edu/pub/collective-wisdom-part-6/release/1.
Duffey, C. (2019a). Superhuman innovation: Transforming business with artificial intelligence. Kogan Page.
Duffey, C. (2019b, April 12). The future of writing. Publishers Weekly. https://www.publishersweekly.com/pw/by-topic/columns-and-blogs/soapbox/article/79783-the-future-of-writing.html.
Duin, A. H., & Hansen, C. J. (1996). Setting a sociotechnological agenda in nonacademic writing. In A. H. Duin & C. J. Hansen (Eds.), Nonacademic writing: Social theory and technology (pp. 1–15). Lawrence Erlbaum Associates.
Duin, A. H., Moses, J., McGrath, M., & Tham, J. (2016). Wearable computing, wearable composing: New dimensions in composition pedagogy. Computers and Composition Online. http://cconlinejournal.org/wearable/.
Duin, A. H., Willow, D., Abel, J., Doering, A., Dunne, L., & Isaka, M. (2018, July 22–25). Exploring the future of wearables and embodied computing: A report on interdisciplinary collaboration [Presentation]. IEEE International Professional Communication Conference, Toronto, Canada. https://ieeexplore.ieee.org/document/8476825.
Ehrlich, T. (Ed.). (2000). Civic responsibility and higher education. Oryx Press.
Fabric of Digital Life. (2013–). https://fabricofdigitallife.com/. Collections noted in this chapter include the Writing Futures: Collaborative, Algorithmic, Autonomous collection, see https://fabricofdigitallife.com/index.php/Browse/objects/removeCriterion/collection_facet/removeID/47/view/images/key/33b95636e96a6999f8c6d5ee9b357ebc; and the COVID-19 collection, see https://fabricofdigitallife.com/index.php/Browse/objects/facet/collection_facet/id/48.
Fjeld, J., & Nagy, A. (2020, January 15). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center for Internet & Society at Harvard University. https://cyber.harvard.edu/publication/2020/principled-ai.
Flower, L., & Hayes, J. R. (1981). A cognitive process theory of writing. College Composition and Communication, 32(4), 365–387.
Flusser, V. (1987 [2011]). Does writing have a future? University of Minnesota Press.
Greene, T. (2018, August 10). Microsoft’s AI can convert images into Chinese poetry. TNW. https://thenextweb.com/artificial-intelligence/2018/08/10/microsofts-ai-can-convert-images-into-chinese-poetry/.
Hafez, W. (2020). Human digital twin: Enabling human-multi smart machines collaboration. In Y. Bi, R. Bhatia, & S. Kapoor (Eds.), Intelligent systems and applications: Proceedings of the 2019 Intelligent Systems Conference (IntelliSys) (Vol. 2, pp. 981–993). Springer Nature Switzerland. https://link.springer.com/book/10.1007/978-3-030-29513-4.


Haraway, D. (1991). A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century. In Simians, cyborgs and women: The reinvention of nature (pp. 149–181). Routledge.
Hayles, N. K. (1999). How we became posthuman: Virtual bodies in cybernetics, literature and informatics. University of Chicago Press.
Hayles, N. K. (2017). Unthought: The power of the cognitive nonconscious. University of Chicago Press.
Hu, V. (2020, August 12). The first wave of GPT-3 enabled applications offer a preview of our AI future. InfoQ. https://www.infoq.com/articles/gpt3-enabled-applications/.
Human AI collaboration. (2020). Deep dream generator. https://deepdreamgenerator.com/.
IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. (2019). Ethically aligned design, 1st edition: A vision for prioritizing human well-being with autonomous and intelligent systems. https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead1e.pdf; https://standards.ieee.org/industry-connections/ec/autonomous-systems.html.
Iliadis, A., & Pedersen, I. (2018). The fabric of digital life: Uncovering sociotechnical tradeoffs in embodied computing through metadata. Journal of Information, Communication and Ethics in Society, 16(3). https://www.researchgate.net/publication/327412909_The_fabric_of_digital_life_Uncovering_sociotechnical_tradeoffs_in_embodied_computing_through_metadata.
JISC. (2019). Building digital capability. https://www.jisc.ac.uk/rd/projects/building-digital-capability.
Markoff, J. (2020, May 21). A case for cooperation between machines and humans. New York Times. https://www.nytimes.com/2020/05/21/technology/ben-shneiderman-automation-humans.html.
Martin, A., & Grudziecki, J. (2006). DigEuLit: Concepts and tools for digital literacy development. Innovation in Teaching and Learning in Information and Computer Sciences, 4, 249–267. https://www.tandfonline.com/doi/full/10.11120/ital.2006.05040249.
Matyus, A. (2020). Need a friend? Samsung’s new humanoid chatbots known as Neons can show emotions. https://www.digitaltrends.com/news/samsung-humanoid-chatbots-ces-2020-neons/.
McKoy, T., Shelton, C. D., Sackey, D., Jones, N. N., Haywood, C., Wourman, J., & Harper, K. C. (2020, September). CCCC Black Technical and Professional Communication Position Statement with Resource Guide. Conference on College Composition and Communication, National Council of Teachers of English. https://cccc.ncte.org/cccc/black-technical-professional-communication.
Ministry of Economic Affairs and Employment, Finland. (2019, December 10). Finland to invest in the future skills of Europeans—training one per cent of EU citizens in the basics of AI [Press release]. https://eu2019.fi/en/-/suomen-eu-puheenjohtajuuden-aloite-suomi-investoi-eurooppalaisten-tulevaisuustaitoihin-tavoitteena-kouluttaa-prosentti-eu-kansalaisista-tekoalyn-perus.
Mirrlees, T., & Alvi, S. (2019). EdTech Inc.: Selling, automating and globalizing higher education in the digital age. Routledge.
Palmer, C. L. (2004). Thematic research collections. In S. Schreibman, R. Siemens, & J. Unsworth (Eds.), A companion to digital humanities. Blackwell Publishing. http://www.digitalhumanities.org/companion/.
Pedersen, I. (2013). Ready to wear: A rhetoric of wearable computers and reality-shifting media. Parlor Press.
Pedersen, I. (2020). Will the body become a platform? Body networks, datafied bodies, and AI futures. In I. Pedersen & A. Iliadis (Eds.), Embodied computing: Wearables, implantables, embeddables, ingestibles (pp. 21–48). MIT Press.
Pedersen, I., & Dupont, Q. (2017). Tracking the telepathic sublime as a phenomenon in a digital humanities archive. Digital Humanities Quarterly, 11(4).
Pedersen, I., & Iliadis, A. (Eds.). (2020). Embodied computing: Wearables, implantables, embeddables, ingestibles. MIT Press.
Porter, J. (2009). Recovering delivery for digital rhetoric. Computers and Composition, 26, 207–224.


Potts, J. (Ed.). (2014). The future of writing. Palgrave Macmillan.
Rotolo, D., Hicks, D., & Martin, B. R. (2015). What is an emerging technology? Research Policy, 44, 1827–1843.
Samuel, S. (2019, August 30). How I’m using AI to write my next novel. Vox. https://www.vox.com/future-perfect/2019/8/30/20840194/ai-art-fiction-writing-language-gpt-2.
SciNote. (2020). Manuscript writer by SciNote. https://www.scinote.net/manuscript-writer/.
Seeber, I., Bittner, E., Briggs, R. O., de Vreede, T., de Vreede, G.-J., Elkins, A., Maier, R., Merz, A. B., Oeste-Reiß, S., Randrup, N., Schwabe, G., & Söllner, M. (2020). Machines as teammates: A research agenda on AI in team collaboration. Information & Management, 57(2). https://doi.org/10.1016/j.im.2019.103174.
Springer Nature. (2019). Springer Nature publishes its first machine-generated book. https://group.springernature.com/gp/group/media/press-releases/springer-nature-machine-generated-book/16590134.
Sundvall, S. (Ed.). (2019). Rhetorical speculations: The future of rhetoric, writing, and technology. University Press of Colorado, Utah State University Press.
Sundvall, S., & Weakland, J. (2019). Introduction. In S. Sundvall (Ed.), Rhetorical speculations: The future of rhetoric, writing, and technology (pp. 3–24). University Press of Colorado, Utah State University Press.
United Robots. (2020). United Robots explained: Why & how what we do is unique. http://unitedrobots.ai/why-were-unique.
Vincent, J. (2019, November 7). OpenAI has published the text-generating AI it said was too dangerous to share. The Verge. https://www.theverge.com/2019/11/7/20953040/openai-text-generation-ai-gpt-2-full-model-release-1-5b-parameters.
Weiser, M. (1991, September). The computer for the 21st century. Scientific American, 94–104. https://www.lri.fr/~mbl/Stanford/CS477/papers/Weiser-SciAm.pdf.
Writing. (2020). Wikipedia. https://en.wikipedia.org/wiki/Writing.


Intertext—The Future of Writing and Rhetoric: Pitch by Pitch by Scott Sundvall, The University of Memphis

In major league baseball, a batter has to speculate that a fastball will be thrown and commit to that anticipation in order to hit it, insofar as the speed of the pitch exceeds the batter’s ability to judge the pitch once it is delivered from the pitcher’s hand. The batter then has only to watch the ball leaving the pitcher’s hand (delivery) and the rotation of its seams (form and content) to know where to swing the bat. This is the only way to hit a fastball. Of course, if the batter speculates that it will be a fastball and it is a changeup, then the batter will swing too early; likewise, if the batter speculates that it will be a changeup and it is a fastball, then the batter will swing too late. In short, the best baseball batters at the highest level are able to do two things: (1) correctly speculate the pitch being thrown, and (2) know how to place the bat in response to the delivery of the pitch. (The count is 1–0).

Rhetoric, sometimes referred to as a pitch, is finding new forms of technological delivery. As these technological delivery systems continue to increase in speed, we must commit to a certain speculation—we must be proactive rather than reactive. After all, anybody can correctly call and analyze a strike after it has reached the catcher’s glove and been missed, but such analysis does nothing for the batter. We are no longer in the analyst’s booth; we are the batter(s). If there has ever been a need for a shift from hermeneutics (interpretation and analysis) to heuretics (the logic and art of invention), the time is now (Ulmer 1994). Before we can speculate and commit to a pitch (rhetoric), we must know who and what is pitching. Every batter knows this. As emergent technologies constitute the pitcher at hand, we need to remember a few things in our futural, speculative at-bat. (The count is 1–1).
First of all, the only way to speculate on the nuances of the game (a pitch thrown to us) is to understand that the game itself is predicated upon futurality. In short, rhetoric and writing (qua mark) punctuate finitude, providing the meaning-formation of spatiality (where the pitch will go) and temporality (the velocity of the pitch). Bernard Stiegler (1998, 2009) recovers and rethinks this fundamental point from Jacques Derrida (1998, 2017) and Martin Heidegger (1982, 2008). As we are speculating not only on the next pitch but on the game itself, we must ask ourselves what categorically constitutes “writing,” “rhetoric,” and—more importantly—“the future.” If we (continue to) understand writing and rhetoric as an art, or at least a craft, then there exists a human dimension to it that cannot be replaced, regardless of the technology. For example, writing always already conditions the possibility of a future: syntax (the game); what the future holds, as afforded by writing, is a potentiality to be written: content (the pitch). Such is why futures thinking relative to rhetoric and writing necessitates a meditation on, and mediation of, what will have become a history, to borrow from Victor Vitanza (1996). This is why I am an odd designated hitter for this at-bat (this intertext chapter), so let me step out of the batter’s box for a moment. (The count is 1–2).

Whether human or nonhuman, delivery (the pitch) has been a primary rhetorical concern for rhetoricians and rhetors since (at least) Aristotle (the [in]famous canons of rhetoric). The increasing speed of the pitch has caused concern (e.g., Paul Virilio’s [2006] dromology), leading to a general disorientation (Stiegler 2009). For example, Stiegler notes that computational banking mechanisms make globally impacting decisions in nanoseconds—a speed that exceeds a human’s ability to reason, consider, and deliberate. By extension, this gestures, of course, to artificial intelligence (AI). As indicated by Duin and Pedersen, Vilém Flusser (2011) “imagines . . . writing performed by humans that will be erased by the work of ‘artificial intelligences’ that are given the agency to create, inform, remember, and even think” (p. 4). I must disagree with such a speculation (Flusser’s, that is), so let me again step outside of the batter’s box.

There is more to being a proactive, speculative hitter in this game, or with this forthcoming pitch, than merely being proactive and speculative. I might correctly guess the next pitch—fastball or otherwise—but that does not mean I can hit it. I also need to know how to bat well, how to play the game. To that end (of an extended metaphor), AI demands attention to allopoiesis and autopoiesis: the former refers to a system that can produce something other than itself; the latter refers to a process that can reproduce itself (Sundvall 2019). In terms of writing and rhetoric, we still only have allopoiesis, meaning that even if the pitcher (delivery system) is a machine/computer/robot, it is still and nonetheless programmed (written) by a human.
Even if “cyborganic,” machines have no origin (metaphysics), culture/community (ethos/ethics), and/or God (theology and morality), unlike writing and rhetoric, which fundamentally afford such (Haraway 1996). Katherine Hayles’s (1999, 2005) posthumanism extends only as far as the human sense (affect) can follow it—the passions that erupt the exigency of writing and rhetoric in the first place. That is, the desire to hit a home run. (The count is 2–2).

When in the batter’s box and speculating, remember that the question is and always has been one of invention and appropriation. The technologies we invent can only be appropriated (in whatever way, for better or worse) for further invention: this is the intersection with writing and rhetoric, the pitch always headed toward us at differing speeds and directions. Big data visualization (Beveridge 2017), code studies (Beck 2016), augmented reality (Greene and Jones 2017; Greene 2017; Ulmer 2004; Tinnell 2017), multimodality (Shipka 2011; Alexander and Rhodes 2014), and object-oriented rhetoric and writing (Brown and Rivers 2014; Reid 2012; Rutherford and Palmeri 2016) all provide novel conceptions of the potential and possible future. But if we can imagine virtual futures with AI, then we must stop ignoring the actual present (and becoming-future) of our current paradigm of thinking: digital literacy, for example, ought to be replaced with electracy. As with orality and literacy, electracy constitutes an apparatus shift that necessarily reimagines the paradigm of rhetoric and writing specific to the digital turn/revolution (Ulmer 2004). After all, the previous pitch typically gestures to the next; there is no need to think three pitches ahead if you are going to miss the next/current one. Lest we forget, this game (and all the pitches therein) is only an art and craft from and created by us, even if delivered through machines. Writing and rhetoric, as techniques themselves, exist only with and through technologies (Barnett and Boyle 2016), however emergent and fast approaching. The game is still a pitcher and a batter, and speculation and futures studies are more important now than ever because the pitchers are throwing much faster. But this has always been the case—and always will be the case. The batters just have to get faster, more proactive and speculative themselves. (Full count).

References

Alexander, J., & Rhodes, J. (2014). On multimodality: New media in composition studies. National Council of Teachers of English.
Barnett, S., & Boyle, C. (2016). Rhetoric, through everyday things. University of Alabama Press.
Beck, E. (2016, November 22). A theory of persuasive computer algorithms for rhetorical code studies. Enculturation: A Journal of Rhetoric, Writing, and Culture. http://enculturation.net/a-theory-of-persuasive-computer-algorithms.
Beveridge, A. (2017). Writing through big data: New challenges and possibilities for data-driven arguments. Composition Forum, 37. http://www.compositionforum.com/issue/37/big-data.php.
Brown, J., & Rivers, N. (2014). Composing the carpenter’s workshop. O-Zone: A Journal of Object-Oriented Studies, 1(1).
Derrida, J. (1998). Of grammatology (G. C. Spivak, Trans.). Johns Hopkins University Press.
Derrida, J. (2017). Writing and difference (A. Bass, Trans.).
Flusser, V. (2011). Does writing have a future? (N. A. Roth, Trans.). University of Minnesota Press.
Greene, J. (2017, April 4). From augmentation to articulation: (Hyper)linking the locations of public writing. Enculturation: A Journal of Rhetoric, Writing, and Culture. http://enculturation.net/from_augmentation_to_articulation.
Greene, J., & Jones, M. (2017). Augmented velorutionaries: Digital rhetoric, memorials, and public discourse. Kairos: A Journal of Rhetoric, Technology, and Pedagogy, 22(1). http://kairos.technorhetoric.net/22.1/topoi/jones-greene/index.html.
Haraway, D. (1996). Simians, cyborgs, and women: The reinvention of nature. Free Association Books.
Hayles, K. (1999). How we became posthuman: Virtual bodies in cybernetics, literature, and informatics. University of Chicago Press.
Hayles, K. (2005). My mother was a computer: Digital subjects and literary texts. University of Chicago Press.
Heidegger, M. (1982). On the way to language (P. D. Hertz, Trans.). Harper Press.
Heidegger, M. (2008). Being and time (J. Macquarrie & E. Robinson, Trans.). Harper Press.
Reid, A. (2012). What an object-oriented rhetoric has to offer. https://profalexreid.com/2012/10/29/what-an-object-oriented-rhetoric-has-to-offer/.
Rutherford, K., & Palmeri, J. (2016). The things they left behind: Towards an object-oriented history of composition. In S. Barnett & C. Boyle (Eds.), Rhetoric, through everyday things (pp. 96–107). University of Alabama Press.
Shipka, J. (2011). Toward a composition made whole. University of Pittsburgh Press.


Stiegler, B. (1998). Technics and time, vol. I: The fault of Epimetheus (R. Beardsworth & G. Collins, Trans.). Stanford University Press.
Stiegler, B. (2009). Technics and time, vol. II: Disorientation (S. Barker, Trans.). Stanford University Press.
Sundvall, S. (2019). Artificial intelligence. In H. Paul (Ed.), Critical terms in futures studies (pp. 29–34). Palgrave Macmillan. https://doi.org/10.1007/978-3-030-28987-4_6.
Tinnell, J. (2017). Actionable media: Digital communication beyond the desktop. Oxford University Press.
Ulmer, G. (1994). Heuretics: The logic of invention. Johns Hopkins University Press.
Ulmer, G. (2004). Teletheory. Atropos Press.
Virilio, P. (2006). Speed and politics (M. Polizzotti, Trans.). MIT Press.
Vitanza, V. (1996). Negation, subjectivity, and the history of rhetoric. State University of New York Press.

Chapter 2

Collaborative Writing Futures

Prompt
To begin, please view a video artifact at Fabric of Digital Life, a concept video that showcases Apple products in use by a team that has a “dream” idea. In this video, titled Apple at Work—The Underdogs, the main character accidentally bumps into her boss, miraculously changing a calamity into a challenge. She shares the opportunity with her team, who move with lightning speed to create a prototype just in time for presentation. Everyone collaborates with humans and nonhuman agents—digital assistants, mobile devices—a myriad of helpers to make the dream a reality. Prepare in advance: How might you and your team(s) collaborate with humans and nonhuman agents? What opportunities await as you view writing futures through this collaborative lens? How will you and your team become “technologically embodied” as you collaborate with these agents through these mobile devices?

2.1 How Will Writers Collaborate?

Collaboration is a human imperative. Our brains are wired to be super-cooperators, to collaborate (Lieberman, 2013). Theorists, researchers, and practitioners grapple with ever-changing modes and models for such collaborative work in academia, in industry, and with communities. Collaboration is an imperative digital competency in professional and technical communication; communicators must be prepared to use a myriad of tools and techniques to collaborate with engineers, subject matter experts, and programmers; they must be adept at using collaborative software and working with local and global virtual teams. In the Writing Futures framework, we emphasize the need to embrace writing as the dialogic, socio-technological construction of knowledge as one works with human and nonhuman agents (Table 2.1).

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
A. H. Duin and I. Pedersen, Writing Futures: Collaborative, Algorithmic, Autonomous, Studies in Computational Intelligence 969, https://doi.org/10.1007/978-3-030-70928-0_2


Table 2.1 Writing Futures framework, collaborative writing futures

Social: How will writers (students/colleagues) collaborate with nonhuman agents? Socio-technological construction of knowledge; technological embodiment; nonhuman collaborators; dialogic collaboration.

Literacy: What literacies will writers need to enable constructive, collaborative work with nonhuman agents? Digital literacy capabilities.

Civic engagement: What civic challenges demand collaborative, constructive social action through and with nonhuman agents? Risks and benefits of machines as teammates; identifying and instilling civic dimensions across work, assignments, and tools.

Collaboration is an imperative across all fields. Duin, Tham, and Pedersen (2021) state that “a search on ‘the science of collaboration’ results in a multitude of articles emphasizing the importance of collaboration across the academy and industry, the increase in demand for those with collaboration skills, and the exponential increase in tools that support collaboration” (pp. 175–176), noting the following: “As research questions increase in complexity and science struggles ‘to swim through big data, major funders, including the National Science Foundation (NSF) and the National Institutes of Health (NIH), are pushing scientists to collaborate more across disciplines, institutions, and even nations under the banner of team science’ (Baker, 2015, p. 639). In the past decade, a new field—the science of team science (SciTS)—has emerged, with its aim ‘to better understand the circumstances that facilitate or hinder effective team-based research and practice and to identify the unique outcomes of these approaches in the areas of productivity, innovation, and translation [of science]’” (Stokols, 2013, p. 4). Today, amid exponential growth in remote work, our device dependence surges forward as nonhuman agents wake and alert us, point us to possibilities, keep our minds soaring, and continue to quench our thirst to collaborate. Publishers such as Elsevier encourage, and possibly mandate, that we visualize our data and scientific research networks (Elsevier, 2019). As a result, as Jemielniak and Przegalinska (2020) detail, we are part of a collaborative society:

Emerging technologies, thanks to their direct collaboration-enabling features and their engagement of much broader populations, act as super-multipliers for many effects of collaboration that would otherwise be less noticeable.
We perceive this phenomenon to emanate from what we call collaborative society: an emerging trend that changes the social, cultural, and economic fabric of human organization through technology-fostered cooperative behaviors and interactions. (p. 4)

Increasingly, humans and emerging technologies form interdependent networks. Tsvetkova et al. (2017) conceptualize these networks as human–machine networks


(HMNs), “assemblages of humans and machines that interact to produce synergistic effects” (p. 1) whose effects “exceed the combined effort of the individual actors [human and machine] involved” (p. 2). Emerging machines and new technologies enable new forms of collaboration. For example, human and machine collaboration can bring together “government, interest organizations, citizens, smart devices, and sensor networks” (p. 2) to address environmental problems. Ultimately, quality of life depends less on the individual and more on the human–machine collaborative society.

2.1.1 Foundational Scholarship on Collaboration

Collaboration and collaborative writing have been studied across decades. As Duin et al. (2021) write in their summary of the rhetoric, science, and technology of collaboration: “It is well established that professional and technical communicators (PTCs) are expected to work in coordination, cooperation, and collaboration with content experts, designers, and developers to build products and test processes. Over the last three decades, research on collaboration has generated a body of scholarship with broad conceptions of collaborative writing, group interactions, and team-based learning (e.g., Bruffee, 1984, 1998; Ede & Lunsford, 1990, 2001; Jones, 2007).” Professional and technical communication scholars borrow collaboration theories from rhetoric and composition scholars who have studied collaboration at the intersection of collaborative writing and learning. “Bruffee’s (1984) influential scholarship emphasizes the usefulness of conversation and collaborative learning in the classroom. Duffy (2014), in his review of the decades of scholarship on collaboration, notes that Bruffee’s ‘conversational imperative’ sets the stage for what is known largely as social constructivist epistemology” (Duin et al., pp. 171–172). Social constructivists in writing studies believe that “individual writers compose not in isolation but as members of communities whose discursive practices constrain the ways they structure meaning” (Nystrand et al., 1993, p. 289). As Duin et al. continue, “The primary assumption behind this learning theory is that social interaction and participation, particularly with instructors, peers, and other members of the knowledge community, have a significant impact on learning” (Chism, 2006; Lave & Wenger, 1991; Wenger, 1998). Social constructivism has served well throughout theoretical work on collaboration.
Scholars began to pay attention to the influence of cultural, emotional, and gender factors on rhetoric (Bleich, 1995), arguing that “all writing is inherently collaborative” (Thralls, 1992, p. 79). Pioneering works uncovered how collaboration changes the traditional rhetor–audience relationship; for example, Frank and Bolduc (2010) examined Olbrechts-Tyteca’s collaboration with Perelman in their field-defining magnum opus, The New Rhetoric: A Treatise on Argumentation (1958/1969). The Perelman/Olbrechts-Tyteca partnership produced a groundbreaking audience theory, while also revealing the complexity of collaboration in terms of the author/rhetor’s agency and relationships that “defy rigid classifications


and proscribed roles” from the perspective of the audience (Frank & Bolduc, 2010, p. 160). Of particular note is the emphasis on dialectic by collaborative scholars Ede and Lunsford (1984, 1985, 1990, 2001, 2009; Lunsford & Ede, 2011). They invoked a “dialogic” collaboration, one that focuses on the dialectical tensions in the collaboration process. This dialogic approach focuses on roles and processes rather than the end product (Qualley & Chiseri-Strater, 1994). Together, this foundational scholarship on collaboration points to collaboration as a dialogic conversation in which participant agency along with relationship to the collaboration defies rigid classifications and prescribed roles. Although fluid, this dialogic approach toward collaboration continues to focus on roles and processes over the end product—in other words, to focus on the rhetorical situation. In short, collaboration is viewed as being socially constructed.

2.1.2 Socio-technological Construction of Knowledge

Recall the Fabric artifact (video) prompt earlier in this chapter. As we expand notions of collaboration to include simultaneous work with human and nonhuman actors, how is such collaboration constructed? Foucault (1998), in “What Is an Author?,” reexamined writing practice to consider how an author functions within “a system of dependencies” (p. 221). Such dependencies are part of a system of practice that extends an author’s capacities; these capacities emerge from the relation of the author and the many dependencies surrounding composing. The author is never alone; rather, the author is dependent upon other things in collaboration. Latour’s (2005) actor–network theory (ANT), a theoretical and methodological approach, is useful for the study of such a system of dependencies. From an ANT perspective, objects, ideas, and processes are just as important in collaboration as humans are. Humans and nonhuman actors are active participants because “anything that does modify a state of affairs by making a difference is an actor” (p. 71). In this way, “ANT is not about explaining pre-existing social relations, as the cause of agency. Rather ANT suggests that agency develops out of relations” (p. 72). In professional and technical communication, “technological embodiment” offers an integrated vision of agency developed out of such relationships, of human and nonhuman device collaboration in which “technology, the body, and its actions become technologically embodied” (Melonçon, 2013, p. 68). Melonçon writes, “For the technical communicator, a malleable body that can be remade through technologies is more than a manifestation of cyborg, but rather the manifestation of a complex user, which can have wide ranging impacts on some of the most basic work of technical communication” (p. 68). As Mara and Hawk (2010) attest, “As organizations become more complex, technologies more pervasive, and rhetorical intent


more diverse, it is no longer tenable to divide the world into human choice and technological or environmental determinism. Professional and technical communication is a field that is perfectly situated to address these concerns” (p. 3). As one example, a prominent PTC scholar, Kennedy (2017), examined everyday collaboration in her use of digital hearing aids, studying “the design of interfaces and systems that facilitate human/machine collaboration in medical contexts” (p. 41). Her work explores the ways that human–nonhuman collaboration shapes communicative actions and interactive behaviors, especially given automated agents for sound streaming, geolocation, and sound adjustment. She writes, By assuming that a collaborative and integrated relationship will develop between human and nonhuman, we can better design essential wearables as portals for social connectedness, information, convenience, and in the case of hearing aids, communication. Doing so requires moving beyond design processes that construct wearers and interfaces as separate agents with scheduled points of contact during early development and then not again until the usability testing stage. (p. 41)

Kennedy describes her process of learning to work closely with a machine, of becoming a “hybrid” as she and the device collaborate: This collaborative learning process, which transpires over months and years, is partially driven by the human wearer but also partially driven by artificial intelligence and algorithms as data accretes, the hearing aid itself learns, and its settings are actively tweaked by humans so that they more closely align with the human wearer’s hearing range and sound expectations… . In this formulation, the hearing aid and its wearer are a hybrid, actors working closely and transformatively together in order to receive, automatically transform, and then cognitively process information. Far from being overlooked, the wearer functions as the cognitive actor in this scenario, processing the information that has been relayed by algorithm to machine to algorithm to ear to mind. This promised human/machine integration is rhetorically positioned as a portal to communication, to social connectedness, to information, and to convenience… . The close human-machine integration required to be a successful wearer of these aids can lead to a collaborative relationship between human and machine as the wearer learns to work very closely with multiple machine agents that include hardware, directional mics, multiple processing algorithms, satellite telemetry, and an iPhone with an app that facilitates control of multiple functions as well as acts as a central element for geolocation telemetry. (pp. 41, 44, 49)

Clark (2019), professor of logic and metaphysics at the University of Edinburgh, also explores this process of learning to work closely with a machine, naming this vision a “Centaur future” in which humans will be increasingly augmented: “In this hybrid future, AI-augmented humans will work with deeper, more powerful AIs, creating a world defined by layers upon layers of human-AI partnership.” He emphasizes that the “real danger” is misunderstanding: the machine must know enough about us to constantly estimate its own certainty or uncertainty, and doing so requires anthropology and communication studies focused on “mutual estimations of uncertainty in human-AI interaction.” In their earlier study, “Wearable Computing, Wearable Composing,” Duin et al. (2016) also explored this close human–machine integration as students


used the Google smartglass device. Deployed as a means to envision new dimensions in composition pedagogy and to prepare students for workplaces where such technology and systems may well be commonplace, Duin et al. found that student use of Google Glass devices resulted in the human body becoming “another point or landmark in a sociotechnical process.” Similar to Kennedy’s description of the collaborative relationship between human and machine, they found that the Google Glass interface required users to master a unique set of gestural and verbal commands; wearers activate Glass functions with voice commands or by making a hand gesture—fingertip to right temple, where the touchpad is located. This gesture complicates the rhetorical situation by activating the device while simultaneously suggesting that deep concentration is underway. Thus, while wearing the smartglass device promoted human–machine collaboration, it interfered with proximate human–human collaboration; the Glass interface disconnected the user from the social present, as shown in Fig. 2.1. In the end, this early version of a smartglass device did not support simultaneous copresent relationships with both humans and machines. In their recent collection on Embodied Computing, Pedersen and Iliadis (2020) emphasize that “embodied computing is also about embodying and the process of incorporating technology in a manner that reaches beyond copresent relationships with media devices” (p. xvii); “as sociotechnical systems, embodied computing technologies instigate social, critical, medial, and political relationships that we organize as ambient, topographical, and visceral” (p. xix). “As embodied technologies are connected to the internet, the human body becomes another point or landmark in a sociotechnical process” (p. xxiii). In her solo chapter, Pedersen (2020) emphasizes that “the idea of a networked body working autonomously through data assemblages seems less futuristic than before” (p. 39).
She investigates “how the body is imposed

Fig. 2.1 Deploying Glass in a technical communication course. The design of Glass disrupts human–human collaboration (Photo permission: Wearables Research Collaboratory, University of Minnesota)


upon to become a platform across a series of technologies that are increasingly interdependent” (p. 23). This focus on such body networks illustrates how “bodies will participate in cooperative relationships with other human and nonhuman actors and digital infrastructures” (p. 25). In this same collection, Lupton (2020) describes one such device, Blush, developed earlier by the New York Times Labs. Worn as a brooch and designed to listen to conversations with and around the wearer, Blush lights up when a conversation refers to topics that the user has listed in the associated app. Lupton draws on feminist new materialism theory to analyze the interplay between socio-technological imaginaries and the relational, dare we say collaborative, connections between them. She focuses on understanding how wearable devices come together with humans and other nonhuman actors to “generate dynamic human-nonhuman assemblages that create specific agential capacities that are distributed between the humans and nonhumans involved” (p. 51). As a result, a “vitality” is generated, a “thing-power” (Bennett, 2004, p. 348) of “lively forces and agential capacities that are generated when humans are entangled with nonhumans” (p. 51). Lupton takes care to point out the challenges that such “agential capacities” reveal in terms of personal data then being connected to cloud computing databases and thus open to use by device developers, agencies that may sell the data, and “potentially surveillance agencies and cybercriminals” (p. 63). We too address these issues in later chapters. In a blog posting, multidisciplinary designer Liu (2019) writes about her creation of five machines as a means to design experiential future scenarios that address the need to view machines as collaborative teammates. We encourage readers to view her video (shown in Fig. 2.2) embedded in her blog posting. She asks, “Are we making alliance[s] with machines or a separation?
Are we making them alien or making kin?

Fig. 2.2 Yuxi Liu’s video of her exploration through Five Machines (Photo permission: Yuxi Liu)


What can an alternative narrative be to establish a new mode of human-machine coevolution?” Liu’s Five Machines work explores alternative narratives through her design and use of five intelligent, and we would add collaborative, prototypes in which each has a different intent, goal, and provocation “as a means to propose design principles for establishing new models of human-machine interaction.” She writes, Our views of machines are strongly anthropocentric and often permeated by ideas of dominance. On one hand, machines are viewed as merely tools to perform intended human actions. On the other, they are regarded as dangerous, overriding human capabilities. However, the spread of emerging technologies such as artificial intelligence and machine learning, begins to challenge these perspectives. Machines are increasingly empowered to learn, reason, and make decisions, moving away from the role of passive objects into the position of active subjects.

In short, embodied technologies actively learn our human patterns and automatically recalibrate settings over time. As Pedersen (2013) writes, “Wearable media sit midway between media you carry (e.g. laptops, Blackberrys, memory sticks) and media you become (e.g., devices implanted in the body, future nanotechnological manipulation, prostheses)” (p. 4). However, as Pedersen and Iliadis (2020) note, “The data generated by embodied computing devices have the potential to alter relationships and social structures within multiple environments. As people move beyond the continuum of wearables, embodied computing and data-blended bodies produce various levels of abstraction, meaning, and framing that can be utilized for a variety of ends at multiple moments in time” (p. xxv). In addition, professional and technical communication scholars McKee and Porter (2017), in their recent examination of networked interactions, urge us to be aware of the rhetorical situation in professional communication practices mediated by social networks and “smart” or assistive technologies such as artificial intelligence (AI) agents.

Prompt
Consider how you collaborate with autonomous agents such as smartwatches. Will wearing smartrings become more collaborative than current smartwatch use? Amazon’s Echo Loop has a microphone and speaker, so you hold it up to your mouth to speak to Alexa and hear its responses. The McLear smartring works to make secure contactless payments, and the NFC OMNI ring is designed for experienced programmers. The ORII is billed as “the ultimate voice assistant” ring, allowing one to whisper or gesture to control smart appliances; it uses bone conduction transducers to convert electrical signals into vibrations transmitted to one’s inner ear. Apple holds multiple patents as it works to bring its own product to the marketplace. Which device promises the best assistance for your writing futures?


2.1.3 Collaborative Workspaces

In a 2020 study of 20 professional and technical communicators’ identity, literacy, and collaboration (Duin & Breuch, in press), every person interviewed emphasized the importance and integration of tools for remote collaboration. One PTC, an information architect, shared,

There’s a sea change happening with collaboration. Especially before coronavirus, more and more people were working remotely. And it’s kind of a tipping point here where you know, all these people who never worked from home are now being compelled to do so. And the timing has been very fortuitous, because we have these tools like Zoom and Microsoft Teams and Slack, they’ve kind of been in this, so we’re kind of on the leading edge of the bell curve.

We assume that readers are quite familiar with tools that assist with remote collaboration, e.g., Skype, Google Meet, Zoom, and Microsoft Teams along with its new Together Mode. Team collaboration platforms (TCPs) such as Slack and Trello support team collaboration. Here we highlight Anders’s (2016) study of Slack, which was used by 1 million people at the time of his study and, as of 2020, is used by over 12 million people a day across all types of industries and organizations. These platforms integrate multiple media in support of collaborative work; conversations are organized into groups for specific teams and projects and channels for knowledge sharing and topic-based communication. These collaborative workspaces also include notifications managed by the team member as well as mentions or alerts that team members can send to others. Users can integrate services like Google Drive and Dropbox or various video-conferencing services. The overall design makes communication and collaboration visible, searchable, and available across organizational boundaries. In Anders’s analysis of 100 self-published blog posts by Slack users, he found that it supported knowledge sharing and collaborative workflows: “The communication visibility afforded by TCPs … had direct impacts on collaboration processes. Users noted that communication visibility—especially when supported by compartmentalization of groups, projects, and topics—enabled more distributed and self-organized styles of collaboration” (p. 247). The use of Slack also resulted in greater engagement and presence, context awareness, generative role taking, leadership awareness, and synchronicity. As Anders quotes a user, “‘It [Slack] compresses a lot of the stuff you might otherwise do in meetings into a Slack channel, so that information is visible to everyone it should be visible to, and it saves people time: They don’t necessarily have to meet but can stay updated on a project’s status’” (p. 252, and quoted in Duin et al., 2021, p. 181).

In terms of visibility, emerging systems such as Creately provide a “visual canvas” that emulates collaborator interaction, increasingly moving away from reliance on text and toward greater visualization of overall workflow. Furthermore, writing futures increasingly will take advantage of emerging augmented and virtual realities—AR and VR—that allow teams to collaborate from anywhere with greater affordances. For example, Spatial (see Fig. 2.3) uses the space around a team member to create a sharable augmented “remote” workplace where teams can collaborate, search, brainstorm, and share content (see this video at the Fabric site). Spatial also uses HoloLens 2 and other technologies to enable teams to work together from different locations (see this video at the Fabric site).


Fig. 2.3 Collaborative workspace created with the use of Spatial (Photo permission: Spatial Systems, Inc.)

As we consider writing futures, we must recognize our increased collaboration with AI agents and nonhuman collaborators (discussed in more detail in later chapters). In industry, Microsoft, Salesforce, and Oracle have integrated AI into enterprise collaboration platforms, including Slack (Fluckinger, 2019). In a Harvard Business Review article on collaborative intelligence, Wilson and Daugherty (2018) found from their research on 1,500 companies “that firms achieve the most significant performance improvements when humans and machines work together. Through such collaborative intelligence, humans and AI actively enhance each other’s complementary strengths: the leadership, teamwork, creativity, and social skills of the former, and the speed, scalability, and quantitative capabilities of the latter” (p. 117). A recent Deloitte analysis further supports this theme, finding “superteams” in which AI is integrated into teams “to produce transformative business results,” with 70% of respondents reporting exploration and/or use of AI (Volini et al., 2020). AI agents embedded in humanoid robots will join the growing cohort of collaborative team members in myriad relationships in the future; most are still in research and development today. In 2016, Hanson Robotics created Sophia, a humanoid robot with AI capabilities in natural language processing that allow her to respond in meaningful ways and appear lifelike (see Fig. 2.4). Her pre-built facial expressions expand her ability to emote. The goal is to develop her into a sentient superintelligence that can fulfill multiple functions. While Sophia’s capabilities have been criticized as hyped and overblown, we argue that understanding Sophia as an emergent concept, whether or not she achieves the kind of sophisticated AI techniques claimed for her, is still key for the field. Robots with affective capabilities and an evolving capacity for meaningful dialog will not come to market quickly, but they


Fig. 2.4 Sophia, a humanoid robot with AI capabilities (Image permission: ITU Pictures, CC by 2.0)

provide a glimpse of future collaborative workspaces. Building collaborative intelligence for such writing futures is imperative. We contend that doing so begins with building digital literacy.

Prompt
Visit the Spatial site and consider how you might collaborate when your room is your monitor and your hands become the mouse. Create your avatar, name your team, and begin to write together in the collaborative space you design.

2.2 What Digital Literacies Will Writers Need to Enable Constructive, Collaborative Work with Nonhuman Agents?

Emerging technologies continue to be broadly and rapidly embraced due to their promise of increased efficiency and the allure of personalized data. Massive amounts of data are collected, mined, and used to alter our behavior. Unfortunately, public understanding of immersive technologies has not kept pace with the full range of their benefits and perils. Professional and technical communicators must examine what it means to be digitally literate in today’s world. We must ask: How should


writers work to build digital literacy for a global, digitally connected, always collaborative “social” world? What literacies will writers need to enable constructive, collaborative work with nonhuman agents? The field of professional and technical communication has primarily focused on technological literacy and, most recently, on code literacy as a means to prepare students for writing futures (Duin & Tham, 2018). Selber’s (2004) initial work to reimagine computer literacy through functional literacy (students as effective users of technology), critical literacy (students as informed questioners of technology), and rhetorical literacy (students as reflective producers of technology) has provided a solid framework for organizing local environments that “integrate technology meaningfully and appropriately” (p. 1). Hovde and Renguette (2017), drawing on the work of Selber and other professional and technical communication scholars who have addressed technological literacy (Breuch, 2002; Brumberger et al., 2013; Cargile Cook, 2002; Northcut & Brumberger, 2010; Turnley, 2007), consolidate subsequent scholarship into functional, conceptual, evaluative, and critical levels of technological literacy. PTC instructional models designed for the purpose of increasing digital literacy are largely situated within the local confines of the academic, industrial, or civic environment. No comprehensive model exists for reimagining digital literacy through collaboration with nonhuman actors as a means to increase understanding of immersive/embodied human–technology interaction. Given the immense evolution of technologies since Selber’s work in 2004, such reimagining of digital literacy also must include “making ethically informed choices and decisions about digital behaviour … digital safety, digital rights, digital property, digital identity and digital privacy” (Traxler, 2018, p. 4).
Outside the field of professional and technical communication, Stordy (2013) articulates digital literacy as "The abilities a person or social group draws upon when interacting with digital technologies to derive or produce meaning, and the social, learning and work-related practices that these abilities are applied to" (p. 472). An expanded definition of being digitally literate, as mentioned above, "implies making ethically informed choices and decisions about digital behaviour … digital safety, digital rights, digital property, digital identity and digital privacy" (Traxler, 2018, p. 4). After reviewing iterations of "digital literacy" definitions from the mid-1990s onward, and noting the challenge of "navigating varied definitions for digital literacy" (p. 91), Ferrar (2019) points to Virginia Tech's use of the Joint Information Systems Committee (JISC) Digital Capability Framework (2019a, 2019b), developed in the UK, as particularly influential to understanding and fostering digital literacy. Through an extensive review of articles, reports, frameworks, specifications, and standards, as well as interviews, JISC leadership identified key issues in framing how to deepen digital know-how, defining digital literacies as "the capabilities which fit someone for living, learning and working in a digital society." In this framework, shown in Fig. 2.5, digital literacy capabilities include ICT proficiency (functional skills); information, data, and media literacies (critical use); digital creation, problem solving, and innovation (creative production); digital communication, collaboration,
Fig. 2.5 JISC digital literacy capabilities (Image permission: JISC)

and partnership (participation); digital learning and development (development); and digital identity and wellbeing (self-actualizing). The JISC digital literacy capabilities are designed for use in curricular and staff development or in designing digital badges for demonstration of certain digital competencies. Each capacity is clearly defined; for example, ICT proficiency includes “the confident adoption of new devices, applications, software and services and the capacity to stay up to date with ICT as it evolves” and “an understanding of how digital technology is changing practices at work, at home, in social and in public life.” While each component includes great detail, and mention is made of the capacity to “collate, manage, access and use digital data in spreadsheets, databases and other formats” and to understand “how personal data may be collected and used,” no component includes the capability to enable constructive, collaborative work with nonhuman agents. Digital literacy here consists of the constructive use of machines, and although
media literacy includes an understanding of digital media as "a social, political and educational tool," this capacity focuses mainly on receiving and responding to text, graphical, video, animation, and audio media. According to Gourlay and Oliver (2016), such use of JISC and other frameworks that seek to define digital literacy "based on capabilities or features of learners" may lose sight "of important aspects of student engagement with technologies" (p. 78). Gourlay and Oliver prefer the European Union's DigEuLit project definition provided by Martin and Grudziecki (2006):

Digital Literacy is the awareness, attitude and ability of individuals to appropriately use digital tools and facilities to identify, access, manage, integrate, evaluate, analyse and synthesize digital resources, construct new knowledge, create media expressions, and communicate with others, in the context of specific life situations, in order to enable constructive social action; and to reflect upon this process. (p. 255)

Although more expansive and social, this definition again focuses on the ability "to appropriately use" digital tools; it does not focus on collaborating "with" such tools. In contrast, Kelly (2019) explores how recent augmented reality (AR) developments will eventually create a "mirrorworld" of the physical space and objects around us, a world of "virtual fragments stitched together to form a shared, persistent place that will parallel the real world." In terms of digital literacy, Kelly writes, "Just as past generations gained textual literacy in school, learning how to master the written word, from alphabets to indexes, the next generation will master visual literacy. A properly educated person will be able to create a 3D image inside of a 3D landscape nearly as fast as one can type today… . It will be the Photonic Era." A mirrorworld will be achieved through an augmented reality cloud (AR cloud), discussed in science and technology forums as well as the popular science press. Digital literacy for writing futures therefore means no longer viewing human and machine as separate agents; it also means cultivating the ability to envision and write within mirrorworlds of virtual fragments stitched together.
It requires reaching beyond current copresent relationships with agents through devices and fostering the ability to

• Expand digital literacy to include visual literacy through work with augmented and virtual reality;
• Understand technological embodiment, agency developed out of human and nonhuman collaboration—for example, how one's writing capacities are extended through and with device collaboration;
• Identify collaboration-enabling features—for example, articulating the ways that human–nonhuman collaboration shapes communicative actions and interactive behaviors;
• View human–machine collaboration as a dialogic conversation, assuming that a collaborative, integrated relationship will develop between human and nonhuman agent, leading to greater knowledge by both human and agent; and
• Project how one's body adapts to networks, assemblages, or even as a host for future human and nonhuman collaboration in ambient interactive relationships.


As Pedersen (2020) notes, "body networks will hyper accelerate embodied computing adoption, which in turn, instigates adaptation" (p. 25). Quoting Hayles's (2012) discussion of technogenesis, Pedersen emphasizes that the future is "about adaptation, the fit between organisms and their environments, recognizing that both sides of the engagement (human and technologies) are undergoing coordinated transformations" (p. 81). As we commingle our cognitive space with technology (Fitz & Reiner, 2016), we might well use the "vitality" subsequently generated as a means to address civic challenges.

Prompt

Again, view the opening video. Consider how each character's capacities become extended through and with device collaboration. Identify how each team member uses devices to shape communicative actions and interactive behaviors. View each member's world through the potential creation of a "mirrorworld" for collaborating with a "shared, persistent place that will parallel the real world" (Kelly, 2019). Note how each team member's human–machine collaboration is a dialogic conversation, and project how each member's body might become a network to host human and nonhuman collaboration.

2.3 What Civic Challenges Demand Collaborative, Constructive Social Action Through and with Nonhuman Agents?

Seeber and colleagues (2020), in a recent international initiative in which 65 collaboration scientists developed a research agenda on the risks and benefits of machines as teammates, begin with the following scenario:

A typhoon has knocked out the infrastructure of a small nation. Hundreds of thousands of people in hard-to-reach places lack food, water, power, and medical care. The situation is complex—solutions that address one challenge create new ones. To find a workable solution, your emergency response team must balance hundreds of physical, social, political, emotional, and ethical considerations. It is mind-boggling to keep track of all the competing concerns. One teammate, though, seems to have a special talent for assessing the many implications of a proposed course of action. She remembers the legal limitations of the governor's emergency powers, locations of key emergency supplies, and every step of the various emergency procedures for hospitals, schools, and zoos. Her insightful suggestions contribute to the team's success in saving thousands of lives. But she is not human; she is a machine. (p. 1)

Innumerable civic challenges demand collaborative, constructive social action through and with nonhuman agents. The most striking challenges include those currently stemming from pandemics, climate change, and structural racism. While no device at present has the ability to "collaborate" across human dimensions for "forming, storming, norming, and performing" as part of teams, use of emerging
technologies can contribute to improved knowledge sharing, task performance, satisfaction with process and outcomes, and shared understanding (Bittner & Leimeister, 2014). A leading technical communicator at Boston Scientific recently shared with us how important it is for her teams to "utilize a technological tool to organize data and information that allows you to write about it." She described her need to constantly look for technological tools that support her team's work, and she emphasized the importance of becoming literate in those tools to handle the volumes of information: "It's so voluminous that if you don't find a technological tool to manage it, you really just can't keep up. We're trying to scrape scientific literature right now through AI tools so that we can spend less time reading a lot of literature and more time writing about it." These tools have become collaborative teammates as her teams scour literature in support of work to design scientific instruments.

According to Seeber et al., for machines to be effective teammates, "they will need to be more capable than today's chatbots, social robots, or digital assistants that support team collaboration. They will need to engage in at least some of the steps in a complex problem solving process, i.e., defining a problem, identifying root causes, proposing and evaluating solutions, choosing among options, making plans, taking actions, learning from past interactions, and participating in after-action reviews. Such machine partners would have the potential to considerably enhance team collaboration" (pp. 1–2). In terms of work practice design, Seeber et al. posit these future research questions surrounding collaboration and decision-making:

• How can we engage machine teammates in collaboration processes?
• How can we systematically design machines as teammates in a human-centric way?
• How ready are our tools and techniques for engineering collaborative processes for modeling future collaborative processes? (p. 6)

How might we envision writing futures that position collaboration with nonhuman agents as a means to incite change—real rhetorical and material change—across our classrooms, industries, and communities? What constructive social action might be deployed through and with nonhuman agents through devices?

Consider remote health. Soon your doctor will be able to wirelessly track your health—even through walls. Imagine a box, similar to a Wi-Fi router, that sits in your home and tracks all kinds of physiological signals as you move from room to room: breathing, heart rate, sleep, gait, and more. Dina Katabi, a professor of electrical engineering and computer science at MIT, built such a box in her lab. And in the not-so-distant future, she believes, it will be able to replace the array of expensive, bulky, uncomfortable gear we currently need to get clinical data about the body.

At the time of this writing, the COVID-19 pandemic rages on, and timely translations of information into languages other than English are urgently needed. To meet this need, IVANNOVATION Language Management added a service based on its new COVID-19 Response Translation System (CRTS) to support health providers and the public at large. It has developed specialized COVID-19 language resources based on a massive collection of bilingual health
materials in a number of languages. These resources, used by human medical translators, will lower the cost and increase the speed of translating COVID-19-related materials. IVANNOVATION has taken this step due to the serious lack of COVID-19 information for America's approximately 25 million people with low English proficiency.

Consider remote learning and how AR and VR technologies such as Spatial might be used as instructors and students share augmented "remote" workspaces for brainstorming, developing, and sharing content; see the video "Spatial—Collaborate from Anywhere in AR" at the Fabric site. Consider the difficulty of writing for one who has tremors from Parkinson's disease; see the invention that helped Emma Lawton to write again.

In Chap. 4, "Autonomous Writing Futures," we discuss AI writing in detail, noting how cowriting content with AI is eclipsing the older notion of AI assistantship. Increasingly, writing futures will involve a much greater breadth of writing products, including research platforms, grammar and tone editing websites and apps, text summarizers, content automation, and, finally, full-scale AI article authoring. All of these renditions involve collaboration. As part of the Writing Futures framework, we ask readers to abandon nostalgic notions of solo proprietary authorship and to embrace writing as dialogic, sociotechnological construction of knowledge. The core guiding principle is collaboration as readers work with human and nonhuman collaborators. We challenge readers to focus on enabling constructive, collaborative social action to foster writing futures that address grand challenges. To do so requires integral understanding of algorithms, analytics, and artificial intelligence.

References

Anders, A. (2016). Team communication platforms and emergent social collaboration practices. International Journal of Business Communication, 53(2), 224–261.
Baker, B. (2015). The science of team science. BioScience, 65(7), 639–644.
Bennett, J. (2004). The force of things: Steps toward an ecology of matter. Political Theory, 32(3), 347–372.
Bittner, E. A. C., & Leimeister, J. M. (2014). Creating shared understanding in heterogeneous work groups: Why it matters and how to achieve it. Journal of Management Information Systems, 31(1), 111–144. https://doi.org/10.2753/MIS0742-1222310106.
Bleich, D. (1995). Collaboration and the pedagogy of disclosure. College English, 57(1), 43–61.
Breuch, L. K. (2002). Thinking critically about technological literacy: Developing a framework to guide computer pedagogy in technical communication. Technical Communication Quarterly, 11(3), 267–288.
Bruffee, K. A. (1984). Collaborative learning and the conversation of mankind. College English, 46(7), 635–652.
Bruffee, K. A. (1998). Collaborative learning: Higher education, interdependence, and the authority of knowledge. Johns Hopkins University Press.
Brumberger, E. R., Lauer, C., & Northcut, K. (2013). Technological literacy in the visual communication classroom: Reconciling principles and practice for the "whole" communicator. Programmatic Perspectives, 5(2), 171–196.
Cargile Cook, K. (2002). Layered literacies: A theoretical frame for technical communication pedagogy. Technical Communication Quarterly, 11(1), 5–29.
Chism, N. (2006). Challenging traditional assumptions and rethinking learning spaces. In D. G. Oblinger (Ed.), Learning spaces (pp. 2.1–2.12). Educause.
Clark, A. (2019, March 25). Human AI collaboration: How the rise of AI can also be the rise of us. Telegraph. https://www.telegraph.co.uk/technology/information-age/human-ai-collaboration/.
Duffy, W. (2014). Collaboration (in) theory: Reworking the social turn's conversational imperative. College English, 76(5), 416–435.
Duin, A. H., & Breuch, L. K. (in press). Writer identity, literacy, and collaboration: 20 technical communication leaders in 2020. In L. Arduser (Ed.), Workplace writing. CSU Press TPC Foundations and Innovations series. Colorado State University Open Press.
Duin, A. H., & Tham, J. (2018). Cultivating code literacy: A case study of course redesign through advisory board engagement. ACM SIGDOC, Communication Design Quarterly, 6(3), 44–58. https://www.dropbox.com/s/949ncc22td6xf7c/CDQ_6.3_2018.pdf?dl=0.
Duin, A. H., Moses, J., McGrath, M., & Tham, J. (2016). Wearable computing, wearable composing: New dimensions in composition pedagogy. Computers and Composition Online. http://cconlinejournal.org/wearable/.
Duin, A. H., Tham, J., & Pedersen, I. (2021). The rhetoric, science, and technology of 21st century collaboration. In M. Klein (Ed.), Effective teaching of technical communication: Theory, practice and application (pp. 169–192). ATTW Book Series in Technical and Professional Communication. Routledge.
Ede, L., & Lunsford, A. A. (1984). Audience addressed/audience invoked: The role of audience in composition theory and pedagogy. College Composition and Communication, 35(2), 155–171.
Ede, L., & Lunsford, A. A. (1985). Let them write—Together. English Quarterly, 18, 119–127.
Ede, L., & Lunsford, A. A. (1990). Singular texts/plural authors. Southern Illinois University Press.
Ede, L., & Lunsford, A. A. (2001). Collaboration and concepts of authorship. PMLA, 116(2), 354–369.
Ede, L., & Lunsford, A. A. (2009). Among the audience: On audience in an age of new literacies. In M. E. Weiser, B. M. Fehler, & A. M. Gonzalez (Eds.), Engaging audience: Writing in an age of new literacies (pp. 42–72). NCTE.
Elsevier. (2019). Data visualization. https://www.elsevier.com/authors/author-resources/data-visualization.
Ferrar, J. (2019). Development of a framework for digital literacy. Reference Services Review, 47(2), 91–105.
Fitz, N. S., & Reiner, P. B. (2016). Perspective: Time to expand the mind. Nature, 531(S9). https://www.nature.com/articles/531S9a.pdf.
Fluckinger, D. (2019). AI in enterprise collaboration platforms: A comparison. Search Content Management. https://searchcontentmanagement.techtarget.com/ehandbook/Artificialintelligence-meets-enterprise-collaboration-systems.
Foucault, M. (1998). Essential works of Foucault (1954–84): Vol. 2. Aesthetics, method & epistemology. The New Press.
Frank, D. A., & Bolduc, M. (2010). Lucie Olbrechts-Tyteca's New Rhetoric. Quarterly Journal of Speech, 96(2), 141–163.
Gourlay, L., & Oliver, M. (2016). It's not all about the learner: Reframing students' digital literacy as sociomaterial practice. In R. Ryberg et al. (Eds.), Research, boundaries, and policy in networked learning (pp. 77–92). Springer.
Hanson Robotics Limited. (2016). Sophia Awakens—Episode 1 [Video]. YouTube. https://www.youtube.com/watch?v=LguXfHKsa0c.
Hayles, N. K. (2012). How we think: Digital media and contemporary technogenesis. University of Chicago Press.
Hovde, M. R., & Renguette, C. C. (2017). Technological literacy: A framework for teaching technical communication software tools. Technical Communication Quarterly, 26(4), 395–411.
Jemielniak, D., & Przegalinska, A. (2020). Collaborative society. MIT Press.
JISC. (2019a). Jisc digital capabilities framework: The six elements defined. http://repository.jisc.ac.uk/7278/1/BDCP-DC-Framework-Individual-6E-110319.pdf.
JISC. (2019b). What is digital capability? https://digitalcapability.jisc.ac.uk/what-is-digital-capability/.
Jones, S. L. (2007). How we collaborate: Reported frequency of technical communicators' collaborative writing activities. Technical Communication, 54(3), 283–294.
Kelly, K. (2019, February 12). AR will spark the next big tech platform—Call it mirrorworld. Wired. https://www.wired.com/story/mirrorworld-ar-next-big-tech-platform/.
Kennedy, K. (2017). Designing for human-machine collaboration: Smart hearing aids as wearable technologies. Communication Design Quarterly, 5(4), 40–51.
Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford University Press.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge University Press.
Lieberman, M. D. (2013). Social: Why our brains are wired to connect. Crown.
Liu, Y. (2019). Designing alternative narratives of human-machine coexistence: An exploration through five machines. https://blog.prototypr.io/designing-alternative-narratives-of-human-machine-coexistence-df3743fb4b0d.
Lunsford, A. A., & Ede, L. (2011). Writing together: Collaboration in theory and practice: A critical sourcebook. Bedford/St. Martin's.
Lupton, D. (2020). Wearable devices: Sociotechnical imaginaries and agential capacities. In I. Pedersen & A. Iliadis (Eds.), Embodied computing: Wearables, implantables, embeddables, ingestibles (pp. 49–69). MIT Press.
Mara, A., & Hawk, B. (2010). Posthuman rhetorics and technical communication. Technical Communication Quarterly, 19(1), 1–11.
Martin, A., & Grudziecki, J. (2006). DigEuLit: Concepts and tools for digital literacy development. Innovation in Teaching and Learning in Information and Computer Sciences, 5(4), 249–267.
McKee, H., & Porter, J. (2017). Professional communication and network interaction: A rhetorical and ethical approach. Routledge.
Melonçon, L. (2013). Toward a theory of technological embodiment. In L. Melonçon (Ed.), Rhetorical accessibility: At the intersection of technical communication and disability studies (pp. 67–82). Baywood.
Northcut, K. M., & Brumberger, E. R. (2010). Resisting the lure of technology-driven design: Pedagogical approaches to visual communication. Journal of Technical Writing and Communication, 40(4), 459–471.
Nystrand, M., Greene, S., & Wiemelt, J. (1993). Where did composition studies come from? An intellectual history. Written Communication, 10, 267–333.
Pedersen, I. (2013). Ready to wear: A rhetoric of wearable computers and reality-shifting media. Parlor Press.
Pedersen, I. (2020). Will the body become a platform? Body networks, datafied bodies, and AI futures. In I. Pedersen & A. Iliadis (Eds.), Embodied computing: Wearables, implantables, embeddables, ingestibles (pp. 21–47). MIT Press.
Pedersen, I., & Iliadis, A. (Eds.). (2020). Embodied computing: Wearables, implantables, embeddables, ingestibles. MIT Press.
Perelman, C., & Olbrechts-Tyteca, L. (1958/1969). The new rhetoric: A treatise on argumentation. University of Notre Dame Press.
Qualley, D., & Chiseri-Strater, E. (1994). Collaboration as reflexive dialogue: A knowing "deeper than reason." JAC: Journal of Advanced Composition, 14(1), 111–130.
Seeber, I., Bittner, E., Briggs, R. O., de Vreede, T., de Vreede, G.-J., Elkins, A., Maier, R., Merz, A. B., Oeste-Reiss, S., Randrup, N., Schwabe, G., & Söllner, M. (2020). Machines as teammates: A research agenda on AI in team collaboration. Information & Management, 57(2). https://doi.org/10.1016/j.im.2019.103174.
Selber, S. (2004). Multiliteracies for a digital age. Southern Illinois University Press.
Stokols, D. (2013, January 11). Methods and tools for strategic team science [Presentation]. Planning Meeting on Interdisciplinary Science Teams, Washington, DC. http://tvworldwide.com/events/nas/130111/.
Stordy, P. (2013). Taxonomy of literacies. Journal of Documentation, 71(3), 456–476.
Thralls, C. (1992). Bakhtin, collaborative partners, and published discourse: A collaborative view of composing. In J. Forman (Ed.), New visions of collaborative writing (pp. 63–82). Heinemann.
Traxler, J. (2018). Digital literacy. Research in Learning Technology, 26, 1–21.
Tsvetkova, M., Yasseri, T., Meyer, E. T., Pickering, J. B., Engen, V., Walland, P., Lüders, M., Følstad, A., & Bravos, G. (2017). Understanding human-machine networks: A cross-disciplinary survey. ACM Computing Surveys, 50(1). https://dl.acm.org/doi/10.1145/3039868.
Turnley, M. (2007). Integrating critical approaches to technology and service-learning projects. Technical Communication Quarterly, 16(1), 103–123.
Volini, E., Denny, B., & Schwartz, J. (2020, May 15). Superteams: Putting AI in the group. Deloitte Insights. https://www2.deloitte.com/us/en/insights/focus/human-capital-trends/2020/human-aicollaboration.html.
Wenger, E. (1998). Communities of practice: Learning, meaning and identity. Cambridge University Press.
Wilson, H. J., & Daugherty, P. R. (2018). Collaborative intelligence: Humans and AI are joining forces. Harvard Business Review, 96(4), 114–123.


Intertext—Writing Machines and Rhetoric

by Heidi McKee and James Porter, Miami University

The question we raise here is, How do we teach rhetoric—the nuanced understanding and practice of effective communication—to machines, so that they can write well for their audiences, purposes, and situations? To explore this question we begin with some history, because many of our current understandings of machine writing are shaped by Enlightenment biases—and particularly mechanical views of communication—passed down through centuries of scientific and technological innovation as well as misconception.

Writing as Mechanical Invention

In 1666 Gottfried Wilhelm Leibniz wrote a dissertation (De Arte Combinatoria) that created the model for an automated knowledge machine that would be programmed based on, as he saw it, the system governing rational thought. He got this idea from Ramon Llull, a thirteenth-century Majorcan mystic who was using Jewish Kabbalists' combinatorics to "generate texts that supposedly revealed prophetic wisdom" (Schwartz, 2019). Llull's was a paper machine, but Leibniz had the idea to make it mechanical and apply it to philosophical reasoning. Leibniz believed "that all human thoughts, no matter how complex, are combinations of basic and fundamental concepts, in much the same way that sentences are combinations of words, and words combinations of letters. He believed that if he could find a way to symbolically represent these fundamental concepts and develop a method by which to combine them logically, then he [and his machine] would be able to generate new thoughts on demand" (Schwartz, 2019). This theory—called combinatorics—is a formalist theory of thinking, reasoning, and rhetoric that remains prominent in some AI-based writing systems such as GPT-3.
In the eighteenth century, Friedrich von Knauss, a German watchmaker and inventor, created several writing machines (History-Computer, n.d.; Lindsay, 1997), including, in 1760, an automaton capable of writing up to 68 stored letters, using a quill pen as its inscription stylus (Moritsch, n.d.). It could "write" only in the sense of inscribing whatever the human operator directed. In 1764 von Knauss created a "writing hand" that dipped a quill pen into an ink bottle and wrote a single Latin prayer to God—but that machine had an even more limited writing range (Museo Galileo, n.d.). These writing machines were not autonomous or smart in any sense, of course, but they were early experiments in the history of writing machines, in a sense an early effort at robotic writing. Tellingly, these writing machine models arose in the Enlightenment era of Western intellectual thought and invention—an era that was generally dismissive of rhetoric, except when it was downright hostile to it. The Enlightenment philosophers, mathematicians, and scientists mostly viewed rhetoric as antithetical to science, as promoting bombastic, metaphoric, inflated language instead of clear, descriptive, perspicuous scientific language (Porter, 2020). Worse yet, rhetoric promoted persuasion of the audience through the emotions rather than the use of facts and reason
to inform and enlighten. So they dreamed of a writing machine that could combine logical, scientific thinking with clear, accurate denotative prose—a machine, based on the theory of combinatorics, that could represent rational thinking in a clear and coherent text—and thus avoid the vagaries, complexities, errors, emotions, uncertainties, and messiness of human thinking and communication. Many of the philosophers and scientists of this era had an instrumental and formal view of communication: thinking originates in the individual writer/thinker, who then inscribes that meaning in a written text by converting concepts into words, and then transmits the thinking-represented-in-words to the reader/audience. In this decidedly arhetorical view, meaning resides solely in the language of the text: language is the medium, the instrument, through which the author's thoughts are transferred directly to the audience, who contribute little if anything to the process. A trickle-down theory of communication.

This instrumentalist view stands in opposition to rhetorical models of writing and meaning-making, known variously as the interaction model, the conversational model, and the social epistemic model. However named, this alternate model differs in a key respect: knowledge/meaning does not arise ex ante from the individual philosopher writer; it is socially constructed through rhetorical interaction, over time, with an audience and in conversation with communities, disciplines, societies, and cultures shaping the contexts of communication. In this model the author is not presumed to be the sole originator of meaning; the author is a collaborative medium through whom ideas—from the audience, from the culture, from the community—percolate and coalesce.

Writing as Rhetorical Invention

Which brings us to the large question, Rhetoric, what is it good for? Rhetoric recognizes the complex and messy process, products, and contexts of communication.
Given that, so far, machines haven't been very good with "messy," does rhetoric have any use or application to the design and development of smart writing machines? Of course we think it does. But what we see in writeups and descriptions of AI-based writing systems is that, with few exceptions, rhetoric seems to be largely missing from the discussion—as it was for Leibniz. Instead, the focus is on writing or language or conversation or text, or on even more atomistic elements like words, sentences, and paragraphs: concepts and units that are key components of rhetoric but that are not the entirety of what rhetoric is or does, and that may, unintentionally, lead us astray from what rhetoric contributes. Some of the machine writing systems that we have looked at, particularly GPT-2 and now GPT-3, work arhetorically, focusing on the text and textual production, assuming that meaning lies in the produced text. They are basically an updated version of combinatorics. The textual product stands in for the process, and when that happens rhetoric is excluded, to the detriment of effective communication.

What might rhetoric contribute—or more precisely, what might a social view of rhetoric contribute? Most importantly, it turns its attention to rhetorical context and to audience, community, culture, and ethics, asking questions such as Why am I communicating? To whom and in what situations? What prior knowledge and needs
do my audience have and how should I adapt the message for their knowledge and needs? What are the cultural factors—of the audience, of the current social climate, kairos—that shape not only how I express my thought but the very content of what I say? What are the ethical implications of what I am saying? And in particular, For whose benefit, value, good am I writing? Cui bono? With some machine writing systems these questions are not even considered or are minimized in favor of textual coherence and linguistic, syntactical correctness. Machine writing systems such as GPT-2 and GPT-3 create new text based on the text provided, textual production building off a database of existing text, but without collecting knowledge of the rhetorical situation—the scene or setting for writing, which includes the purpose, exigence, and audience, the very reason for writing in the first place (McKee & Porter, 2020)—and without considering questions of ethics or applying practical reasoning (Marcus & Davis, 2020), key qualities of a nuanced and effective rhetoric. Such machine writing designs see writing as transmission of thought-via-words, rather than as interaction with others. Several critiques of GPT-3 have noted the system is missing something important—whether practical reasoning (Marcus & Davis, 2020), semantic and ethical awareness (Floridi & Chiriatti, 2020), or rhetorical intelligence. As Floridi and Chiriatti (2020) put it, "GPT-3 is an extraordinary piece of technology, but as intelligent, conscious, smart, aware, perceptive, insightful, sensitive and sensible (etc.) as an old typewriter." The word writing may be part of the problem here. It is an ambiguous word that leads us down the path of oversimplification (as it did for von Knauss).
Writing is both a noun and a verb form—it refers to the written product (the text itself) and to the composing process (which is cognitively complex), but it also refers to the act of inscription, i.e., simply recording letters and words. Writing, as learned in schools for centuries and as understood by von Knauss, refers to handwritten inscription, not rhetoric. Thus, as a term, writing is ripe for oversimplified understandings, and perhaps even encourages them.

Rhetoric and Writing Machine Design

Rhetoric in the broad sense as human interaction isn’t merely the inscribed text, but also the entire scene or setting in which that text percolates, including people (who have various identities, knowledges, needs, priorities), contexts, histories, biases, and ethical implications swirling around language usage. Rhetoric tries to deal with the text and with all that stuff swirling around outside the text—what we call context—that is critical for effective communication. So back to our question: How do we teach machines rhetoric? Some AI writing systems account for rhetoric by situating the machine in very particular contexts. For example, Narrative Science’s writing app Quill produces quarterly earnings reports or, in its GameChanger version, translates the box score of a baseball game into a sports news story. With this app, genre conventions allow humans to lay down very clear tracks for the machine writer to follow. The genre frame provides an anchor that eliminates some (not all) of the messiness and uncertainty of the rhetorical situation.


2 Collaborative Writing Futures

Another approach, incredibly labor intensive, is to provide constant human feedback to the machine on good and bad writing decisions. For example, when x.ai created Amy Ingram, the scheduling assistant bot who interacts through email, they used teams of more than 60 people to review Amy’s messages, to handle complex or ambiguous communications, and to flag and correct errors (McKee & Porter, 2017). Amy couldn’t find the mistakes herself, but human intervention helped Amy refine her rhetoric so mistakes became less frequent. And for a lot of machine systems that’s where we may be for quite a while—close human oversight of process and product. But can writing machines learn rhetoric on their own? It certainly seems that pathways for teaching rhetoric are opening up with the rise of big data, ever-faster processing speeds, and developments in machines using neural networks. For example, the ad copywriting AI system Persado produces text that gains more clicks than copy written by human ad copywriters because Persado is able to more quickly and efficiently draw consumer insights from big data analytics. In a sense, Persado is following an old tried-and-true rhetorical method—audience analysis—but doing it with much more data at hand, to learn more quickly about its audiences (e.g., to learn that consumers of a certain gender, age, race, and ethnicity on this platform click more on X word than Y word). In that sense Persado is at least getting “outside the text” to make more decisions about writing, albeit in a limited way. With the rise of machine learning and the computer processing of data, some nonwriting machines are beginning to work rhetorically on their own, moving beyond initial programming to fill in missing information and adapt communications and actions accordingly.
Pluribus, Carnegie Mellon’s poker-playing AI, did a remarkable thing in 2019: winning at no-limit Texas Hold’em against not just one player (as Libratus, an earlier AI, did), but against five other human players, all experts who had each won at least $1 million professionally. By working in real time with multiple data inputs and with imperfect information, Pluribus made inferences and chose actions that, in a zero-sum game, were successful. Pluribus’s programmers did not teach it poker strategy, but rather gave it the rudimentary rules of poker and had it play trillions of hands, learning from each situation and creating a “blueprint strategy.” As Noam Brown and Tuomas Sandholm (2019), Pluribus’s creators, explained, “Then during actual play against opponents, Pluribus improves upon the blueprint strategy by searching for a better strategy in real time for situations in which it finds itself” (p. 886). In a sense, then, Pluribus was learning rhetoric—the rhetoric of poker—through its interactions with other players.

Conclusion

Of course human communication is not a zero-sum game like poker; it is a realm of imperfect, incomplete information and inferences. Throughout our lives, we humans are immersed in written and spoken language, and from every new rhetorical situation we are learning about language and about the rhetorical factors, both evident and inferred, that shape our interactions with others. Like Pluribus, we learn rhetoric through these interactions. But we do not learn, or learn well, through interactions alone. Tay, Microsoft’s Twitter bot launched in 2016, posted over 96,000 times, and she was a spectacular


failure: her interactions with others taught her to be a hateful racist troll. Although she was programmed with grammatical and syntactic competence, she did not possess practical reasoning, basic ethics, or the rhetorical competence necessary to filter her interactions. Just as Pluribus has the rules of poker, human writers and machine writers need guiderails, principles to guide our interactions and to help us make decisions about what is best to say. That’s where rhetoric comes in. Rhetoric provides the overarching principles for communicative interaction—rules and principles that include grammatical and syntactic guidelines but that go well beyond them. For example, we teach elementary principles of rhetorical ethics to children, principles of audience respect such as “Be polite to others,” “Say ‘please’ and ‘thank you,’” and “Don’t call people rude names.” Tay did not possess even this very basic level of communication ethics. With the AI-based writing systems out there right now, we are still working either with mechanistic models of language production or with models that require a huge amount of human intervention. Designers of AI-based writing systems need a language model that is tied to interactive meaning-making rather than a purely structural one: a more expansive model that looks at the richness and messiness of context and that provides guidance in the form of principles of reasoning and ethics. Our goal for writing machine design should be to provide immersive self-learning opportunities for machines—the millions of interactions—but those interactions also need to be guided by something more than structural rules. We think rhetoric—an expansive social rhetoric—provides exactly that kind of model.

References

Brown, N., & Sandholm, T. (2019). Superhuman AI for multiplayer poker. Science, 365, 885–890.
Floridi, L., & Chiriatti, M. (2020). GPT-3: Its nature, scope, limits, and consequences. Minds and Machines. https://doi.org/10.1007/s11023-020-09548-1
History-Computer. (n.d.). Friedrich von Knauss. https://history-computer.com/Dreamers/Knauss.html
Lindsay, D. (1997). Talking head. Invention & Technology, 13(1). https://www.inventionandtech.com/content/talking-head-1
Marcus, G., & Davis, E. (2020, August 22). GPT-3, Bloviator: OpenAI’s language generator has no idea what it’s talking about. MIT Technology Review. https://www.technologyreview.com/2020/08/22/1007539/gpt3-openai-language-generator-artificial-intelligence-ai-opinion/
McKee, H. A., & Porter, J. E. (2017). Professional communication and network interaction: A rhetorical and ethical approach. Routledge.
McKee, H. A., & Porter, J. E. (2020). Ethics for AI writing: The importance of rhetorical context. In Proceedings of 2020 AAAI/ACM Conference on AI, Ethics, and Society (AIES’20) (pp. 110–116). https://doi.org/10.1145/3375627.3375811
Moritsch, O. (n.d.). All writing miraculous machine: Writing apparatus. Technisches Museum Wien. https://www.technischesmuseum.at/object/all-writing-miraculous-machine
Museo Galileo. (n.d.). La mano che scrive. https://catalogo.museogalileo.it/oggetto/ManoCheScrive.html


Porter, J. (2020). Recovering a good rhetoric: Rhetoric as techne and praxis. In J. Duffy & L. Agnew (Eds.), Rewriting Plato’s legacy: Ethics, rhetoric, and writing studies (pp. 15–36). Utah State University Press.
Schwartz, O. (2019, November 4). In the 17th century, Leibniz dreamed of a machine that could calculate ideas. IEEE Spectrum. https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/in-the-17th-century-leibniz-dreamed-of-a-machine-that-could-calculate-ideas

Chapter 3

Algorithmic Writing Futures

3.1 How Will Algorithms and AI Inform Writing?

On July 29, 2020, the chief executives of Amazon, Apple, Google, and Facebook, “worth nearly $5 trillion combined” (Kang & McCabe, 2020), faced the US House Judiciary Committee’s antitrust subcommittee, chaired by Representative David Cicilline. His opening statement was “Our founders would not bow before a king. Nor should we bow before the emperors of the online economy” (O’Mara, 2020). There was a common denominator: algorithms. Google’s CEO was questioned about using algorithms to steer web traffic to its own products and pages. Facebook was accused of political manipulation on its social media platforms. Apple defended algorithms used at the App Store to promote or demote access for certain external developers. Amazon answered questions on the issue of using algorithms to categorize third-party “seller-specific data” to benefit its own products, leading to the claim that Amazon “controls 75 percent of all online marketplace sales” (Lohr, 2020). Cicilline’s metaphor of the digital empire led by these “emperors” is fitting. The day signified an underlying theme, that “this cohort of tech companies—which wield immense control over commerce, communications and public discourse—had become the new trusts of the internet age” (Kang & McCabe, 2020). Algorithms reconstitute socio-technical and cognitive assemblages in material, discursive, and ultimately transformative terms, altering people’s lived reality (Table 3.1). To explore algorithmic control and algorithmic culture, we encourage readers to browse the Writing Futures collection in Fabric under the keyword algorithms as a starting point. The breadth of concepts related to the phenomena is significant. To explore changing communication practices specifically, view the collection’s items on algorithms and communication. We also encourage viewing a smaller collection in Fabric called Workplace Sociality and Well-being (2017–2018) by curator Suneel Jethani.
It demonstrates how sensors and data gathering are designed to augment and support behavioral change in the workplace through algorithms. It queries “worker agency and dissonance in the age of worker tracking and metrification” (Jethani, 2017–2018) in creative ways. It addresses practices surrounding algorithms that on the one hand help people with work in innovative ways, while on the other hand might include managerial control in problematic ways.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. A. H. Duin and I. Pedersen, Writing Futures: Collaborative, Algorithmic, Autonomous, Studies in Computational Intelligence 969, https://doi.org/10.1007/978-3-030-70928-0_3

Table 3.1 Writing Futures framework, algorithmic writing futures

Social: How will algorithms and AI inform writing? Ambient intelligence; Platform studies; Demographics; Algorithmic AI; Machine learning; Virtual assistants

Literacy: What AI literacies should we cultivate for algorithmic writing futures? Academic analytics; Learning management systems; AI literacy

Civic engagement: How might AI help to recognize, ameliorate, and address global civic challenges? Harvard’s Principled Artificial Intelligence project as a heuristic; Writing for ethically aligned design

Every computer program consists of dozens if not millions of algorithms to perform calculations and organize data. Pinning down trajectories for writing futures is a challenge in this context. Hayles (2017) writes of the dynamic relationship between humans and nonhumans that recognizes profound change: “Each technical object has a set of design specifications determining how it will behave. When objects join in networks and interact/intra-act with human partners, the potential for surprises and unexpected results increases exponentially” (p. 160). De Visser et al. (2018) also write about unpredictable results that these systems instigate, that “autonomy will surprise human partners to an even greater extent than simple automated systems” (p. 1410). How do we navigate writing futures in an era of massive algorithmic control that has completely reconstituted “communications and public discourse” (Kang & McCabe, 2020)? If algorithms are the active, ever-present grounds for all of our activities, how do we define our interaction with nonhuman agents in responsible ways? How will we prepare for an unpredictable landscape in future writing contexts?

3.1.1 Understanding Algorithms

The word algorithm is a loaded term. As Thompson (2019) puts it, “there’s a sort of priestly class mystery cultivated around the word algorithm, but all they consist of are instructions. Do this, then do this, then do this” (p. 10). Simply put, “an algorithm is a sequence of instructions telling a computer what to do” (Domingos, 2015, p. 1). Specifically, in math or computer science, an algorithm is “a finite series of well-defined, computer-implementable instructions to solve a specific set of computable problems” (MathVault). Algorithms are “encoded procedures for transforming input data into a desired output, based on specific calculations” (Gillespie, 2014, p. 167). Therefore, algorithms are meant to have meaningful consequences. As Roberge and Melançon (2017) write, “Data are not merely created and accumulated


to then become the remote object of algorithms; rather, algorithms work constantly on data by organizing and manipulating them” (p. 306). Writing professionals, knowledge workers, and those who produce content in the creative industries have to navigate a complex space whereby algorithms dictate so many decision-making processes. Algorithmic culture complicates digital life as it paradoxically aims to simplify it. Most people encounter algorithms through platforms that mediate ubiquitous everyday activities like building social media pages, searching for information, connecting with friends and family, or buying things online. To a great extent, algorithms make life easier. But we must acknowledge algorithmic control that constantly modifies how people consume goods and services, compose online identities, work, learn, and organize lives. Ethical critiques of big data and predictive analytics have revealed manipulation (O’Neil, 2016). Willson (2020) summarizes: “Numerous claims as to algorithmic function and consequences abound. Algorithms are claimed to shape the way in which we conduct our friendships (Bucher, 2013), shape our identities (Cheney-Lippold, 2011) and navigate our lives (Beer, 2009). They are claimed as being culture (Seaver, 2017) and as shaping culture (Striphas, 2015); to govern our decision making and regulating practices (Yeung, 2017)” (p. 252). We also must acknowledge that algorithms appear dynamic because the base of data underlying the algorithm is forever changing; as Willson notes, “algorithms need to be understood as dynamic and always in a state of becoming or in beta mode and thus the outcomes with which they have been delegated are also constantly shifting and changing” (p. 254). Our delegation of tasks to technologies results in “both intended and unintended consequences” (p. 255). Algorithms are not only personalized for our usage, they are increasingly personified. 
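The definitions above, a sequence of instructions transforming input data into a desired output, can be made concrete with a deliberately toy example. The sketch below, with invented field names and scoring logic, shows such an encoded procedure ranking a feed of posts; it illustrates the concept only and is not any platform's actual algorithm.

```python
# A toy, hypothetical feed-ranking algorithm: encoded procedures that
# transform input data (posts) into a desired output (a ranked feed).

def rank_feed(posts, now):
    """Score each post by engagement, discounted by age in hours."""
    def score(post):
        age_hours = (now - post["posted_at"]) / 3600
        return post["likes"] / (1 + age_hours)  # fresher posts rank higher
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": "a", "likes": 100, "posted_at": 0},      # old but popular
    {"id": "b", "likes": 30,  "posted_at": 82800},  # recent, modest likes
]
now = 86400  # 24 hours after the first post, in seconds

print([p["id"] for p in rank_feed(posts, now)])  # → ['b', 'a']
```

Note what the toy already demonstrates: the output order is decided not by raw popularity but by whatever weighting the procedure encodes, and the ranking shifts as the underlying data changes, echoing Willson's point that algorithms are "always in a state of becoming."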
Chatbots and virtual assistants are often misidentified as the same technology because they share common traits. Both use anthropomorphized human–computer interfaces that can appear friendly or humanlike. Traditionally, chatbots “are programmed specifically for some selective purposes and they generally perform repetitive tasks. They cannot modulate replies and are never unpredictable in their work” (Misal, 2018). Hence, they are less autonomous and perform more like dynamic FAQs, answering routine questions. Virtual assistants are intended to be (to varying degrees) autonomous, decision-making agents. Misal (2018) distinguishes a virtual assistant through its “tasks like decision making and e-commerce. It can perform activities like sharing jokes, playing music, stock market updates and even controlling the electronic gadgets in the room. Unlike chatbots, virtual assistants mature gradually with use.” However, these technologies are converging; AI functional technologies inform the behaviors of chatbots to enable more useful, usable, and humanlike content. AI-enabled conversational chatbots are proposed to become more collaborative in an attempt to humanize user experiences in content design (Ding et al., 2019). We concentrate on virtual assistants in Chap. 4 to explore the future of autonomous agents and the emergence of natural language processing (NLP) in writing spheres. Algorithms also change the way we interact with digital environments and the reasons for those interactions. Pedersen (2020) speculates that algorithmic control will become much more embodied in the future. People will no longer acknowledge


these digital activities as units or events, and the perceived future trajectory is that even greater immersion will occur. Our fitness trackers, smartwatches, phones, and tablets help to constitute our digital selves now. They move with us, monitor us, and stay in constant communication. Pedersen argues that the “drive for seamless, constant connection [with agents] makes the assumption that the body will [eventually] be the interface.” This claim is premised on advancements and coordination of algorithmic systems and changing motives for seamlessness. She explains, “With the coming era of body networks, the proposition is to progress seamlessness one more step, whereby the body is the channel (Astrin et al. 2009). The idea is to make bodily monitoring (like that of skin tech) and data transfer more active and direct” (p. 31). Working in concert with this speculation is the concept of ambient interfaces that will bring the body into a complete ecosystem of “topographical [wearable devices], visceral [implanted sensors], and ambient relationships with the body” (Pedersen & Iliadis, 2020, p. xi). Ambient interfaces assume that subjects will behave as data-blended entities that contribute data to third-party platforms (Pedersen & Iliadis, 2020, p. xi). In this scenario, platforms designed for urban infrastructures, the smart city paradigm, will enfold human subjects in dataspheres more pervasively. One has to prepare for embodied algorithmic control that will extend monitoring of cognitive tasks, affective biofeedback, and physical location further than before. Kim’s (2018) study of levels of student engagement deals with an ambient intelligence algorithm for such monitoring.
In this case, galvanic skin response sensors were attached between the fingers of students in the class; data was monitored and stored, contributing to the “built-up big data.” The algorithm then evaluated “the psychological states of students by measuring a thermal infrared image” (p. 3847) and then provided this information to the instructor—like traffic lights—“in real time in response to the concentration levels of the students in the class” (p. 3852). Similar to radar imaging, this thermal infrared imaging provides a noninvasive and contactless evaluation of “vital signs” and “emotional states” for student engagement in class. According to Kim, “this algorithm provides the learner’s level of immersion in real time as quantitative indices rather than qualitative ones” (p. 3852). Others point to ambient intelligence as a setting rather than a series of interactions, including Montebello (2019), who describes it as follows:

When the target service or product are the environment or the ambient itself that surrounds the user, then an Ambient Intelligent (AmI) setting has been established. The surroundings become sensitive and perceptive to the user’s characteristics, needs, interests and actions to an extent that they accommodate the same user and transform, modify or adjust the environment itself. (p. 11)
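Kim's "traffic light" indicator can be pictured with a minimal sketch. The engagement index, thresholds, and function name below are invented for illustration; Kim's actual algorithm derives its quantitative indices from thermal infrared imaging and galvanic skin response data.

```python
# Hypothetical sketch of a traffic-light engagement indicator: map a
# continuously monitored engagement index (0.0 to 1.0, assumed to be
# derived from sensor data) to a real-time signal for the instructor.
# The thresholds are invented for illustration only.

def engagement_light(index):
    if index >= 0.7:
        return "green"   # high concentration
    if index >= 0.4:
        return "yellow"  # wavering attention
    return "red"         # disengaged

readings = [0.85, 0.55, 0.20]
print([engagement_light(r) for r in readings])  # → ['green', 'yellow', 'red']
```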

Interpreted this way, algorithms lead to an architecture devoted to personalization. Montebello promotes the Ambient Intelligent Classroom for numerous reasons, one being simultaneity: “Sensors within an AmI environment are also capable of monitoring, detecting, and capturing multiple actions and conditions that are simultaneously occurring within the same environment” (p. 37). He explains that “all of the simultaneous events do not go unnoticed or overlooked as the background system is


continuously ensuring that every action and each occurrence that happens within the class is duly addressed” (p. 38). Ambient intelligent environments are emerging in other sectors including work, home, hospitals, stores, and cultural institutions. As part of algorithmic writing futures, we have to imagine how physical and digital spheres will incorporate us as subjects and citizens.

Prompt

Consider reviews of, and experiment with suggestions on, the “best” AI writing assistant software as of this book’s writing. These vary from simple assistance with errors to the generation of essay or social media text. To be considered an AI writing assistant, a tool must use AI, provide insight into or enhance written work, offer relevant resources to inform writing, and correct grammar and other errors (Jessica, 2020). The following are examples from this review:

WritingAssistant assesses and enhances writing by flagging errors, offering suggestions based on grammar, and providing insights on the quality of writing.
Grammarly Business assists communication at work, providing suggestions on word choice, tone, clarity, and grammar.
ProWritingAid eliminates common errors, notes inconsistent terminology, and also addresses common grammar and spelling errors.
Writer (formerly Qordoba) assists with scaling content for consistency and clarity across marketing, technical documentation, HR and legal policies, etc.
Lightkey delivers AI-powered predictive typing and correction in 85 languages.
WordAI uses AI to understand and rewrite articles for greater readability.

3.1.2 Platform Studies

Although algorithms are used in many applications, most people encounter them through platforms. Platform studies have long situated algorithms within a broader network of socioeconomic phenomena amid the growing “gig economy” (Gillespie, 2010, 2018; Pasquale, 2015, 2017; Zuboff, 2019; Tan et al., 2020; Christin, 2020)—the prevalence of short-term and freelance contracts in place of more permanent positions. The practice of writing in the gig economy changes because the labor market has transformed so significantly. Digital platforms offer writers greater opportunities to participate globally in a delocalized economy. The “delocalised gig economy refers to services that are offered regardless of worker-requester location, as for instance on crowd-work platforms” (Tan et al., 2020, p. 3). Global freelancing platforms like Upwork and Fiverr provide a means for short one-off jobs or writing microtasks. These writing platforms use matching algorithms to link writers with clients. ContentFly uses a rating system to tier a writer’s access to jobs. The goal is to work one’s way up to a larger pool of client requests. Both clients and writers work through dashboards to connect. There are good reasons for people to write for these platforms,


including the promise of steady work. However, labor issues such as low wages and lack of benefits have been discussed in gig economy work (Livni, 2019). At the same time, there are longstanding fundamental ethical issues with the way platforms manage algorithms, and we pointed to some of them at the very start of this chapter. Pasquale provides a critique of proprietary endeavors by large corporations in The Black Box Society: The Secret Algorithms that Control Money and Information (2015). He identifies a key conundrum: How has secrecy become so important to industries ranging from Wall Street to Silicon Valley? What are the social implications of the invisible practices that hide the way people and businesses are labeled and treated? … But first, we must fully understand the problem. The term ‘black box’ is a useful metaphor for doing so, given its own dual meaning. It can refer to a recording device, like the data-monitoring systems in planes, trains, and cars. Or it can mean a system whose workings are mysterious; we can observe its inputs and outputs, but we cannot tell how one becomes the other. We face these two meanings daily. (p. 3)
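The rating-tiered matching described above for freelance writing platforms might be sketched as follows. The tier boundaries, field names, and function are hypothetical; actual platform matching algorithms are proprietary, which is precisely the opacity Pasquale's "black box" metaphor names.

```python
# Purely hypothetical sketch of rating-tiered job matching: a writer's
# average client rating gates which pool of requests the matching
# algorithm will show them. Tier cutoffs and fields are invented.

def visible_jobs(writer_rating, jobs):
    """Return the jobs whose required tier the writer's rating unlocks."""
    if writer_rating >= 4.5:
        tier = 3  # full client pool
    elif writer_rating >= 3.5:
        tier = 2
    else:
        tier = 1  # entry-level microtasks only
    return [j for j in jobs if j["min_tier"] <= tier]

jobs = [
    {"title": "product blurb",  "min_tier": 1},
    {"title": "white paper",    "min_tier": 2},
    {"title": "brand campaign", "min_tier": 3},
]
print([j["title"] for j in visible_jobs(3.8, jobs)])  # → ['product blurb', 'white paper']
```

Even this toy makes the labor stakes visible: a few lines of thresholding decide which work a writer can see at all, and the writer has no view into where the cutoffs sit.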

Later in this chapter, we discuss working to design and situate content in ways that search algorithms will pick up and prioritize. Learning to write and adapting writing practices with AI agents empowers human writers. Eatman (2020), in her recent study of image recognition algorithms as a means to explore the cultural rhetoric of machine vision, adds these critical questions: “How do algorithmic interfaces construct user, machine, and the relationship between the two? What kind of viewer is the artificial neural network learning to be, and who does the user have to become to facilitate that transformation?” (p. 2). She focuses on three applications within Google’s AI Experiments: Teachable Machine, Quick Draw!, and Emoji Scavenger Hunt, noting the following: Their emphasis on playful human/machine collaboration elides the forms of power that operate in, around, and through the neural network, including the ways in which this learning and viewing process requires the user to adopt a set of cultural norms. In order to ‘win’ at these interfaces, the user must operate according to the algorithm’s logic. If she cannot or will not adopt this logic, the user not only loses the game but is also excluded from the larger cultural space that these interfaces cultivate. (p. 2)

Eatman argues that we must attend to how algorithmic procedures suggest “enduring relationships between user and algorithm,” relationships that increasingly accumulate and permeate public life. Even as more information is made available and content becomes more transparent, the true nature of algorithms and their inner workings remains hidden in black boxes. The ambiguities bound up with algorithms become a lived phenomenon for everyday people. The opacity of algorithms persists. Therefore, we encourage our audiences and students to think critically about the workings of algorithms as they evolve. In the section below on learning analytics, we survey some of the challenges and limitations involving learning management systems, which are platforms for


students, instructors, and administrators, increasingly run by large corporate third parties.

Prompt

Google’s AI Experiments continue to evolve. Visit the site, explore the collections, read the blog posts, and launch an experiment. For example, AI+Writing experiments at the time of drafting this chapter include Banter Bot, which lets you create and explore the mind of your character by chatting with them as you write, and Between the Lines, which collaborates with you to generate text between sentences “so that as you write you are taken in unexpected directions.” Note that this particular experiment is powered by machine learning models (TensorFlow, Polymer, GPT-2). As professional and technical communicators and/or engineers, consider AI+Writing directions for documentation, structured authoring, and content management.

3.1.3 Demographics

Demographics is also an important consideration for understanding the effects of algorithms because it helps define how learning, writing, and working will evolve for different generations. Data and algorithmic literacy are partly “tied to age” (McStay & Rosner, 2020, p. 6). Two cohorts experiencing algorithmic culture most intensively are post-Millennials or Generation Z, born after 1996, and Generation Alpha, born after 2010. Post-internet generations are socialized to communicate from a very young age through smartphones, tablets, and wearable technologies. Commenting on a Pew Research study, Dimock (2019) writes:

The iPhone launched in 2007, when the oldest Gen Zers were 10. By the time they were in their teens, the primary means by which young Americans connected with the web was through mobile devices, WiFi and high-bandwidth cellular service. Social media, constant connectivity and on-demand entertainment and communication are innovations Millennials adapted to as they came of age. For those born after 1996, these are largely assumed.

Biometric sensors are becoming more common for young children, socializing them to biometric sensing and ambient interaction. McStay and Rosner (2020) write, Facial recognition is appearing in toys, televisions and mobile phones, biometric capture is occurring in video games, and there are limited appearances of facial expression reading in children’s products… . However, given the relative ease of eventually connecting a device’s camera to a cloud-based emotional analytics service, combined with cheap, ubiquitous internet connectivity and the general trends of such technologies, we are convinced that it is only a matter of time until they are incorporated into child-directed products. (p. 5)

All of these sensing technologies, and facial recognition in particular, reflect the drive toward establishing interpersonal connections with children. The final comment on incorporating emotional analysis of children through facial expression reading predicts further algorithmic immersion.


The act of recognizing a person is a vital human quality. Danaher (2018) writes, “classically, following the work of Alan Turing, human-likeness was the operative standard in definitions of AI. A system could only be held to be intelligent if it could think or act like a human with respect to one or more tasks.” He argues that robot friendships are not only reasonable, they can fulfill important social roles (Danaher, 2019). Facial recognition by robots is deployed using robust algorithms in dynamic environments. In one study of robots tested in classrooms, “evaluation of the robot’s performance is based on its ability to detect and identify humans in those settings, its ability to express emotions, and its ability to interact naturally with humans” (Ruiz-del-Solar et al., 2013, p. 24). Chinese company UBTech’s Walker, a “walking” house robot, has extensive physical capabilities, but the next major step is to advance its facial recognition in order to bolster its ability to communicate. Fabric has archived a large collection of items on emotions and robots, with several robot inventions marketed to children. We point to these trends because both generations form a large portion of internet participants: “one-in-three of all global internet users today is below the age of sixteen” (Berman & Albright, 2017; see also Livingstone et al., 2015). The resources that they will draw upon to communicate, establish empathy, and collaborate will be far more embodied, or biometrically driven, than before.

3.1.4 Algorithmic AI

Often algorithms and algorithmic systems are deemed synonymous with machine learning algorithms, which are significantly more advanced. Machine learning is an AI process “that uses algorithms and statistical models to allow computers to make decisions without having to explicitly program it to perform the task” (WIPO, 2019, p. 146). As discussed further in Chap. 4, the intent is to automate decision-making for humans or nonhuman actors. Machine learning algorithms “build a model on sample data used as training data in order to identify and extract patterns from data, and therefore acquire their own knowledge” (WIPO, 2019, p. 306). Following Bucher (2018), Willson (2020) notes that they are not singular; they operate in combination with other algorithms and models in networked assemblages: “Algorithms are frequently dynamic and iterative, being refined and modified by programmers, with machine learning and artificial intelligence (AI) capacities increasingly being incorporated” (p. 353). Reflecting this momentum, artificial intelligence research increased over 300% from 1998 to 2018, and AI algorithms are becoming faster and cheaper to train (Vincent, 2019). To add further nuance, Sect. 5.3.2 in Chap. 5 draws on overviews from Powers and Cardello (2018) on machine learning methods to address specific user experience problems. According to Greengard (2019), AI’s many components include machine learning (predictive analytics and deep learning), speech (text to speech, speech to text), vision (image recognition, machine vision), language processing (classification, translation, data extraction), expert systems, planning and organization, and robotics.
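The WIPO description above, in which rules are derived from training data rather than programmed explicitly, can be illustrated with a deliberately tiny sketch. All data, names, and the message-length feature are invented; real systems use far richer statistical models:

```python
# A toy "machine learning" step: instead of hand-coding a classification
# rule, the program derives a decision threshold from labeled examples.
# Data and the message-length feature are invented for illustration.

def train_threshold_classifier(samples):
    """Learn a threshold from (feature_value, label) pairs; the 'model'
    is simply the midpoint between the mean of each class."""
    positives = [v for v, label in samples if label == 1]
    negatives = [v for v, label in samples if label == 0]
    mean_pos = sum(positives) / len(positives)
    mean_neg = sum(negatives) / len(negatives)
    return (mean_pos + mean_neg) / 2

def predict(model, value):
    """Classify a new, unseen feature value with the learned threshold."""
    return 1 if value >= model else 0

# Message lengths labeled 1 (promotional) or 0 (personal) -- training data.
training = [(120, 1), (150, 1), (30, 0), (45, 0)]
model = train_threshold_classifier(training)
print(predict(model, 140))  # the rule was extracted from data, not written by hand
```

The pattern here, fit on sample data and then generalize to new inputs, is the same move WIPO’s definition describes, only at a vastly smaller scale than the deep learning systems discussed in this chapter.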

3.1 How Will Algorithms and AI Inform Writing?


In this early phase of AI deployments, it has become widely evident that machine learning algorithms can be biased and can produce biased professional practices. Algorithmic bias is “the systemic under- or over-prediction of probabilities for a specific population [that] creeps into AI systems in a myriad of ways” (Fjeld & Nagy, 2020, p. 47). Scholars, governments, and citizen advocates have identified the need for transparency in proprietary “black box” decision-making systems. Search engine algorithms reinforce racist and sexist stereotypes (Noble, 2018). The TPC community will often be on the front line, tasked with recognizing, reporting, and/or ameliorating bias in teams with developers, AI nonhuman actors, and an array of public or private entities that are starting to classify these types of issues. This will form another analytical skill in the collaborative relationship between writers, adjacent professionals, and AI actors. Willson (2020) suggests that “some of the anxieties and desires gathering around algorithms [and AI] concern questions of agency and control, linked closely with whether the locus of an algorithm’s agency is perceived as human or within the technology itself” (p. 252). We agree that “a more productive approach to questions of agency in this context may be found by thinking through the delegation of our ways of being in the world to algorithms, how they manifest and what this means for our relations with one another (including our machines) in the everyday” (p. 253). With the focus of this book, part of the “everyday” involves considering literacies associated with understanding how we delegate ways of teaching and learning to algorithms, how such delegation manifests itself, and what this means as we plan and prepare for writing futures.

Prompt
Augmented reality (AR) integrates digital information with a person’s environment in real time.
Computer intelligence—i.e., artificial intelligence—is essential for AR to operate; it allows objects to be labeled and identified from the user’s point of view. Scholarship on AR from fields including human–computer interaction, human factors, usability, and AR software design has largely concentrated on software and hardware usability rather than on content design strategy (Duin et al., 2019) or, in the case of this chapter, algorithmic writing futures. Fabric has monitored AR’s evolution over the years across numerous hardware devices (e.g., smartphones, tablets, eye displays, holographic 3D head-mounted displays). It has over 700 artifacts on AR, including a recent collection focused on human-centered design for augmented reality. Each of us has experimented with Google Glass (Duin et al., 2016) and Focals smart glasses, beginning with high hopes that ended in disappointment. We recognize that AR and AI “work hand in hand to enable the glasses to flourish and maximize their performance” (Sharma & Bathla, 2019, p. 149) and that their efficacy will improve based on deep learning: “Companies are gathering information every day, since deep learning takes a massive amount of information and as we gather more information, the algorithms need to be modified constantly… . Until they gather data [the] size of hundreds or even thousands of petabytes, AR will keep evolving before reaching its peak” (Sharma & Bathla, 2019, p. 150).


Focusing on user experience, Evans and Koepfler (2017) define augmented reality as “a human experience with the world that is enhanced by technology … [in which] AR should be less about technology doing things to people and more about people engaging with the world around them, and having that world enhanced by technology where and when appropriate” (n.p.). In terms of writing enhancement, Augmedix uses AI and remote scribes to improve clinician documentation, claiming to save 2 to 3 hours of documentation time per day for its current 15 health system customers. Users are outfitted with either a smartphone or Google Glass device, which transmits patient–clinician interactions to remote medical scribes who complete the documentation in real time (Zhou, 2020). AR’s role in writing will continue to grow as AR apps are more frequently used as interactive writing prompts in classrooms (Donally, 2020; Lourence, 2020; Wang, 2017); as tools for creating concepts, storyboards, and in-game dialogue (Preis for Augmental, 2020); and to address business solutions (Rai, 2019).

3.2 What AI Literacies Should We Cultivate for Algorithmic Writing Futures?

Jordan Harrod (TedX Talks, 2020), in her TEDxBeaconStreet talk, defines AI literacy as “a person’s ability to confidently understand and interact with AI-based systems.” Cultivating AI literacy for Harrod is less about understanding the code and more about understanding how popular algorithms affect our daily lives, knowing the difference between machine learning and AI, and determining what is hype versus reality. She emphasizes the importance of considering how the algorithms developed will impact the communities within which they are used. AI Literacy is a general keyword in Fabric that is cross-referenced with an extensive list of related themes, which signifies its reach. AI modifies so many everyday practices that AI literacy needs to be positioned as a life skill. Given her call to recognize AI literacy as a fundamental ability, we include Harrod’s video in the Race, Algorithmic Bias, and Artificial Intelligence: Expert Talks by Researchers and Artists collection in Fabric to mark its importance. One of the earlier tools in writing studies for cultivating algorithmic literacy is DocuScope, a corpus tool for linguistic analysis. We share this work given its early timestamp, as Kaufer and Ishizaki began the DocuScope project in 1998 at Carnegie Mellon University. This computer-aided rhetorical analysis tool consists of string-matcher software that contains a library of 49.1 million linguistic strings of English classified into 198 language action types organized into 33 dimensions. As the inventors note, “DocuScope is a corpus tool, in the family of applications of a search function, a concordance program, a dictionary pattern matcher, or a mathematically complex machine learning algorithm” (Kaufer & Ishizaki, 2012, n.p.).


When a text is analyzed using DocuScope, its pattern matcher codes linguistic patterns across a corpus of interest, categorizing the text on the basis of patterns found in it. DocuScope can also provide a visualization of the text to indicate how similar it is to a prototypical text in a given genre, indicating the linguistic forms and categories that are present or absent as compared to the prototype. Analysts can customize the DocuScope dictionary by using only subsets of it or by creating a completely new dictionary, thus allowing researchers to explore and create new domain-specific dictionaries. Today, “the entire DocuScope library covers 70% of most English texts and has been used to supplement methods in discourse analysis, rhetorical criticism, and author attribution” (Taguchi et al., 2017, p. 361). As one implementation example, Taguchi et al. used DocuScope to analyze two corpora of peer commentary compiled in two English writing classes, one consisting of graduate students (expert group) at a private university in the US, and the other consisting of freshmen (novice group) at a university in Hong Kong. Class instructors provided no specific directions for peer review comments; the US class of 11 students yielded 21,485 words, and the Hong Kong class of 13 yielded 29,900 words. In both cases, peer comments were collected using Classroom Salon, a social networking application developed by Carnegie Mellon University. Using DocuScope, they found a significant difference between these two corpora of peer review comments, with the expert group being significantly higher on dimensions of character, personal register, and personal relations, and the novice group comments being more oriented toward the public and academic registers. Using DocuScope, researchers could identify a range of linguistic dimensions that were significantly different between the two corpora, indicating how peer review was practiced differently in these two contexts. 
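The dictionary-based pattern matching at the core of a tool like DocuScope can be sketched in miniature. This hypothetical example (an invented four-entry dictionary, versus the 49.1 million strings noted above) shows how matched strings map to language action types and yield a profile of a text:

```python
# Miniature dictionary pattern matcher in the spirit of DocuScope.
# The dictionary entries and category names below are invented.

DICTIONARY = {
    "i think": "FirstPerson",
    "you": "DirectAddress",
    "therefore": "Reasoning",
    "was seen": "PastEvent",
}

def profile(text):
    """Count matches per language action type across a text."""
    text = text.lower()
    counts = {}
    for pattern, category in DICTIONARY.items():
        hits = text.count(pattern)
        if hits:
            counts[category] = counts.get(category, 0) + hits
    return counts

print(profile("I think you will agree; therefore you should revise."))
```

Comparing such profiles across two corpora, as Taguchi et al. did with expert and novice peer comments, is then a matter of aggregating the counts per group and testing the dimensions for significant differences.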
Through such corpus analysis, students can better understand the range of linguistic dimensions that influence writing across multiple genres and gain literacy in the underlying dimensions of language. These dimensions are largely examined after a text is created and as a means to inform future writing. In contrast, how might we cultivate critical understanding of algorithms as an audience? In “Writing for Robots,” Killoran (2010) examines search engine optimization (SEO)—the well-known practice for increasing the quantity and quality of traffic to a website—for technical communication business websites. Killoran’s surveys and interviews with 240 business leaders, along with analyses of their sites, indicate how businesses orient their sites both to a human audience of potential clients and to an audience of search engines. Through basic SEO study, students can better understand current audiences visiting their sites and then work to adjust sites to meet client and customer needs. More closely related to algorithmic writing futures, Gallagher (2017) examines how algorithms become part of a writer’s audience during the actual writing and producing of web content. In this case, cultivating literacy for algorithmic writing futures includes work to design and situate content in ways that algorithms will pick up and prioritize. He emphasizes the importance of teaching students to understand and identify how “nonhuman factors (e.g., changes in code, algorithmic variables, changes in interfaces, and software advances) and human factors (e.g., who writes


the code and algorithms, designs interfaces, and decides on software updates and when to implement these updates)” (p. 26) influence writing. Therefore, algorithmic audiences pertain both to the people who will read the content and to the corporate search engine and social media algorithms. Gallagher instructs students to consider how readers will find their content and the steps needed to increase its circulation. He follows a three-step process:

1. Students “identify and investigate the values of an algorithm’s designers, programmers, and architects” (p. 31) by way of locating and reading terms of service agreements, white papers, and guides such as Google’s Search Engine Optimization Start Guide (SEOSG) to design content so that PageRank will place their content higher in search results;
2. Students produce, curate, manage, and monitor metadata for the web texts they create (i.e., they sign up for a program that tracks their website’s analytics) to develop metadata awareness; and
3. Students “imagine how to write for audiences across geographical and geopolitical cultures in the present and future” (p. 33), rewriting content to vary results if the algorithms do not display content effectively.
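Reasoning about what an algorithm like PageRank values, as students are asked to do when reading SEO guides, becomes easier when its core intuition is inspectable. A toy implementation follows (the four-page link graph is invented, and production search ranking combines many more signals than link analysis):

```python
# Toy PageRank iteration over a hypothetical four-page link graph.
# Real search ranking combines many signals; this shows only the
# link-analysis core that SEO guides ask writers to consider.

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing)  # rank flows along links
            for target in outgoing:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

# A, B, and C all link to D; D links back only to A.
links = {"A": ["D"], "B": ["D"], "C": ["D"], "D": ["A"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # the most-linked page ranks highest
```

Pages earn rank from the rank of the pages linking to them, which is why writers are coached to attract credible inbound links rather than simply repeat keywords.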

As part of academic and/or professional work, nearly everyone reading this book is simultaneously a content creator and content user across multiple platforms, and throughout such use, these platforms compile and analyze our data. Given our positioning of this book for primary use in higher education, a critical literacy is the understanding of learning analytics and learning management systems.

3.2.1 Learning Analytics and Learning Management Systems

Big data consists of “extremely large data sets that may be analyzed computationally to reveal patterns, trends, and associations, especially relating to human behavior and interactions” (Oxford Dictionary). Discussion of big data in organizations most often includes business intelligence work to analyze and present actionable information that informs business decisions. In academia, business intelligence is most often used to identify ways for human resources, finance, and student services units to become more efficient, effective, and accountable. CIOs Maas and Gower (2017) stress the importance of positioning academia to secure value from data. Organizations committed to remaining relevant are those that identify baselines and benchmarks, determine trend lines, and commit to pursuing a deep understanding of what matters and what makes a difference. Using data to drive decision-making behavior, organizations identify patterns and take “actionable intelligence” to enhance student success and institutional achievement. Academic and learning analytics are often bundled together with business intelligence. Academic analytics involves the application of business intelligence tools and strategies as a means to guide decision-making processes related to teaching


and learning. Well over a decade ago, Campbell et al. (2007) introduced the application of academic analytics to teaching and learning and instructor use of learning management systems, emphasizing that “with the increased concern for accountability, academic analytics has the potential to create actionable intelligence to improve teaching, learning, and student success. Traditionally academic systems— such as course management systems, student response systems, and similar tools— have generated a wide array of data that may relate to student effort and success” (p. 44). From academic analytics, statistical analysis is used to develop predictive models as a means to locate at-risk students and then provide interventions to increase student success. From interviews with 40 leading institutions that have developed analytics applications in support of student success, Norris and Bear (2013) emphasized that “optimizing student success is the ‘killer app’ for analytics in higher education. Intelligent investments in optimizing student success garner wide support and have a strong, justifiable return on investment (ROI). Moreover, improving performance, productivity, and institutional effectiveness are the new gold standards for institutional leadership. Enhanced analytics is critical to both optimizing student success and achieving institutional effectiveness” (p. 5). With increased focus on learning, the language of academic analytics has evolved to reflect student-centered data collection. As distinct from academic analytics, numerous scholars have defined learning analytics as “the measurement, collection, analysis, and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs” (as reported by Strang, 2016, p. 276). 
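The predictive modeling described above can be reduced to a sketch. The following hypothetical example uses invented field names, weights, and threshold; in a real system these would be estimated from historical data rather than hard-coded:

```python
# Hypothetical at-risk flagging of the kind academic analytics performs.
# Field names, weights, and the threshold are invented for illustration;
# in practice the weights would be fit statistically to past cohorts.

def risk_score(record):
    """Higher score = more engagement; lower score = higher risk."""
    return (0.5 * record["logins_per_week"]
            + 0.3 * record["assignments_submitted"]
            + 0.2 * record["forum_posts"])

def flag_at_risk(records, threshold=3.0):
    """Return students whose engagement score falls below the threshold."""
    return [r["student"] for r in records if risk_score(r) < threshold]

roster = [
    {"student": "s1", "logins_per_week": 6, "assignments_submitted": 5, "forum_posts": 2},
    {"student": "s2", "logins_per_week": 1, "assignments_submitted": 1, "forum_posts": 0},
]
print(flag_at_risk(roster))  # low-engagement students are flagged for intervention
```

Even this toy version surfaces the governance questions this chapter raises: who chose the proxies, who sees the flags, and whether login counts are a fair stand-in for learning.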
As reported by Duin and Tham (2020), Wilson and colleagues (2017), in their exceptional analysis of the challenges and limitations of learning analytics, emphasized concerns surrounding “unproblematized Big Data discourse” (p. 991). They highlighted four issues: “the inconclusiveness of empirical studies; somewhat simplistic conceptions of learning analytics data and methods as part of some generic species, Big Data; choices about data, algorithms and interpretation; and issues around disciplinary and finer-grained differences in pedagogical and learning approaches” (p. 993). They further reiterated that learning analytics implementation at an institution-wide level stems from “its technical nature and from a need to justify sufficient investment in their development” (p. 991). Citing a broad Australian study, they noted that instructors are largely unaware of the initiatives underway at their own institutions and rarely discuss learning analytics. Wilson and colleagues intended “to raise questions about whether using digital traces as proxies for learning can be expected to provide accurate, reliable metrics for student performance” (p. 992). Learning management systems (LMS) make it possible for colleges and universities to collect, store, and mine data for business intelligence and for academic and learning analytics. With such technology, academic institutions have massive storehouses of data as part of their research, teaching, and engagement efforts; the analytical capabilities made possible with technology are part of the seemingly objective, structural properties of institutions. However, across these massive endeavors, how are administrators treating these analytics? Whose data is it? What knowledge


of and access to these data do instructors and students have? Are scholars asking the right questions about privacy and surveillance? And, most important for this chapter, given the current state of analytics, what are the implications for algorithmic writing futures? According to Meticulous Research (2020), the LMS market is expected to grow at a rate of 20.5% a year to reach $28.1 billion by 2025. Each term, instructors use learning (or course) management systems for uploading readings and assignments, reviewing student participation and discussion, and providing comments and assessment of student work (or any combination of these purposes). Since the 1990s, writing and technical communication instructors have used a number of these systems (see Fig. 3.1 from Hill, 2020), with the Canvas LMS growing at the fastest rate worldwide. Canvas by Instructure is the most widely used LMS provider in the US and Canada and is third only to Google and Microsoft in terms of the amount of student data amassed (Menard, 2019). Its integration with Cisco Networking Academy continues its global reach, with increasing focus on markets in Central and South America, Africa, Europe, and Asia. According to its website, Instructure “has connected millions of instructors and learners at more than 4,000 educational institutions and corporations throughout the world” and is “the world’s fastest growing learning platform for K-12 and higher ed” (as reported by Marachi & Quill, 2020). This feat has been made possible through Instructure’s reliance on Amazon Web Services (AWS) for data

Fig. 3.1 LMS system use from 1997 to 2019 (Hill, 2020) (Image permission: Creative Commons Attribution 4.0 International License)


storage, scalability, and analytic tools. Marachi and Quill, in their study of longitudinal datafication through learning management systems, note how such partnerships with AWS through a program called EdStart result in a global reach with over 316 educational technology companies around the world: “While promises may be made of compliance with COPPA/FERPA data privacy laws, once data are collected and/or combined across international borders, companies would not necessarily be required to abide by laws from the host country where data were gathered. These cross-border data flows could theoretically be stored into international servers capable of transfer, and/or sale without oversight or scrutiny” (p. 423). The widespread adoption of AWS analytic tools is described as a key feature in promoting “the absence of friction” amid the interoperability of systems. In an essay on “Friction-Free Racism,” Gilliard (2018) refers to platforms that ascribe identity through data: The ability to define one’s self and tell one’s own stories is central to being human and how one relates to others; platforms’ ascribing identity through data undermines both. These code-derived identities in turn complement Silicon Valley’s pursuit of ‘friction-free’ interactions, interfaces, and applications in which a user doesn’t have to talk to people, listen to them, engage with them, or even see them. From this point of view, personal interactions are not vital but inherently messy, and presupposed difference (in terms of race, class, and ethnicity) is held responsible. Platforms then promise to manage the ‘messiness’ of relationships by reducing them to transactions. The apps and interfaces create an environment where interactions can happen without people having to make any effort to understand or know each other. (n.p.)

Marachi and Quill write that “it is unclear what access is given to whom to engage in the analytics described” (p. 423). Duin and Tham (2020), in their longitudinal analysis of the Canvas LMS, note large discrepancies across student, instructor, and administrator access to analytics. At the University of Minnesota, students have access to a Grades page and potential “what if” analyses in terms of the impact of future assignments on overall grades. In addition to grade information, instructors have an overview of student participation that includes summaries of page views, participation, and status of assignment submissions. In stark contrast is administrator access that also includes all student page views, enrollment activity, student competency based on submissions and overall activity in courses, and complete detailed logs of activity. Lynch (2017) details how this LMS data is then further aggregated and potentially used by third-party service providers: Educational institutions maintain online records of class data and of student performance. Third-party service providers retain records of tool usage that include detailed scores and personal profiles along with clickstream data recording students’ tutorial actions, written essays or other problem solutions, and even requests for help. Platform providers … can even integrate these records across tools and link them to external profiles to provide a detailed picture of what students do and how they do it. (p. 250)
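The “clickstream data” Lynch describes typically takes the form of timestamped event records. The following is a hypothetical example, with all field names and values invented, of what a single stored event might contain:

```python
# One invented clickstream event of the kind an LMS or third-party tool
# might retain; multiply by every click, every student, every term.
import json

event = {
    "timestamp": "2021-03-02T14:07:31Z",
    "user_id": "a1b2c3",                     # pseudonymous, but linkable to profiles
    "course_id": "WRIT-1001",
    "verb": "viewed",
    "object": "page:assignment-2-rubric",
    "session_seconds": 214,
    "client": {"ip": "198.51.100.7", "agent": "Firefox/85.0"},
}
print(json.dumps(event, indent=2))
```

Each field looks innocuous alone; aggregated across tools and linked to external profiles, such records yield the “detailed picture of what students do and how they do it” that Lynch describes.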

And Watters (2020), in her essay on “Building Anti-Surveillance Ed-Tech,” notes that these digital technology companies emphasize that they are handing over decision-making to algorithms, insisting that AI prevents abuse and disinformation. She describes current proctoring software as among the worst gatherers of data:


These tools gather and analyze far more data than just a student’s responses on an exam. They require a student show photo identification to their laptop camera before the test begins. Depending on what kind of ID they use, the software gathers data like name, signature, address, phone number, driver’s license number, passport number, along with any other personal data on the ID. That might include citizenship status, national origin, or military status. The software also gathers physical characteristics or descriptive data including age, race, hair color, height, weight, gender, or gender expression. It then matches that data to the student’s “biometric faceprint” captured by the laptop camera. Some of these products also capture a student’s keystrokes and keystroke patterns. Some ask for the student to hand over the password to their machine. Some track location data, pinpointing where the student is working. They capture audio and video from the session—the background sounds and scenery from a student’s home. Some ask for a tour of the student’s room to make sure there aren’t “suspicious items” on the walls or nearby. The proctoring software then uses this data to monitor a student’s behavior during the exam and to identify patterns that it infers as cheating—if their eyes stray from the screen too long, for example … the algorithm decides who [is] suspicious, what is suspicious.

The technical ease of tracking and storing data, along with the popularity of big data intelligence and academic and learning analytics, is clearly growing faster than programs can respond in terms of identifying and understanding the exact roles such systems play in pedagogy. Algorithmic writing futures are simultaneously exhilarating and terrifying. The potential to personalize writing direction clashes with “the creation of a new, highly surveilled environment of competitive individualism” (Marachi & Quill, p. 429), and “the security afforded by objectivized status information is purchased at the cost of intensified status competition” (Mau, 2019, p. 4). Marachi and Quill conclude that “the individual is not free to choose but instead compelled to choose within highly circumscribed circumstances of being a rational autonomous agent” (p. 429). In terms of cultivating literacy, Figaredo (2020) points out the following:

The data is not the main element that determines how the algorithms behave. System design and, especially, human decisions about how to combine data sets are fundamental to understanding how an algorithm uses data… . The decisions about which data to obtain or how to combine them do not correspond to the algorithm, but to the people in charge of modelling the information and designing the automatic processes that will later be executed by the algorithm. (n.p.)

As such, it is critical for instructors to “know the rudiments behind the technologies employed … [and] to have the ability to adapt the system to the specific learning practices.” Figaredo emphasizes the importance of understanding machine learning pipelines in educational contexts as well as the value of metadata: “Designing the model, training the model, and testing and tagging the data are all human tasks.” He provides a set of eight dimensions for educational algorithms as a means to develop understanding and incorporate a pedagogical approach into the practice of algorithm design. We include one of his guiding questions for each of these dimensions:


• Accountability: Are interested audiences informed about the algorithmic decision-making?
• Biases: Is there a system for social/automatic monitoring of bias?
• Data provenance: Is the data properly tagged?
• Explainability: What part of the system can be explained to users and stakeholders?
• Fairness: Is there control of users who may be favored over the disadvantaged?
• Harmful content: Is there control of false identities?
• Pedagogical approach: What is the educational theory behind the algorithmic decision-making scheme?
• Privacy: Have privacy gradients been defined?

Kitto et al. (2020) pose similar sets of questions as part of a learning analytics study spearheaded by Queensland University of Technology in which they investigate the potential of “Enabling Connected Learning via Open Source Analytics in ‘the wild,’” with the goal of identifying an LMS solution that allows educational innovators to recognize the need for “quality, privacy, ethics and data control” while using multiple platforms and systems to provide both “rich and authentic learning experiences” and learning analytics that uses “interoperable data that is ethically collected and securely stored” (p. v). Kitto and colleagues specifically address the restriction to “a single institutionally endorsed tool such as the LMS [that] encourages a binary of compliance or non-compliance” (p. v), offering concrete guidance on educational data interoperability. A key deliverable from this project is their Connected Learning Analytics (CLA) toolkit, available at https://github.com/kirstykitto/CLAtoolkit (Fig. 3.2 presents a schematic of the toolkit design). Note its design to respect student privacy and data control through scraping; furthermore, students have control over the social media

Fig. 3.2 A schematic of the CLA toolkit design (Image permission: Creative Commons Attribution-ShareAlike 4.0 International License)


accounts they link, and the system “only collects data for specified activities in those social media environments” (p. 8). Most importantly, “this tool provides students with access to their learning data, thereby helping them to explore the data they generate in social media environments and as a consequence, promotes awareness of, and improves data literacy” (p. 8).

Ultimately, McKee and Porter (2020) ask, “What do humans need to know and what do machines need to know to write to and for each other—and, importantly, what can’t they know?” They emphasize the importance of rhetorical context for AI writing, noting that even with “seemingly unlimited data points, many AI writing systems are built on an information transfer model of communication that assumes text production is a simple matter of converting raw data into sentences and paragraphs” (p. 110), a model that “obscures the critical role of audience and context and excludes ethics as an element of textual production… . The epistemological assumption of this model is that knowledge and meaning are fully contained in the data corpus, that the knowledge can be encapsulated in a verbal message, and then the message can be delivered to a largely uninformed and/or ignorant audience. Meaning is in the words” (p. 111). In contrast, a social or rhetorical model begins with audience, purpose, and ethical understanding of interactions. “When humans and AI systems interact, miscommunication occurs and ethical issues arise from lack of understanding about [social and rhetorical] context” (p. 113). McKee and Porter offer two ethical principles to guide the design of AI writing systems. We include these here because we see them as literacies critical for preparing and planning for algorithmic writing futures:

• An ethic of transparency: humans must know the rhetorical context and whether they are interacting with an AI agent—whether in mobile text, social media, or other communication (p. 113); and
• An ethic of critical data awareness: a methodological reflexivity about rhetorical context and omissions in the data that need to be provided by a human agent or accounted for in machine learning (p. 110).

In contrast to seeing human–AI writing futures as an age of “human-machine symbiosis requiring collaborative intelligence” (p. 115), McKee and Porter identify the “operative metaphor” for this relationship as “the centaur—half person, half horse… . The machine/body provides power and speed, the human/head provides direction, purpose, and most importantly, ethical guidance” (p. 115). Instead of delegation and observation, we need collaborative, civic engagement. We pick up these themes and delve further into explainability and transparency for AI in Chap. 4 on autonomous writing futures.

Prompt
Technological progress and social enthusiasm for analytics and algorithmic writing futures will continue to outpace concerns unless writing and technical communication scholars work to understand the risks we are taking and become more intentional about algorithmic use. We must cultivate understanding of algorithms and AI, what we delegate to them, and how they manifest in our teaching and learning.
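One way to cultivate the understanding of algorithms called for in this chapter, and to answer Figaredo’s earlier question about monitoring bias, is to make bias measurable. A minimal, hypothetical sketch follows (the predictions are invented, and demographic parity is only one of several competing fairness metrics):

```python
# Demographic parity check: does a model predict "positive" at very
# different rates for two groups? Prediction lists below are invented.

def positive_rate(predictions):
    """Fraction of cases the model labeled positive."""
    return sum(predictions) / len(predictions)

def parity_gap(group_a, group_b):
    """Absolute gap in positive-prediction rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# 1 = model predicted "qualified", 0 = "not qualified" (invented outputs)
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% positive
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # 25% positive
print(parity_gap(group_a, group_b))  # a large gap warrants investigation
```

A check like this does not prove discrimination, but it turns the “systemic under- or over-prediction” in Fjeld and Nagy’s definition into a number a writing team can ask its developers about.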


As one starting point, Selber (2020) encourages engagement with academic IT contexts as a way to analyze how an institution approaches information technologies and to increase institutional literacies. He discusses six ways to intervene in academic IT work: maintaining awareness, using systems and services, mediating for audiences, participating as user advocates, working as designers, and partnering as researchers. As another starting point, consider how Ipperciel (2020) worked with students at York University in Toronto, Canada, to create an AI-powered virtual assistant. Following design thinking methodology, students envisioned, designed, prototyped, and evaluated the assistant. “From its inception, this Student Virtual Assistant (SVA) project was conceived as a student-centered project” (p. 4), making it an exemplar of how to cultivate understanding of algorithms and AI.

3.3 How Might AI Help to Recognize, Ameliorate, and Address Global Civic Challenges?

One challenge is that there are so many different interpretations of civic value systems. If professional and technical writers and computer scientists are to serve society by interpreting AI and its impact, they need to understand how “to identify societal values” (Dignum, 2019, p. 62). Following Selber’s (2020) call to “mediate for audiences,” we propose another step to intervene in algorithmic development for civic engagement.

Governments, private companies, and civic advocacy groups are all publishing principles documents to inform the emergence of AI, each promoting certain values from its own sector-based point of view. These three stakeholder groups interpret values differently, making it difficult to identify and apply principles appropriately. In a simplified scenario, working toward the principle of privacy when designing algorithms for government may involve legislation and regulations to govern it. In industry, by contrast, companies typically define the limits of privacy through terms of service, the legal agreements that consumers consent to when signing up for digital services such as Instagram. Civic advocacy groups may not agree with either governments or industry on how to manage data privacy and its deployment through algorithms. They may be concerned with a specific issue, such as climate change or the plight of migrants, leading them to speak and write with a specific agenda on algorithmic control and data governance.

While we write about these three groups as if they are independent, they are not. The global social implications of AI are complex and entangled, and writers within these contexts are confronted with a similarly entangled array of AI implications. Drawing on our framework, civic engagement involves enabling constructive collaboration among stakeholders in order to help overcome division or discrimination. Through the book’s chapters, we draw on documents and cases from all major stakeholder categories to help support our framework. Professional and technical


writers will need to understand all three audiences, the rhetorical motives embedded in texts, and the ethical principles that help determine context in order to participate responsibly.

To help interpret differing stakeholder voices, the Berkman Klein Center at Harvard University started the Principled Artificial Intelligence Project to map ethical and human rights-based approaches to AI (Fjeld et al., 2020). It identified a consensus problem: stakeholders do not necessarily agree on how to define and apply abstract principles. The authors write, “Despite the proliferation of these ‘AI principles,’ there has been little scholarly focus on understanding these efforts either individually or as contextualized within an expanding universe of principles with discernible trends.” The project provides an analytical model to align principles while avoiding reducing, obfuscating, or erasing stakeholder viewpoints (see the report).

The central deliverable is an extensive data visualization that synthesizes 36 principles documents into eight comprehensive themes: Promotion of Human Values, Professional Responsibility, Human Control of Technology, Fairness and Non-Discrimination, Transparency and Explainability, Safety and Security, Accountability, and Privacy. It defines five actor groupings and charts how each interprets the themes. The actors include Government, Private Sector, and Civil Society, which align with the three stakeholder groups discussed above, plus two important hybrid actor categories: Inter-Governmental Organizations (e.g., the G20 and OECD) and Multistakeholders, which include people from industry, government, and civil society (e.g., the IEEE Ethically Aligned Design group). One main point of the visualization is to illustrate that actors do not always agree; some emphasize human control of technology, for example, and some ignore it.
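The kind of theme-by-actor mapping the visualization performs can be sketched as a small data structure. This is a minimal illustrative sketch only: the theme names come from the project report, but the documents listed and the coverage each is assigned are hypothetical stand-ins, not the project’s actual data.

```python
# Illustrative sketch of a theme-by-document coverage mapping, in the spirit of
# the Principled AI visualization. Document names and coverage are hypothetical.

THEMES = [
    "Promotion of Human Values", "Professional Responsibility",
    "Human Control of Technology", "Fairness and Non-Discrimination",
    "Transparency and Explainability", "Safety and Security",
    "Accountability", "Privacy",
]

# Each (hypothetical) principles document lists the themes it covers.
documents = {
    "Government doc A": {"Privacy", "Safety and Security", "Accountability"},
    "Private-sector doc B": {"Privacy", "Transparency and Explainability"},
    "Civil-society doc C": {"Human Control of Technology", "Privacy",
                            "Fairness and Non-Discrimination"},
}

def theme_coverage(docs):
    """Count, for each theme, how many documents address it, making visible
    that actors do not always agree on which principles matter."""
    return {theme: sum(theme in covered for covered in docs.values())
            for theme in THEMES}

coverage = theme_coverage(documents)
# In this hypothetical set, all three documents cover Privacy, only one covers
# Human Control of Technology, and none covers Promotion of Human Values.
```

Even this toy mapping shows the consensus problem the project identified: agreement on one theme (here, privacy) coexists with silence on others.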
This kind of specificity provides the interpretive nuance writers need in order to understand that AI values are heterogeneous, dynamically changing, and always contextualized. Those who study rhetoric can immediately recognize the visualization as an interpretive rhetorical apparatus for deriving motive through categories of audience, purpose, and context. The themes visualized in this manner foreground which actors and their representative texts are consubstantial with each other. The Beijing AI Principles (China) and the Ethics Guidelines for Trustworthy AI (Europe) both cover the principle of Safety and Security to a similar extent, while other documents emphasize other concerns. The project report explains the methods used to synthesize and assess the coverage of themes; it is also a good source for definitions that take into account the breadth of the principles documents. In Chap. 4, “Autonomous Writing Futures,” we concentrate on transparency, explainability, fairness, and nondiscrimination by drawing on the Principled Artificial Intelligence Project for guidance.

Prompt
Read Nvidia’s corporate blog post by professional technology writer Rick Merritt on AI and smart cities, “What Is a Smart City? How AI Is Going Uptown Around the Globe” (2020), which provides a snapshot of the value systems surrounding emergent smart city technology. Merritt explains, “A smart city is typically a kind of municipal Internet of Things—a network of cameras and sensors that can see, hear and even


smell. These sensors, especially video cameras, generate massive amounts of data that can serve many civic purposes like helping traffic flow smoothly.” Laced throughout the story are references to Nvidia products, like the Jetson TX2, described as “the fastest, most power-efficient embedded AI computing device.” The claims about improving cities for citizen betterment are also outlined: “Every city wants to be smart about being a great place to live. So, many embrace broad initiatives for connecting their citizens to the latest 5G and fiber optic networks, expanding digital literacy and services.” And voices from government are brought to the fore: “Seat Pleasant [Maryland] would like to be a voice for small cities in America where 80 percent have less than 10,000 residents,” said [Mayor Eugene] Grant. “Look at these cities as test beds of innovation … living labs.” The article mentions that Seat Pleasant “has also become the first U.S. city to use drones for public safety, including plans for life-saving delivery of emergency medicines.” The challenge is to ask questions about such “plans” for drones: about future trajectories built from salient claims about saving lives, and about the assumptions that are left out. What are the unintended consequences of using drones? We might ask, “Is Nvidia committed to Professional Responsibility?” The Principled Artificial Intelligence Project interprets Professional Responsibility through five defined subprinciples: “accuracy, responsible design, consideration of long-term effects, multistakeholder collaboration, and scientific integrity” (p. 57). Can Nvidia be held to account to that value before the technology is adopted by a city?

3.3.1 Writing for Ethically Aligned Design, Moving from Principles to Practice

As we work to address civic challenges, writers need to understand and be aware of platform bias and of both AI and human content moderation. We must acknowledge the scale of big data and the immense algorithmic control exercised through social media; as Gillespie (2020) warns, “the quantity, velocity, and variety of content is stratospheric; users are linked less by the bonds of community and more by recommendation algorithms and social graphs” (n.p.). Armed with such recognition, one way to address civic challenges is to participate in ethically aligned design for AI as a writer and collaborative team member.

Dignum, author of Responsible Artificial Intelligence (2019), proposes a Design for Values approach to writing algorithms for computer scientists. First, she identifies the problem: “most software development methodologies follow a development life cycle that includes the steps of analysis, design, implementation, evaluation, and maintenance.” Second, she pinpoints the solution: “a responsible approach to the design of AI systems requires the evaluation process to be continuous during the whole development process and not just a step in the development sequence” (pp. 66–67). Continuous evaluation holds development accountable to values throughout the life cycle of the algorithm. Dignum explains:

During the development of AI systems, taking a Design for Values approach means that the process needs to include activities for (i) the identification of societal values, (ii) deciding on a moral deliberation approach (e.g. through algorithms, user control or regulation), and (iii) linking values to formal system requirements and concrete functionalities. (p. 62)
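Dignum’s contrast between evaluation as one late step and continuous, values-driven evaluation can be sketched schematically. This is a hypothetical illustration, not her formalism: the stage names echo the life-cycle steps she cites, but the values, the per-stage bookkeeping, and the flagging logic are stand-ins.

```python
# Schematic sketch of continuous Design for Values evaluation: a values check
# runs at every life-cycle stage, not as a single "evaluation" step near the
# end. Values, stage records, and flagging logic are hypothetical stand-ins.

STAGES = ["analysis", "design", "implementation", "evaluation", "maintenance"]
VALUES = ["privacy", "fairness", "transparency"]  # activity (i): identified societal values

def check_stage(stage, addressed_values):
    """Activity (iii): link values to the stage's concrete outputs and flag
    any declared value this stage leaves unaddressed."""
    return [f"{stage}: '{v}' not yet addressed"
            for v in VALUES if v not in addressed_values]

def develop(addressed_by_stage):
    """Run the whole life cycle with a values check embedded in each stage."""
    concerns = []
    for stage in STAGES:
        concerns += check_stage(stage, addressed_by_stage.get(stage, set()))
    return concerns

# A hypothetical project that narrows its attention to privacy over time and
# records nothing for the later stages: the continuous check surfaces every gap.
concerns = develop({
    "analysis": {"privacy", "fairness", "transparency"},
    "design": {"privacy", "fairness"},
    "implementation": {"privacy"},
})
```

The point of the sketch is structural: accountability to values is distributed across the whole life cycle, so a value dropped at any stage is flagged there rather than discovered (or missed) in a single end-stage review.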

While some technical writers work directly with developers on system requirements or user documentation and are able to participate in a Design for Values approach alongside those who write algorithms, other professional writers can apply it to broader spheres of professional practice. In this chapter, we have discussed civic engagement and societal values for AI across many different discourses in order to promote recognition of the issues. Going another step, the Harvard Principled Artificial Intelligence Project provides an ameliorative heuristic for the “identification of societal values” that Dignum suggests. Continuous evaluation of algorithms is a means to address the problem, and it will need to be interpreted through the contexts that arise.

We also encourage the practice of co-design for values-driven methods, which engineers and computer scientists have promoted as a way to avoid unethical algorithms. “Co-design refers to any system or technology design effort that engages end-users and other relevant stakeholders in the creative process, in an ‘act of collective creativity’” (Robertson et al., 2019). Technical and professional communicators have long contributed research and methods to co-design, participatory design, and user-centered design to help include audiences, users, and individuals in ethical ways (Spinuzzi, 2005). Co-design is thus another collaborative strategy for addressing civic challenges.

Another good source is The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (2020) and its documents on Ethically Aligned Design. It takes a consensus-building approach, describing itself as “an incubation space for new standards and solutions, certifications and codes of conduct, and consensus building for ethical implementation of intelligent technologies.” In Chap. 4, we discuss techniques to address fairness and nondiscrimination more directly, recommending a Design for Values approach for automated systems.
We share the following set of resources to illustrate government and citizen advocacy efforts through the use of AI. Many include links for further information and participation.

Dutton (2018) provides an overview of national AI strategies, summarizing the key policies and goals of each. The first country to release a national AI strategy was Canada; China announced its plan to lead the world in AI theory, technology, and applications; and the EU Commission developed a timeline for AI strategy. The Deloitte Insights report on Government Trends 2020 notes that “one reason AI can work well for government is that it needs volumes of data—and governments have plenty of volume” (p. 9). It includes an excellent visual summary of national AI strategies (p. 10), emphasizing that “applied wisely, AI can be a national asset and a source of global competitive advantage” (p. 11). Other key sections include “The rise of data and AI ethics,” detailing ethical complexity in the age of big data; “Anticipatory government,” describing the use of predictive analytics in government as a means to preempt problems; “Digital citizen,” asking “what if the government could authenticate citizens—and businesses—the same way that most digital companies do their customers?”; and “Nudging for good,” emphasizing the importance of using behavioral science to improve government outcomes. An earlier


report, “Artificial intelligence for citizen services and government” by Mehr (2017), explores current and future uses of AI, detailing the types of government problems appropriate for AI: resource allocation, large datasets, shortage of experts, predictable scenarios, procedural needs, and diverse data (p. 4).

Lampell and Liu (2018), publishing in OpenGlobalRights, note how “the future of democracy is entangled with artificial intelligence” and voice their concern that AI is being used to undermine civic freedom. Nesta, an innovation foundation in the UK, works “in areas where there are big challenges facing society.” One group of researchers, Ozel et al. (2020), publishing in Futurescoping, describe “future use cases that bring together groups of people and AI to address environmental issues at a community level.” Another team, Berditchevskaia and Boeck (2020), detail an alternative to seeing AI as either savior or villain: “By starting with people first, we can introduce new technologies into our lives in a more deliberate and less disruptive way.” Their report is “aimed at innovators working in public sector and civil society organisations who … want to understand the opportunities for combining machine and collective human intelligence to address social challenges.” Sinders (2020) documents how Amsterdam is developing civic AI to address service requests from citizens, and Kirchner (2020) documents how Dublin is using AI and social media to gain insight into how citizens feel about civic issues. And Hollister (2020), publishing at the World Economic Forum, describes multiple efforts underway in which AI is helping with the COVID-19 crisis.

Prompt
In a move to shift AI away from its depiction through a dystopian lens, the nonprofit XPRIZE Foundation designs and operates competitions that award millions to create technology for a more sustainable world. The $5M AI XPRIZE “challenges teams to demonstrate how humans can work with AI to tackle global challenges,” aiming to accelerate AI adoption and “audacious demonstrations of the technology that are truly scalable to solve societal grand challenges” (AI to Solve Global Issues, XPRIZE, 2020). As of August 2020, there were 10 semifinalists from six countries. These teams proposed the following AI-based projects:

• Clinical decision support in mental healthcare, powered by AI
• Robotics for a greener tomorrow (replacing waste stations with TrashBots)
• Deep Drug: The power of AI to improve world health
• Building a frictionless future: An adaptive identity solution for access without compromise
• Emprise, to serve millions of students in online courses using AI
• Research discovery with AI
• MachineGenes, machine-intelligent artificial pancreas
• Creating solutions for social impact, to identify and combat criminal activity
• AI-powered precision agriculture
• AI to eliminate malaria


What projects should be added to this list? How might you use the Writing Futures framework as a way to envision and propose an XPRIZE project?

To conclude, as part of the Writing Futures framework, we ask readers to attend to algorithms and artificial intelligence as means to augment, create, and navigate volumes of information. We encourage readers to cultivate ambient intelligence to coordinate the collection of data as machine intelligence complements human agency and contributes to learning. Each of us reads AI-generated text, is informed and directed by algorithms, and has the opportunity to better understand and deploy analytics and AI to recognize, ameliorate, and address civic challenges. How will we participate in ethically aligned design for AI as writers and collaborative team members? Chapter 4 delves into writing and working with autonomous agents, pushing boundaries to speculate about writing futures and future writers.

References

AI to solve global issues. (2020). XPRIZE. https://www.xprize.org/prizes/artificial-intelligence.
Astrin, A., Li, H.-B., & Kohno, R. (2009). Standardization for body area networks. IEICE Transactions on Communications, E92.B(2), 366–372.
Beer, D. (2009). Power through the algorithm? Participatory web cultures and the technological unconscious. New Media & Society, 11(6), 985–1002.
Berditchevskaia, A., & Boeck, P. (2020). The future of minds and machines: How artificial intelligence can enhance collective intelligence. Nesta. https://www.nesta.org.uk/report/future-minds-and-machines/.
Berman, G., & Albright, K. (2017). Children and the data cycle: Rights and ethics in a big data world. Office of Research—Innocenti Working Paper WP-2017-05. UNICEF Office of Research.
Bucher, T. (2013). The friendship assemblage: Investigating programmed sociality on Facebook. Television & New Media, 14(6), 479–493.
Bucher, T. (2018). If … Then: Algorithmic power and politics. Oxford University Press.
Campbell, J., DeBlois, P., & Oblinger, D. (2007). Academic analytics: A new tool for a new era. Educause Review, 42(4), 40–57. https://net.educause.edu/ir/library/pdf/ERM0742.pdf.
Cheney-Lippold, J. (2011). A new algorithmic identity: Soft biopolitics and the modulation of control. Theory, Culture & Society, 28(6), 164–181.
Christin, A. (2020). The ethnographer and the algorithm: Beyond the black box. Theory and Society. https://doi.org/10.1007/s11186-020-09411-3.
Danaher, J. (2018). Toward an ethics of AI assistants: An initial framework. Philosophy and Technology, 31, 629–653. https://doi.org/10.1007/s13347-018-0317-3.
Danaher, J. (2019). The philosophical case for robot friendship. Journal of Posthuman Studies, 3(1), 5–24. https://doi.org/10.5325/jpoststud.3.1.0005.
de Visser, E. J., Pak, R., & Shaw, T. H. (2018). From ‘automation’ to ‘autonomy’: The importance of trust repair in human–machine interaction. Ergonomics, 61(10), 1409–1427. https://doi.org/10.1080/00140139.2018.1457725.
Deloitte Center for Government Insights. (2020). Government trends 2020. https://www2.deloitte.com/content/dam/insights/us/articles/government-trends-2020/DI_Government-Trends-2020.pdf.
Dignum, V. (2019). Responsible artificial intelligence: How to develop and use AI in a responsible way. Springer.


Dimock, M. (2019, January 17). Defining generations: Where Millennials end and Generation Z begins. Pew Research Center. https://www.pewresearch.org/fact-tank/2019/01/17/where-millennials-end-and-generation-z-begins/.
Ding, H., Ranade, N., & Cata, A. (2019, October). Boundary of content ecology: Chatbots, user experience, heuristics, and pedagogy [Conference session]. 37th ACM International Conference on the Design of Communication (SIGDOC ’19), New York, NY, USA. https://doi.org/10.1145/3328020.3353931.
Domingos, P. (2015). The master algorithm: How the quest for the ultimate learning machine will remake our world. Basic Books.
Donally, J. (2020, March 13). Day 13: Narrator AR. https://www.arvrinedu.com/post/day-13-narrator-ar.
Duin, A. H., Armfield, D., & Pedersen, I. (2019). Human-centered content design in augmented reality. In G. Getto, N. Franklin, S. Ruszkiewicz, & J. Labriola (Eds.), Context is everything: Teaching content strategy (pp. 89–116). ATTW Book Series in Technical and Professional Communication. Routledge.
Duin, A. H., Moses, J., McGrath, M., & Tham, J. (2016). Wearable computing, wearable composing: New dimensions in composition pedagogy. Computers and Composition Online. http://cconlinejournal.org/wearable/.
Duin, A. H., & Tham, J. (2020). The current state of analytics: Implications for learning management system (LMS) use in writing pedagogy. Computers and Composition, 55. https://www.sciencedirect.com/science/article/pii/S8755461520300050.
Dutton, T. (2018). An overview of national AI strategies. https://medium.com/politics-ai/an-overview-of-national-ai-strategies-2a70ec6edfd.
Eatman, M. E. (2020). Unsettling vision: Seeing and feeling with machines. Computers and Composition, 57. https://dx.doi.org/10.1016/j.compcom.2020.102580.
Evans, K., & Koepfler, J. A. (2017). The UX of AR: Toward a human-centered definition of augmented reality. UX User Experience. http://uxpamagazine.org/the-ux-of-ar/.
Figaredo, D. D. (2020). Data-driven educational algorithms pedagogical framing. RIED. Revista Iberoamericana de Educación a Distancia, 23(2), 65–84. http://revistas.uned.es/index.php/ried/article/view/26470/21684.
Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center Research Publication No. 2020-1. https://ssrn.com/abstract=3518482; http://dx.doi.org/10.2139/ssrn.3518482.
Fjeld, J., & Nagy, A. (2020, January 15). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center for Internet & Society at Harvard University. https://cyber.harvard.edu/publication/2020/principled-ai.
Gallagher, J. (2017). Writing for algorithmic audiences. Computers and Composition, 45, 25–35.
Gillespie, T. (2010). The politics of ‘platforms’. New Media & Society, 12(3), 347–364. https://doi.org/10.1177/1461444809342738.
Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167–193). MIT Press.
Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.
Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 7(2). https://doi.org/10.1177/2053951720943234.
Gilliard, C. (2018, October 15). Friction-free racism: Surveillance capitalism turns a profit by making people more comfortable with discrimination. Real Life. https://reallifemag.com/friction-free-racism/.
Greengard, S. (2019, May 24). What is artificial intelligence? Datamation. https://www.datamation.com/artificial-intelligence/what-is-artificial-intelligence.html.


Hayles, N. K. (2017). Unthought: The power of the cognitive nonconscious. University of Chicago Press.
Hill, P. (2020). State of higher ed LMS market for US and Canada: Year-end 2019 edition. PhilOnEdTech. https://philonedtech.com/state-of-higher-ed-lms-market-for-us-and-canada-year-end-2019-edition/.
Hollister, M. (2020). AI can help with the COVID-19 crisis—But the right human input is key. World Economic Forum. https://www.weforum.org/agenda/2020/03/covid-19-crisis-artificial-intelligence-creativity/.
IEEE SA. (2020). The IEEE global initiative on ethics of autonomous and intelligent systems. https://standards.ieee.org/industry-connections/ec/ead-v1.html.
Ipperciel, D. (2020). Student centeredness as innovation: The creation of an AI-powered virtual assistant by and for students (p. 28). AI: Ethics and Society.
Jessica. (2020, May 20). 12 writing assistant software apps currently using artificial intelligence (AI). http://scribesyndicate.com/12-writing-assistant-software-apps-currently-using-artificial-intelligence-ai/.
Jethani, S. (2017–2018). Workplace sociality and wellbeing. Fabric of Digital Life. https://fabricofdigitallife.com/index.php/Browse/objects/facet/collection_facet/id/13.
Kang, C., & McCabe, D. (2020, July 31). Lawmakers, united in their ire, lash out at big tech’s leaders. New York Times. https://www.nytimes.com/2020/07/29/technology/big-tech-hearing-apple-amazon-facebook-google.html.
Kaufer, D., & Ishizaki, S. (2012). Docuscope. https://evaluatingdigitalscholarship.mla.hcommons.org/evaluation-workshop-2012/docuscope/.
Killoran, J. B. (2010). Writing for robots: Search engine optimization of technical communication business web sites. Technical Communication, 57(2), 161–181.
Kim, P. W. (2018). Ambient intelligence in a smart classroom for assessing students’ engagement levels. Journal of Ambient Intelligence and Humanized Computing, 10, 3847–3852.
Kirchner, L. (2020). Smart Dublin explores how AI and social media can help improve the city region. Dublin Economic Monitor. http://www.dublineconomy.ie/2020/02/06/smart-dublin-explores-how-ai-and-social-media-can-help-improve-the-city-region/.
Kitto, K., Lupton, M., Bruza, P., Mallett, D., Banks, J., Dawson, S., Gasevic, D., Shum, S. B., Pardo, A., & Siemens, G. (2020). Learning analytics beyond the LMS: Enabling connected learning via open source analytics in “the wild.” Australian Government Department of Education, Skills and Employment. https://ltr.edu.au/resources/ID14-3821_Kitto_Report_2020.pdf.
Lampell, Z., & Liu, L. (2018). How can AI amplify civic freedoms? OpenGlobalRights. https://www.openglobalrights.org/how-can-AI-amplify-civic-freedoms/.
Livingstone, S., Carr, J., & Byrne, J. (2015, October). One in three: Internet governance and children’s rights (Global Commission on Internet Governance Paper Series, No. 22).
Livni, E. (2019). The gig economy is quietly undermining a century of worker protections. Quartz. https://qz.com/1556194/the-gig-economy-is-quietly-undermining-a-century-of-worker-protections/.
Lohr, S. (2020, July 29). Lawmakers from both sides take aim at big tech executives. New York Times. https://www.nytimes.com/live/2020/07/29/technology/tech-ceos-hearing-testimony#what-ceos-said.
Lourence, A. (2020, January 14). How augmented reality can be used for writing prompts. AR Post. https://arpost.co/2020/01/14/how-augmented-reality-can-be-used-for-writing-prompts/.
Lynch, C. (2017). Who prophets from big data in education? New insights and new challenges. Theory and Research in Education, 15(3), 249–271. https://doi.org/10.1177/1477878517738448.
Maas, B., & Gower, M. (2017). Why effective analytics requires partnerships. EDUCAUSE Review, 52(3).
Marachi, R., & Quill, L. (2020). The case of Canvas: Longitudinal datafication through learning management systems. Teaching in Higher Education, 25(4), 418–434.
Mau, S. (2019). The metric society: On the quantification of the social. Polity Press.
McKee, H. A., & Porter, J. E. (2020, February 7–8). Ethics for AI writing: The importance of rhetorical context [Conference session]. AAAI/ACM Conference on AI, Ethics, and Society (AIES ’20), New York, NY, USA. https://doi.org/10.1145/3375627.3375811.


McStay, A., & Rosner, G. (2020). Emotional AI and children: Ethics, parents, governance. Emotional AI Lab: Ethics, Society, Culture. https://drive.google.com/file/d/1Iswo39rukxdtL7E84GHMAq1ykiYR-bw/view.
Mehr, H. (2017). Artificial intelligence for citizen services and government. Harvard Ash Center for Democratic Governance and Innovation. https://ash.harvard.edu/files/ash/files/artificial_intelligence_for_citizen_services.pdf.
Menard, J. (2019, May 20). EdTech companies with the most student data. ListEdTech. https://www.listedtech.com/blog/edtech-companies-with-the-most-student-data.
Merritt, R. (2020, August 6). What is a smart city? How AI is going uptown around the globe. NVIDIA. https://blogs.nvidia.com/blog/2020/08/06/what-is-a-smart-city/.
Meticulous Research. (2020, June 14). Learning management system (LMS) market worth $28.1 billion by 2025. https://www.globenewswire.com/news-release/2020/06/14/2047735/0/en/Learning-Management-System-LMS-Market-Worth-28-1-Billion-by-2025-Growing-at-a-CAGR-of-20-5-from-2019-Global-Market-Opportunity-Analysis-and-Industry-Forecasts-by-Meticulous-Researc.html.
Misal, D. (2018, September 7). What is the difference between a chatbot and virtual assistant? Analytics India Magazine. https://analyticsindiamag.com/what-is-the-difference-between-a-chatbot-and-virtual-assistant/.
Montebello, M. (2019). The ambient intelligent classroom: Beyond the indispensable educator. Springer. https://link-springer-com.ezp3.lib.umn.edu/book/10.1007%2F978-3-030-21882-9.
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
Norris, D., & Bear, L. L. (2013). Building organizational capacity for analytics. Educause. https://library.educause.edu/~/media/files/library/2013/2/pub9012-pdf.
O’Mara, M. (2020, July 30). The last days of the tech emperors? New York Times. https://www.nytimes.com/2020/07/30/opinion/sunday/tech-congress-hearings-facebook.html.
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
Ozel, B., Chang, F.-J., Burgess, O., van der Hoog, S., & Yayla, O. (2020, June 18). Civic AI: Responding to the climate crisis. Nesta. https://www.nesta.org.uk/project-updates/civic-ai-climate-crisis/.
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
Pasquale, F. A. (2017). Two narratives of platform capitalism. Philosophy & Methodology of Economics eJournal, 2017–20.
Pedersen, I. (2020). Will the body become a platform? Body networks, datafied bodies, and AI futures. In I. Pedersen & A. Iliadis (Eds.), Embodied computing: Wearables, implantables, embeddables, ingestibles (pp. 21–48). MIT Press.
Pedersen, I., & Iliadis, A. (Eds.). (2020). Embodied computing: Wearables, implantables, embeddables, ingestibles. MIT Press.
Powers, A., & Cardello, J. (2018, March 6). Applying machine learning to user research: 6 machine learning methods to yield user experience insights. https://medium.com/athenahealth-design/machine-learning-for-user-experience-research-347e4855d2a8.
Preis, K. (2020). Augmental: Augmented reality, mixed reality, and virtual reality writer services. https://fabricofdigitallife.com/index.php/Detail/objects/4819.
Rai, T. (2019, September 3). Augmented reality examples: 10 industries using AR to reshape business. ClickZ. https://www.clickz.com/augmented-reality-examples-10-industries-using-ar-to-reshape-business/214953/.
Roberge, J., & Melançon, L. (2017). Being the King Kong of algorithmic culture is a tough job after all: Google’s regimes of justification and the meanings of Glass. Convergence: The International Journal of Research into New Media Technologies, 23(3), 306–324.
Robertson, L. J., Abbas, R., Alici, G., Munoz, A., & Michael, K. (2019). Engineering-based design methodology for embedding ethics in autonomous robots. Proceedings of the IEEE, 107(3), 582–599. https://doi.org/10.1109/JPROC.2018.2889678.


Ruiz-del-Solar, J., Correa, M., Verschae, R., Bernuy, F., Loncomilla, P., Mascaró, M., Riquelme, R., & Smith, F. (2013). Bender: A general-purpose social robot with human-robot interaction capabilities. Journal of Human-Robot Interaction, 1(2), 54–75. https://doi.org/10.5898/JHRI.1.2.Ruiz-del-Solar.
Seaver, N. (2017). Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society, 4(2), 1–12. https://doi.org/10.1177/2053951717738104.
Selber, S. (2020). Institutional literacies: Engaging academic IT contexts for writing and communication. University of Chicago Press.
Sharma, N., & Bathla, R. (2019, November 21–22). Coalescing artificial intelligence with augmented reality to vitalize smart-glasses [Conference session]. 4th International Conference on Information Systems and Computer Networks (ISCON), GLA University, Mathura, UP, India.
Sinders, C. (2020). People think in problems: How Amsterdam is developing civic AI to address citizens’ service requests. https://accelerate.withgoogle.com/stories/people-think-in-problems-how-amsterdam-is-developing-civic-ai-to-address-citizens-service-requests.
Spinuzzi, C. (2005). The methodology of participatory design. Technical Communication, 52(2), 163–174.
Strang, K. D. (2016). Do the critical success factors from learning analytics predict student outcomes? Journal of Educational Technology Systems, 44(3), 273–299.
Striphas, T. (2015). Algorithmic culture. European Journal of Cultural Studies, 18(4–5), 395–412.
Taguchi, N., Kaufer, D., Gomez-Laich, P. M., & Zhao, H. (2017). A corpus linguistics analysis of on-line peer commentary. In K. Bardovi-Harlig & C. Félix-Brasdefer (Eds.), Pragmatics and language learning, Vol. 14 (pp. 357–370). National Foreign Language Resource Center, University of Hawai‘i at Mānoa.
Tan, Z. M., Aggarwal, N., Cowls, J., Morley, J., Taddeo, M., & Floridi, L. (2020, September 22). The ethical debate about the gig economy: A review and critical analysis. SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3669216.
TedX Talks. (2020, February 19). AI literacy, or why understanding AI will help you every day | Jordan Harrod | TEDxBeaconStreet [Video]. YouTube. https://www.youtube.com/watch?v=cKl9QsVv7hY.
Thompson, C. (2019). Coders: The making of a new tribe and the remaking of the world. Penguin Random House.
Vincent, J. (2019, December 12). AI R&D is booming, but general intelligence is still out of reach. The Verge. https://www.theverge.com/2019/12/12/21010671/ai-index-report-2019-machine-learning-artificial-intelligence-data-progress.
Wang, Y.-H. (2017). Exploring the effectiveness of integrating augmented reality-based materials to support writing activities. Computers & Education, 113, 162–176.
Watters, A. (2020, July 20). Building anti-surveillance ed-tech. Hack Education. http://hackeducation.com/2020/07/20/surveillance.
Willson, M. (2020). Questioning algorithms and agency: Facial biometrics in algorithmic contexts. In M. Filimowicz & V. Tzankova (Eds.), Reimagining communication: Mediation (pp. 252–266). Routledge.
Wilson, A., Watson, C., Thompson, T. L., Drew, V., & Doyle, S. (2017). Learning analytics: Challenges and limitations. Teaching in Higher Education, 22(8), 991–1007.
World Intellectual Property Organization. (2019). WIPO technology trends 2019: Artificial intelligence. https://www.wipo.int/edocs/pubdocs/en/wipo_pub_1055.pdf.
Yeung, K. (2017). Algorithmic regulation: A critical interrogation. Regulation and Governance, 12(4), 505–523. https://doi.org/10.1111/rego.12158.
Zhou, C. (2020, March 11). Augmedix uses AI and remote scribes to improve clinician documentation: Interview with founder Ian Shakil. Medgadget. https://www.medgadget.com/2020/03/augmedix-uses-ai-and-remote-scribes-to-improve-clinician-documentation-interview-with-founder-ian-shakil.html.
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Hachette Book Group.


Intertext—Recoding Relationships

by Jennifer Keating, University of Pittsburgh, and Illah Reza Nourbakhsh, Carnegie Mellon University

When we consider writing from a relational perspective, the writer and reader are key actors in the configuration of a dyad. The writer undertakes the challenge of balancing the intentional message of their utterance through the written word with ensuring its accessibility for a reading audience. Algorithmic writing, more than any recent technological advance, threatens to disrupt the structure of the relationships between writer and reading audience in dramatic ways. Algorithms directly manipulate these roles, reconfiguring the dyad into a three-part interrelationship. Algorithmic writing recodes the human writer's role in relational communication practices, just as the algorithm interrupts and modulates the flow of information that the audience will access as readers. To privilege the humans in this dyad, we must disambiguate the overloaded terms reader and writer, as algorithmic writing is often applied to computational processes as readily as to the human reading audience in a reader–writer dyad. Conceptions of authorship and interpretation are intertwined with the lexical basis of how we talk about hybrid models, where algorithms serve as tools to supplement the skill and practice of a human writer, and where algorithms also help human readers analyze and investigate the written word. In the service of careful terminology to stabilize our concerns, we will refer to computational data acquisition and conversion of that data into grammatical prose as production, and we will refer to computational analysis of language as parsing, distinguishing these objective, computer-run processes from writing and reading, respectively.
Returning to the reader–writer dyad, the prospect of algorithmic writing suggests the possibility of new levels of opacity as to the identity of the actors who are transferring information. A human reader does not know, just based upon the written word, whether the writer is co-author with a production system, nor does the human writer know if her text is being read (by a human audience) or parsed (by an algorithm), or both (Gallagher, 2017). Alterations to written work shaped by the algorithm can have immediate and longstanding influences on the practice of writing, which, in turn, recalibrate the relationships between writer and computational producer—i.e., the writer and the algorithm—and therefore reinscribe agency in regard to the roles undertaken by writer, producer, parser, and reading audience in the relational stances of writing and reading practices (Willson, 2020). Algorithmic writing inserts an opaque veil that mediates the relationship between the writer and their craft, and the audience and their reading practices. Agency might be asserted by either party, or it might be just out of reach as the limits of our awareness of the relationship between the writer and the algorithm blur. In short, algorithmic writing generates algorithmic fog in regard to authorship and meaning making as a reading audience undertakes its reading practices. The unknowable nature of this fog recodes writer–audience relationships in four distinct ways that change the features of human agency, identity, and power in writing–reading relationships.


Reconfiguring Writer Identity

Algorithms continue to become more intimate to our day-to-day decisions, becoming extensions of our own actions as chatbots and virtual assistants (Misal, 2018). But algorithmic parsing and production, as it is intimately adopted into a writer's practice, constitutes part of the writing practice (Pedersen, 2020). In this shift in practice, the writer's identity begins to reshape around the prosthetic function of algorithmic tools. This hybrid synthesis between writer and tool suggests a cyborg configuration of the human and the digital production tool. Does the writer consider this hybrid production authorship or co-authorship? Depending on the answer to such a question, the concept of authorship/co-authorship in a single writer threatens the long-held understanding of integrity in authorship and the generally understood relationship between a writer and her reading audience. The algorithmic fog in this case results in a new and not-yet-deciphered fence line in regard to the concept of authorship. This is based on the writer's integrity and practice as author, as well as the reading audience's perception of the writer's work as author or co-author in collaboration with algorithmic tools.

From Rhetoric to Algorithmic Advocacy

Algorithmic writing also changes the relative power of rhetorical argument in contrast with evidentiary reasoning. Just as a Google web search massively simplifies acquiring detailed information that was formerly difficult to obtain, so algorithmic parsing can access statistics, examples, and bodies of evidence that are otherwise virtually unobtainable. Such configuration of information affords the building of arguments through the power of data. Of course, data inherits bias and can be greatly mischaracterized, so the facility of algorithmic data advocacy does not necessarily suggest more equitable arguments or the prospect of arguments with higher levels of sophistication and persuasion.
But as the technical competence of algorithmic advocacy solutions improves, the algorithm-empowered writer will face an ever-easier time creating customized arguments suffused with data that nearly drowns out human subjectivity and exhibitions of rationality. A tempting new algorithmic "weapon" of argument might be easier to use than conventional rhetorical techniques. Such moves might also reinscribe features of argumentation as understood in specific genres. This can have repercussions both good and bad, depending entirely on the intentions of the writers who are most productive, the circulation of arguments, and the potential consequence of where such arguments are presented.

Inventing the Algorithmic Producer

As Gallagher (2017) and others have suggested, the creation of algorithmic producers recodes the writer's relationship to the audience in significant ways. Without transparent acknowledgement of authorship or co-authorship between writer and producer, the reading audience is not offered a transparent exchange in its reading practice. These exchanges are further mediated by genre, form, and other features of the writer–audience relationship. When algorithmic production is the arbiter of


human audience type and size, for instance, through search engine response engineering to maximize click-through, then the first audience of a human writer moves from the reader in a simple dyad to those chosen by algorithmic gatekeepers that decide if readers will be exposed to the work, and indeed which categories of readers will have exposure. The writer's task changes to a metacognitive challenge of manipulating ever-changing production algorithms to reach the intended audience. The algorithmic arms race, from computer to computer, can overtake the original dyad of communication entirely, leaving the eventual human reader with a text that is considerably outside of the exchange or relationship that they may believe they are engaging. Only after modulation by algorithms, and consumption by yet other algorithms, do they come in contact with the text, without transparent narration of the backdrop processes.

Reconfiguring Reader Power

As the authors have pointed out in the use of digital humanities analysis tools (e.g., parsers), not all algorithmic technologies impact the writer's practices directly or indirectly. A reading audience will have ever-improving analytic parsing tools that provide new methods for making meaning, for discerning comprehension beyond original reading and reflecting. Global comparison, statistical analysis, data-oriented evaluation—these are all newly sophisticated parsing tasks attuned to the specific capacities of an algorithmic parser in contrast to a human reading audience. It is not at all clear how these new digital-analytic tasks interrelate with the core practices of comprehension, reflection, and extension associated with a human reading audience—and therefore how a reading audience's power will erode or enlarge through new digital parsing tools that might enhance particular features of meaning.

Conclusion

Are these recoded writer–reader relationships part of a desired station we wish to achieve in society?
This depends, fundamentally, on the question of power and its effect on relationships between people, particularly in these reader–writer relationships. It is also a concern in regard to the role that these tools play as prosthetics, and how transparently the use of such enhancements is communicated to all parties in these relationships through writing and reading. As with all technology, power concentrated in the hands of those who trade information can further exacerbate societal inequity. Frequently, the wealthiest classes invent and facilitate their own use of algorithmic power for corporate and governmental gain, reducing the relative power readers have when, for instance, information served to them is algorithmically manipulated to elicit a specific goal, affect, or belief. But there can be a counter-narrative. If algorithmic power is distributed to marginalized populations—if in fact the subaltern can use algorithmic production to co-author a body of convincing evidence documenting their injustice—then algorithms can serve a corrective function in argumentative exchanges and their influence on power negotiations of many kinds, whether political, social, cultural, or economic. Such enhancements in communication practices can reinscribe power relationships by providing agency precisely where society has historically compounded inequities of


vulnerable populations and groups. Civic advocacy, thus served, can further buttress our strongest democratic ideals if that is indeed a goal. In the end, the question is: Who among us will the algorithmically supercharged writers be? This is a question we must actively answer if we are to have a chance at ensuring that this technology does not further drive us into ever-more-imbalanced societal configurations.

References

Gallagher, J. (2017). Writing for algorithmic audiences. Computers and Composition, 45, 25–35.
Misal, D. (2018, September 7). What is the difference between a chatbot and virtual assistant. Analytics India Magazine. https://analyticsindiamag.com/what-is-the-difference-between-a-chatbot-and-virtual-assistant/.
Pedersen, I. (2020). Will the body become a platform? Body networks, datafied bodies, and AI futures. In I. Pedersen & A. Iliadis (Eds.), Embodied computing: Wearables, implantables, embeddables, ingestibles (pp. 21–48). MIT Press.
Willson, M. (2020). Questioning algorithms and agency: Facial biometrics in algorithmic contexts. In M. Filimowicz & V. Tzankova (Eds.), Reimagining communication: Mediation (pp. 252–266). Routledge.

Chapter 4

Autonomous Writing Futures

4.1 How Will Writers Work with Autonomous Agents?

We have discussed a dialogic approach toward collaboration that requires closer attention to emergent socio-technical assemblages, which are contextualizing writing practices. In Chap. 2, "Collaborative Writing Futures," we used a corporate video as a springboard for concepts. Apple at Work—The Underdogs illustrates how a fictional collaborative team of four individuals works under pressure amid moments of humorous panic to design a round pizza box. The crisis is time: no one has enough of it, and the chance for career advancement hangs in the balance. The team works feverishly with three different departments (Finance, Warehouse, and Creative) to produce the pitch and prototype. Apple's devices and tools are advertised throughout the video, but also present is Siri, Apple's digital assistant, who works as a partner and a quiet, rational contributor to the team. The creative idea for a round pizza box is transformed into digital sketches, text descriptions, material prototypes, photos, and even augmented reality, sent across countless analytic platforms with efficiency. While not depicted in this ad, we know that many complex automated processes operate to support the work, and some of the network exchanges are hidden. As these collaborative sessions take place at work, in kitchens, and in taxicabs, mediated by phones, tablets, watches, and notebooks, data is also mined. As work and personal life are entangled, platforms monitor personal website visits, physical locations, sleep patterns, and the personal communications of these four (fictional) people to feed a growing artificial intelligence (AI) ecosystem dedicated to automation and efficiency. The ad's narrative exemplifies the dynamic dialogic conversation occurring among participants as they live and work. However, we acknowledge there is more to this story that remains hidden. The most provocative change instigated by AI is automation.
This chapter's focus, autonomous writing futures, surveys specific examples of the broad global movement toward automating writing processes through the lens of our framework: social, literacy, and civic engagement (Table 4.1). Brundage et al. (2018) write, "AI enables changes in the nature of communication between individuals, firms, and states, such that they are increasingly mediated by automated systems that produce and present content" (p. 33). In response, this chapter pushes boundaries to speculate about writing futures and future writers.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. A. H. Duin and I. Pedersen, Writing Futures: Collaborative, Algorithmic, Autonomous, Studies in Computational Intelligence 969, https://doi.org/10.1007/978-3-030-70928-0_4

Table 4.1 Writing Futures framework, autonomous writing futures

Social: How will writers work with autonomous agents? Social robots; Cognitive assemblages; Digital assistant platforms; Cloud-based AI; Chatbots; Brain–computer interaction; Natural language generation

Literacy: How will writers contextualize future uses of digital-assistant platforms throughout writing? How will literacy practices change with the use of autonomous agents? Literacy for teaching AI assistants and learning from them

Civic engagement: What affordances of autonomous agents lend themselves to more ethical, personal, professional, global, and pedagogical deployments? Non-discrimination; AI transparency; Values and characteristics

Following Floridi (2013), Dignum (2019) defines autonomy as "the capacity of an agent to act independently and to make its own free choices" (p. 17). She goes on to explain that autonomy "is both seen as a synonym for intelligence, as well as that characteristic of AI that people are most concerned about" (p. 18). Traditionally, the disturbing assumption about automation is that "increases in automation must come at the cost of lowering human control" (Shneiderman, 2020, p. 495). Rather than dismiss this fear, we incorporate the valid concerns people have about autonomous technology in writing fields. We point to socio-technical assemblages that involve automating aspects of creative production, which we acknowledge covers a great range of technologies. We discuss AI systems deployed on data platforms, devices, and actual social, anthropomorphized robots. However, the chapter also focuses on human agency and the changing role of the writer. Tham et al. (2018) write, "as immersive technologies continually quantify our being and identity (Neff & Nafus, 2016), whether phenomenologically or neurologically (Gruber, 2016), we must expand our theoretical capacity to study how this phenomenon affects or shifts the writer's sense of self and agency" (p. 1).

4.1.1 The Rise of Virtual Assistants

Alexa, how do you spell "superfluous"? Siri, what does "epideictic" mean? Virtual assistants are personalized digital software-based agents that use AI algorithms to perform tasks. The component that makes them most compelling is AI agency, defined as the capacity to act autonomously (independently), to adapt (react to and learn from changes in the environment), and to interact (perceive and respond to other human and artificial agents) (Dignum, 2019, p. 17). "Virtual assistants have access


to a broader and more personal range of data than, say, a search engine" (Markoff, 2019). Virtual assistants such as Siri, Alexa, or Google Assistant on mobile phones, or those on home-based devices like Amazon's Echo, Google's Google Home, or Apple's HomePod, inform everyday culture and professional practices because of their reach. Statista reports that "[as of] 2019, there are an estimated 3.25 billion digital voice assistants being used in devices around the world. Forecasts suggest that by 2023 the number of digital voice assistants will reach around eight billion units—a number higher than the world's population" (Statista Research, 2019). The way that people have adapted en masse to recent technology such as AI agents also reveals the trajectory for how they will be used in the future. In the introduction, we discussed the utility of the Fabric of Digital Life database to reveal emerging inventions in ecosystems through not only the monitoring of commercial activities but also observing technocultural change instantiated in a range of discourses. By archiving items, Fabric captures the traces of emergence, social determinants, and implications and consequences, both positive and negative. In 2013, representations of embodied artifacts were collected at a time when carryable, mobile technologies had recently emerged and gained mass market uptake. Discussed often in this book, Google Glass provided a case study for launching this kind of analysis (Duin et al., 2016). From the original pilot launched by Sergey Brin in 2012 to the latest version coming to market in 2020, the Glass phenomenon provides rich insight into how participants both adopt and adapt to technologies. The post-internet era has proven that users understand their identities and their personal networks to a certain extent in terms of the devices that they hold in their hands.
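Dignum's three capacities of AI agency defined in this section (acting autonomously, adapting to the environment, and interacting with other agents) can be made concrete in a toy sketch. Everything below (the class, the rule, the behavior) is our invention for illustration; no real assistant is implemented this way.

```python
# Toy illustration of Dignum's (2019) three capacities of AI agency:
# act autonomously, adapt to the environment, and interact with others.
# The design is entirely hypothetical.

class ToyAgent:
    def __init__(self):
        self.preferences = {}  # adapted state, learned from feedback

    def interact(self, utterance):
        """Perceive and respond to another (human) agent."""
        word = utterance.rsplit(maxsplit=1)[-1].strip('?"')
        return self.act(word)

    def act(self, word):
        """Choose a response independently, using what it has learned."""
        style = self.preferences.get("style", "plain")
        return "-".join(word.upper()) if style == "spelled" else word.upper()

    def adapt(self, feedback):
        """React to and learn from changes in its environment."""
        if feedback == "spell it out":
            self.preferences["style"] = "spelled"

agent = ToyAgent()
print(agent.interact('How do you spell "superfluous"?'))  # SUPERFLUOUS
agent.adapt("spell it out")
print(agent.interact('How do you spell "superfluous"?'))  # S-U-P-E-R-F-L-U-O-U-S
```

The point of the sketch is only that the three capacities are separable design concerns, which is why Dignum can discuss them independently.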
Ad campaigns by the most profitable companies, such as Apple or Google, heavily promote the lifestyles that follow from device ownership. At the same time, large companies use predictive advertising to promote their own brands and future products while working on R&D for technologies that might take decades to emerge, such as augmented reality; “predictive ads predict rather than announce, manipulate the expectations of participatory media and function more as entertainment rather than information” (Encheva & Pedersen, 2014, p. 11). Large corporations releasing computing devices, often perceived as luxury items, changed mainstream culture, work, communication, education, and social practices, often imbued with the theme of futurism. Likewise, the hype surrounding AI is used to justify adopting the technology before we think about the implications. Certain historic moments led to core changes in personal computing that are relevant to note today concerning AI agents. Apple has been celebrated not only for the actual products it releases, but also for the design aesthetics it fosters, which influence most other personal computing devices, even those made by competitors (Soukup, 2016, p. 55). The Apple iPod music player was released in October 2001 and became a device that millions of people carried with them everywhere. At its height in 2008, Apple sold nearly 55 million units of the iPod (Richter, 2019). Its simplicity and ubiquity, in part, inspired the Apple iPhone release in June 2007, which opened the gateway for cloud-based communication across multiple modes due to the Apple apps market. Charles Soukup (2016) explains that “mobile devices provide a lifeline that sifts through, limits, and simplifies the complexities of rapid,


vast, circulating information in postmodern culture … . [They] frame, and filter an overwhelming media-saturated culture. Mobile devices such as the iPhone and iPad clarify and constrain the chaos whirling about the hyper-mediated postmodern experience" (p. 2). While people had been socialized to the idea of highly personal computer devices to make life easier, the onslaught of networked computing platforms and the rise of AI apps followed closely, making everyday digital interactions and professional spheres much more complex than before. Apple had to continue to strive for simplicity. One of the apps working along those lines was Apple's Siri, released in 2011 in the App Store. Siri is a virtual assistant with a natural-language user interface that speaks to users, answers questions in text, and performs actions across myriad internet services. Virtual assistants use voice recognition, "the ability of natural language processing (NLP) software to 'understand' human [spoken] language" (Kugler, 2019). Today, Siri is part of most of Apple's software platforms and communicates with users on Apple's devices (iPhone, Watch, and AirPods), all highly integrated with users and their personal communication practices. The compelling experience of communicating with Siri is meant to mimic, first, that of a close personal friendship, the lifeline that people expect of Apple. However, AI agents like Siri evolved to perform far more functions. They consult, compose, work, and document information with humans through merged personal/professional spheres, behaving as if they are capable of all roles—confidantes, co-workers, and organizers. McKee and Porter (2017) write, "we're entering a 'post-app' network where AI agents and their algorithms replace applications (e.g., Carter, 2016).
Rather than have 50 different apps for doing different tasks—ordering pizza, reading the paper, purchasing plane tickets—people will just have one AI agent who will connect and share information with other AI agents to get tasks done" (p. 143). On another level, the race to build virtual assistants also "involves getting computer algorithms, and AI technology itself, to understand human emotions and to respond emotionally" (Pedersen, 2016). What happens when we cannot tell the difference between our human co-workers and our AI agents? The popularity of virtual assistants has helped spawn the creation of "virtual humans": lifelike visual, virtual personas that have nuanced facial, gestural, and spoken interaction. Screen-based virtual humans mimic human physical reactions so as to appear empathetic, unique, and mildly emotional. It will take an extensive slate of functional AI technologies to build them, and they will combine complex technologies to operate, sensing users' biofeedback throughout each exchange. One early prototype is Rachel, a virtual concierge app depicted with a physical body (see Fig. 4.1) who offers to take your hotel reservation in much the same manner as Siri or Alexa. The company that created Rachel, Quantum Capture, has worked collaboratively with a team of doctors at Sick Kids Hospital and Sunnybrook Health Sciences Centre in Toronto to create realistic immersive virtual reality simulations of surgical procedures, resulting in a training tool for clinicians (Webb, 2018). Using virtual humans in hospitals to serve in various roles would be a logical next step. The goal to design hyperreal "physical" qualities for virtual humans responds to the need for legitimacy (ethos) for virtual assistants to communicate in real-life scenarios, such as helping patients navigate a hospital.
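McKee and Porter's "post-app" scenario quoted above, in which a single AI agent routes every task that once needed a separate app, can be sketched as a simple keyword dispatcher. The handlers, keywords, and routing rules here are hypothetical illustrations, not any vendor's design.

```python
# Minimal sketch of a "post-app" agent: one entry point that routes
# requests to task handlers, instead of one app per task.
# All handlers and keyword rules are invented for illustration.

def order_pizza(request):
    return "Ordering pizza: " + request

def buy_tickets(request):
    return "Searching flights: " + request

def read_news(request):
    return "Fetching headlines: " + request

ROUTES = [
    (("pizza",), order_pizza),
    (("flight", "ticket", "plane"), buy_tickets),
    (("news", "paper", "headlines"), read_news),
]

def agent(request):
    """Single agent: match the request to a task and delegate."""
    text = request.lower()
    for keywords, handler in ROUTES:
        if any(k in text for k in keywords):
            return handler(request)
    return "Sorry, no service can handle: " + request

print(agent("Order a large pizza"))
print(agent("Book plane tickets to Toronto"))
```

Real assistants replace the keyword table with statistical intent classification, but the architectural shift McKee and Porter describe (many apps collapsing into one dispatching agent) is the same.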


Fig. 4.1 Quantum Capture marketing video of Rachel, Virtual Hotel Concierge application (Photo permission: Quantum Capture)

Virtual humans, designed to appropriately communicate and respond to people’s queries, might reduce anxiety in situations that can be alienating. At the same time, the speculative nature of this medium is palpable, and the hype is significant. Another “virtual human” startup company, Neon, announced itself during a CES trade show, as one tech journalist wrote, “in a press release rich in hyperbole, complicated machine learning jargon and a pretty opaque mission statement. There was also the promise of Neon’s ‘reacting and responding in real-time’” (Smith, 2020). While debunking the current state of the tech, the journalist expressed the strong cultural impetus to have these kinds of rich virtual interactions with machines in the future. De Visser et al. (2018) argue for recasting relationships between humans and autonomous agents as if they are “two nearly equal collaborators” (p. 1422), rejecting the autonomous-agents-as-tools model. With virtual humans responding using lifelike gestures, facial reactions, and unexpected movements, collaborative interaction becomes much more compelling and immersive. AI is rapidly learning to read human emotion (Strong et al., 2020). There are many questions to ask about this much-more-embodied interaction with AI agents. Will they help corporate users, students, or members of the public get information? With citizens ordered under extensive pandemic rules or lockdown, could they make information accessible to quarantined people? Who will own the data and the video records of these interactions after the fact? Will these virtual bodies disrupt or further solidify gender and racial stereotypes? This is the time to ask questions about the future consequences of disruptive technology. One needs to keep in mind that the key commercial impetus for virtual assistant AI agents, such as Microsoft’s Cortana, Amazon’s Alexa, and Google’s Assistant,


has changed dramatically due to the datasphere surrounding them. McKee and Porter (2017) explain the collaborative model:

More frequently than many people realize, our communications are occurring not just with fellow humans but also with AI agents who aim to 'learn' from us and who also, as part of their programming, gather, search, and data mine our communications, delivering to individuals and corporations petabytes (and maybe even at this point zettabytes) of data. Nearly everything we say and do online is—with and without our permission—being collected, aggregated, de-aggregated and analyzed by computer and human agents. We are immersed in digital technologies that shape how, when, where, why, and with whom we communicate. (p. 2)

The co-constitutive relationship of AI agents learning from us and mining our data to sell on the market, while we are learning and working with them, poses one of the central challenges to the Writing Futures paradigm we tackle in this book. How can AI agents be anything more than parasitic, as McKee and Porter emphasize, if these complex platforms are constantly data mining our communications? Our relationships with AI agents, and specifically virtual assistants, are at a critical development juncture due to these issues over the nature of collaboration. In research and development spheres, a paradigm shift is underway to build robust, open-source platforms for trustworthy virtual assistants. Stanford University researchers have launched a movement to achieve an open, non-proprietary linguistic web in which linguistic user interfaces would be made available to any virtual assistant. Markoff (2019) writes,

[Stanford researchers] are encouraging makers of consumer products to connect their devices to the Almond virtual assistant through a Wikipedia-style service they call Thingpedia. It is a shared database in which any manufacturer or internet service could specify how its product or service would interact with the Almond virtual assistant.

A trustworthy AI would share data and also maintain privacy; it “would allow individuals and corporations to avoid surrendering personal information as well as retain a degree of independence from giant technology companies” (Markoff, 2019). The next phase for virtual assistants is also geared to more accurate interpretation of language: “While machine accuracy in understanding spoken words is now routinely above 90 percent, accuracy in understanding complex natural language is substantially lower” (Markoff, 2019). We conclude this part on virtual assistants with this key concept of open collaboration with AI agents based on a higher degree of trustworthiness and privacy, an idea that will continue to evolve for citizens and workers given the changing scenarios each will face. We also point to the idea of synergy. Tsvetkova et al. (2017) define human–machine networks as “assemblages of humans and machines whose interactions have synergistic effects. This means that the effects generated by the [human–machine networks] should exceed the combined effort of the individual actors involved” (p. 3). Rather than imagining or speculating future scenarios in terms of replacing human work or benefiting only corporate actors, we encourage


the notion of synergy as a governing value and one that we return to later in this chapter.

Prompt

Literacy—Explore Stanford University's Open Virtual Assistant Lab and the tools it is developing for its Almond platform. Its stated goals are "to create an open, nonproprietary linguistic web, to accelerate and democratize natural language interface technology via open collaboration, to protect privacy with interoperable virtual assistants that can run on users' devices." Explore some of the recent journalistic coverage (Markoff, 2019) of how the platform, which is constantly evolving, will change the way people communicate and collaborate with machines.
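The Thingpedia idea quoted in this section, a shared database in which any manufacturer specifies how its product interacts with the assistant, amounts to a capability registry. The following is a minimal sketch of that pattern; the registry API, device names, and capability names are all invented and do not reflect the real Thingpedia schema.

```python
# Sketch of a Thingpedia-style shared registry: manufacturers register
# how their device responds to a named capability, and any assistant
# can dispatch to it. Entries and capability names are invented examples.

REGISTRY = {}

def register(device, capability, handler):
    """A manufacturer declares how its device handles a capability."""
    REGISTRY[(device, capability)] = handler

def invoke(device, capability, *args):
    """Any virtual assistant can drive any registered device."""
    handler = REGISTRY.get((device, capability))
    if handler is None:
        return f"{device} does not support {capability}"
    return handler(*args)

# Two hypothetical manufacturers registering the same capability:
register("acme-lamp", "turn_on", lambda: "acme-lamp: light on")
register("globex-lamp", "turn_on", lambda: "globex-lamp: brightness 100%")

print(invoke("acme-lamp", "turn_on"))        # acme-lamp: light on
print(invoke("acme-lamp", "set_color", "red"))
```

Because the registry, not the assistant vendor, owns the device descriptions, any interoperable assistant can consume them, which is the non-proprietary point of the Stanford project.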

4.1.2 AI Writing

Professional writing and digital publishing platform companies increasingly require human writers to employ AI for automating and analyzing writing tasks across a range of functions that are becoming ever more intertwined. We have named this section "AI Writing" to capture a scope of activities positioned for future trends. Until now, AI and writing have most often been joined through the concept of assistantship, leading to the common usage and conceptualization of AI writing assistants, the first commercial deployments of automated AI writing technologies; we discussed the cultural heritage of assistants in the previous section. However, co-writing content with AI is eclipsing the older notion of AI assistantship (Seeber et al., 2020). Professional writers and students can simply drop rough ideas into a writing program and "the software will recommend language to express what he or she is trying to say—cocreating with the human based on his or her ideas" (Volini et al., 2020). As AI technologies are designed to assume the agentive roles of editorship and authorship, we need to reenvision the way we conceptualize writing with AI agents. These technologies span a much wider breadth of writing products, including research platforms, grammar and tone editing websites and apps, text summarizers, content automation, and, finally, full-scale AI article authoring. The next phase of emergence will further alter composition in broader terms. Later in this chapter we discuss how IBM is designing an AI technology that can debate humans in language exchanges on complex topics, building persuasive arguments and framing well-informed decisions.

4.1.2.1 Natural Language Generation (NLG)

One of the most pivotal technologies for professional writing is natural language generation (NLG) (Mckinley, 2019). NLG is "the sub-field of artificial intelligence and computational linguistics that is concerned with the construction of computer systems that can produce understandable texts in English or other human languages from some underlying non-linguistic representation of information" (Reiter & Dale, 1997, p. 57). More specifically, it means the "translating [of] heterogeneous information into natural language text" (Giaretta & Dragoni, 2019, p. 91). Recently, companies have released AI writing applications through NLG. While NLG draws on several AI techniques, including machine learning (ML), neural networks, and computer vision, it mainly falls under the computing branch known as natural language processing (NLP) (see Fig. 4.2). NLP is an AI functional application defined as the "use of algorithms to analyze human (natural) language data so that computers can understand what humans have written or said and further interact with them" (WIPO, 2019, p. 148). In the previous section, we discussed voice recognition, another branch of NLP that enables virtual assistants to understand people and respond to them. NLG will continue to evolve applications "that can express various voices and emotions naturally" (Oh et al., 2020) within written compositions customized for readers.

Fig. 4.2 Functional AI applications related to NLP

NLG breaks down into two further technical categories that we discuss below: text-to-text generation and data-to-text generation. Both involve authorship and agency by human and nonhuman agents and datasets used in combination with NLG. Text-to-text generation involves software collecting publicly available digital information, often human-written texts, and applying algorithms to transform them into new texts. Gatt and Krahmer's (2018) "Survey of the State of the Art in Natural Language Generation: Core Tasks, Applications and Evaluation" provides a list of text-to-text generation methods, which are "applications that take existing texts as their input, and automatically produce a new, coherent text as output" (p. 65):

• machine translation, from one language to another
• fusion and summarization of related sentences or texts to make them more concise
• simplification of complex texts, for example to make them more accessible
• automatic spelling, grammar and text correction
• automatic generation of peer reviews for scientific papers
• generation of paraphrases of input sentences
• automatic generation of questions, for educational and other purposes (p. 66)
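To make the fusion and summarization method concrete, here is a deliberately naive extractive summarizer in Python: score each sentence by the average corpus frequency of its words and keep the top-scoring sentence. This is a toy sketch of the text-to-text idea, not how production summarizers work.

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=1):
    """Toy text-to-text generation: keep the sentence(s) whose words
    are most frequent across the whole input."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)
    chosen = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    return " ".join(s for s in sentences if s in chosen)  # keep original order

text = ("Natural language generation produces text from data. "
        "Text-to-text generation rewrites existing text. "
        "Summarization compresses text into shorter text.")
print(extractive_summary(text))
# → Text-to-text generation rewrites existing text.
```

The input is existing human-written text and the output is a new, shorter text, which is precisely what distinguishes this category from data-to-text generation below.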

To reiterate, text-to-text generation comprises tools, applications, and platforms that perform the tasks mentioned above, originating from content by human writers. Data-to-text generation converts data into a text without using previous content written by human writers. Put another way, this kind of language generation is not derived from linguistic inputs: "what distinguishes data-to-text generation [from text-to-text generation] is ultimately its input. Although this varies considerably, it is precisely the fact that such input is not—or isn't exclusively—linguistic" (Gatt & Krahmer, 2018, p. 68). Some examples of data-to-text generation outputs are cataloged by Gatt and Krahmer:

• soccer reports
• virtual 'newspapers' from sensor data
• text addressing environmental concerns, such as wildlife tracking
• weather and financial reports
• summaries of patient information in clinical contexts
• interactive information about cultural artefacts, for example in a museum context
• text intended to persuade or motivate behavior modification (p. 67)
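Weather and financial reports recur in this list because their input is a structured record and their output follows strong genre conventions. A minimal template-based sketch in Python shows the basic move from non-linguistic data to text; real data-to-text systems add document planning, aggregation, and lexical choice stages, and the field names and readings here are invented.

```python
def weather_report(obs):
    """Toy data-to-text generation: a dict of readings in,
    a short natural-language forecast out."""
    trend = "rising" if obs["pressure_change_hpa"] > 0 else "falling"
    sky = "clear skies" if obs["cloud_cover_pct"] < 30 else "overcast conditions"
    return (f"{obs['city']} can expect {sky} with a high of {obs['high_c']}\u00b0C. "
            f"Barometric pressure is {trend}, and winds will reach "
            f"{obs['wind_kmh']} km/h.")

# Invented sensor readings; a real system would pull these from a data feed.
obs = {"city": "Oshawa", "high_c": 21, "cloud_cover_pct": 15,
       "pressure_change_hpa": 1.2, "wind_kmh": 18}
print(weather_report(obs))
# → Oshawa can expect clear skies with a high of 21°C. Barometric
#   pressure is rising, and winds will reach 18 km/h.
```

Nothing linguistic entered the pipeline: the entire text was realized from numbers and category labels.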

Understanding these two categories, and the technologies that support them, is fundamental to basic AI literacy. In the next subsections, we survey several AI application fields that employ NLG for AI writing, offering a snapshot of these technologies as well as some of the issues that follow from their emergence. We suggest that AI automation should involve cooperative scenarios among humans and machines rather than machine autonomy: in this case, writing texts without humans.

4.1.2.2 Automated Content Platforms

AI Writer, Wordsmith (Automated Insights), InferKit, and United Robots all promote automated writing for writing professionals and companies. AI Writer describes itself as "a service that helps you create better content in less time! Just feed our algorithm a headline and it will do all the research work for you. Yes, it's really that simple!" (2019) (see Fig. 4.3). AI writers are crude without training, but platforms provide custom generators through developer APIs so that clients can teach the tool to produce text according to a genre, gleaned from example documents. InferKit explains how its generators work: "They try to mimic [their] format, topic, tone, vocabulary and general style of writing" (InferKit, 2020). The beginnings and ends of written passages authored by human and nonhuman agents are completely entangled cognitive assemblages. However, writing with AI tools can be used for deceptive ends despite the seemingly neutral offerings at these sites. InferKit quirkily describes the platform: "Creative and fun uses of the network include writing stories, fake news articles, poetry, silly songs, recipes and just about every other type of content." The notion of producing "fake news articles," a well-documented negative global outcome of AI, needs to be closely monitored, and professional writers in particular are the vanguard who can help stop it by recognizing it at these nascent stages of emergence.

Fig. 4.3 Screenshot of AI Writer website (Photo permission: AI-Writer.com)

Prompt
Explore the advertising messages of automated content platforms. Is writing more about data analysis than composition? Watch this video by Marc Zionts, CEO of Automated Insights, which sells Wordsmith, a product that says it is revolutionizing "the way professionals write with data." Zionts claims that a gap exists in the area of reading data visualizations and platform dashboards: "only about a third of the people in an enterprise actually can correctly look at a dashboard or visualization and take away the intended meaning of the analysts that created it."
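InferKit's generators are neural networks, but the claim that software can "mimic format, topic, tone, vocabulary and general style" can be caricatured with a much older technique: a word-level Markov chain trained on example text. The sketch below is an analogy only, not InferKit's method, and the training sample is invented.

```python
import random
from collections import defaultdict

def train(text):
    """Build a word-level bigram table from example text."""
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, length=8, seed=0):
    """Walk the table, picking each next word from those that
    followed the current word in the training text."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and table.get(out[-1]):
        out.append(rng.choice(table[out[-1]]))
    return " ".join(out)

sample = ("the assistant drafts the report and the editor "
          "revises the report before the deadline")
print(generate(train(sample), "the"))
```

Every word the generator emits was seen in the training sample, which is the crudest possible version of learning a client's genre from example documents.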

4.1.2.3 Robo-journalism

People are losing their jobs due to automation; writers need to be in front of digital transformation. A recent story explains how “Microsoft decided to stop employing humans to select, edit and curate news articles on its homepages”; these were people who had previously been employed as journalists (Waterson, 2020). In Chap. 2, we discussed the concept of abandoning nostalgic ideas of solo authorship to embrace writing as a dialogic activity informed by human–machine interactions.


In the following paragraphs, we discuss automated content generation, or data-to-text language generation, for journalism specifically, which has significantly pushed the boundaries of algorithmic interactions between humans and nonhumans. In specific terms, robot journalism, robo-journalism, or algorithmic journalism is "the process of automatically writing complete and complex news stories without any human intervention" (Beckett, 2015). Dörr (2016) defines algorithmic journalism as

the (semi)-automated process of natural language generation by the selection of electronic data from private or public databases (input), the assignment of relevance of pre-selected or non-selected data characteristics, the processing and structuring of the relevant data sets to a semantic structure (throughput), and the publishing of the final text on an online or offline platform with a certain reach (output).

News agencies such as the Associated Press adopted technology from Automated Insights in 2018 to automate news stories (see Caswell & Dörr, 2018): "the goal was for reporters to focus less on numbers and more on nuance, delivering more value to the news organizations that rely on them every day" (Automated Insights, 2020). In support of this claim, there are also positive accounts of mundane aspects of professional writing that automation could replace. One journalist explains the labor of writing earnings reports and his hope that it could be automated:

It was a miserable early-morning task that consisted of pulling numbers off a press release, copying them into a pre-written outline, affixing a headline, and publishing as quickly as possible so that traders would know whether to buy or sell. The stories were inevitably excruciatingly dull, and frankly, a robot probably could have done better with them. (Roose, 2014)
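The earnings-report chore Roose describes maps neatly onto Dörr's input, throughput, and output stages. A schematic Python sketch with invented figures follows; no vendor's actual pipeline looks this simple, but the division of labor is the same.

```python
def select_data(press_release):
    """Input: select the relevant figures from structured data."""
    return {k: press_release[k] for k in ("company", "quarter", "eps", "eps_expected")}

def structure(data):
    """Throughput: derive the angle of the story from the figures."""
    data["beat"] = data["eps"] > data["eps_expected"]
    return data

def realize(data):
    """Output: render the structured data as publishable text."""
    verb = "beat" if data["beat"] else "missed"
    return (f"{data['company']} reported earnings of ${data['eps']:.2f} per share "
            f"for {data['quarter']}, which {verb} analyst expectations of "
            f"${data['eps_expected']:.2f}.")

# Invented figures standing in for a parsed press release.
release = {"company": "Example Corp", "quarter": "Q3", "eps": 1.42,
           "eps_expected": 1.35, "revenue": 7.1e9}
print(realize(structure(select_data(release))))
# → Example Corp reported earnings of $1.42 per share for Q3, which
#   beat analyst expectations of $1.35.
```

The human labor Roose dreaded is exactly the select-structure-realize loop, which is why earnings stories were among the first to be automated.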

The argument that automation will improve writers' jobs persists not only in journalism but in most writing fields. The New York Times, Reuters, The Washington Post, and Yahoo! all use automated text generation for articles (Marr, 2019). However, the recent technological disruption surrounding journalism is an important exemplar of the ideas and issues we highlight in this book. Traditionally, the expectation is that human journalists write articles informed by research, interviews, and opinions. As authors, they are protected by a heritage of basic democratic rights, such as freedom of the press. The United Nations' Universal Declaration of Human Rights ([1948], 2020) states, "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference, and to seek, receive, and impart information and ideas through any media regardless of frontiers." Post-internet society is undergoing massive transformation due to the circulation of news on digital platforms with practices that fall well beyond the traditional structures of journalism. Pasquale (2015) writes about journalism in The Black Box Society: The Secret Algorithms that Control Money and Information: "The power of the old media is waning. Traditional journalism is in crisis. Some predict that investigative reporting will be sustainable only through charity" (p. 95). Fake news, or the circulation of false information, is the most obvious negative outcome that threatens human rights (United Nations, 2017). On the more positive side, there are initiatives to help writers and journalists use algorithmic natural language processing tools for creativity that are not necessarily rooted in autonomous systems. One article explains that "although journalism is one of the creative industries, explicit support for the creative skills of journalists is rare" (Maiden et al., 2020). It proposes the INJECT tool, which is based on the idea that assisting creative work needs a foundation in human-centric values: "Developing ideas is a divergent and associative process that can be spontaneous and deliberate, and involves retrieving relevant items from memory and generating associations with new information. By contrast, evaluating ideas is more analytic, but can be interleaved tightly with developing ideas" (Maiden et al., 2020, p. 48). A good source on the massive transformation in journalism driven by the emergence both of platforms like Google and Facebook and of AI techniques is The Impact of Digital Platforms on News and Journalistic Content, produced by the University of Technology Sydney Centre for Media Transition (Wilding et al., 2018). It describes the underlying technology, its consumer ramifications, and the reasons journalism is becoming increasingly deprofessionalized. While we cannot cover the full scope of these issues in this book, we bring the process of robo-journalism to the attention of professional and technical writers as a means to chart future trajectories for the field.

4.1.3 Creative AI

AI inspires philosophical quandaries because of its capacity not only to learn from humans but also to create things for them. How can we ensure ethically aligned design when AI technologies perform creative tasks better than humans? For many writers, technology that features creative or artistic AI leads to questions about the future of the arts and humanities. Pew Research reports that AI "experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities" (Anderson & Rainie, 2018). Pew also reports that "Advances in AI will affect what it means to be human, to be productive, and to exercise free will" (Anderson & Rainie, 2018). People who hold jobs in the creative industries will need to understand automation as a core aspect of their work amid changing contexts. The risks are also evolving. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (2019) has identified risks, for example, with "the possibility that AR [augmented]/VR [virtual] realities could copy/emulate/hijack creative authorship and intellectual and creative property with regard to both human and/or AI-created works" (p. 233). Markoff (2020a) queries the same value systems, posing "the question of whether the programs are genuinely creative. And if they are able to create works of art that are indistinguishable from human works, will they devalue those created by humans?" Siemens (2020) clarifies the crux of the paradox for learning subjects: "While debate remains unresolved regarding AI ending or augmenting humanity, dramatic short-term impacts on learning (and, as a result, on colleges and universities) can be anticipated" (p. 60). We encourage professional writers, teachers, and students of writing to involve themselves in understanding how writing is being transformed, to be aware of creative automation, and to learn about AI technologies that are constantly unfolding.

In Chap. 1, we discussed socio-technical assemblages with machines, pointing to narratives often discussed in critical posthumanism, and the concept of human–machine synergy as a dialogical ideal for the future. OpenAI has been at the forefront of innovation in natural language generation. Its recent foray into developing predictive text algorithms, GPT (Generative Pretrained Transformer), has led to three significant releases: GPT, GPT-2, and GPT-3. Questions about the capacity of machines to learn to be creative have come to the fore (Seabrook, 2019). In fact, OpenAI at first held back GPT-2's full model from the public, claiming it was powerful enough to mimic human writing and could be "dangerous in the wrong hands" (Pringle, 2019). OpenAI was concerned GPT-2's full model could "generate synthetic propaganda for … [certain extremist] ideologies" (Solaiman et al., 2019). Creative AI, once fully released, will significantly transform communicative actions and interactive digital behaviors. One author describes the evolution from GPT to GPT-3:

The original GPT, and GPT-2, are both adaptations of what's known as a Transformer, an invention pioneered at Google in 2017. The Transformer uses a function called attention to calculate the probability that a word will appear given surrounding words. OpenAI caused controversy a year ago when it said it would not release the source code to the biggest version of GPT-2, because, it said, that code could fall into the wrong hands and be abused to mislead people with things such as fake news. The new paper takes GPT to the next level by making it even bigger. GPT-2's largest version, the one that was not posted in source form, was 1.5 billion parameters. GPT-3 is 175 billion parameters. GPT-3 is trained on the Common Crawl data set, a corpus of almost a trillion words of texts scraped from the Web. "The dataset and model size are about two orders of magnitude larger than those used for GPT-2," the authors write. GPT-3 with 175 billion parameters is able to achieve what the authors describe as "meta-learning." (Ray, 2020)

In simplified terms, meta-learning means that a machine can learn to learn. Amodei, a computational neuroscientist and OpenAI's director of research, explains: "Until now, if you saw a piece of writing, it was like a certificate that a human was involved in it. Now it is no longer a certificate that an actual human is involved" (Seabrook, 2019). OpenAI licenses GPT-3 to Microsoft for its own products, while OpenAI accesses Microsoft's Azure AI platform for training models. One can conclude that GPT-3 will emerge through more commercialized platforms on a much broader scale due to this partnership. GPT-3's evolution as a writer continues. In September 2020, The Guardian ran a story "by GPT-3," stating, "We asked GPT-3, OpenAI's powerful new language generator, to write an essay for us from scratch. The assignment? To convince us robots come in peace" (GPT-3, 2020). Needless to say, this op-ed caused some backlash in journalism circles. We discuss this somewhat provocative event in Chap. 5, focusing on how we are hailed as subjects, actual readers, writers, students, consumers, and members of the public, to trust AI.
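The attention function at the heart of the Transformer can be shown numerically. The following toy Python sketch implements scaled dot-product attention, the core operation of the 2017 Transformer design, over made-up two-dimensional vectors standing in for learned word representations; real models use thousands of dimensions, many attention heads, and dozens of layers.

```python
import math

def softmax(xs):
    """Turn raw scores into probabilities that sum to one."""
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention: weight each value by how well
    its key matches the query, then blend the values."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    blended = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
    return blended, weights

# Made-up 2-d vectors for three context words; the numbers are invented.
keys   = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.2]]
values = [[0.2, 0.8], [0.9, 0.1], [0.5, 0.5]]
query  = [0.0, 1.0]  # the position whose next word is being predicted

blended, weights = attention(query, keys, values)
print([round(w, 2) for w in weights])  # the second key matches the query best
```

The output is a probability-weighted blend of the context: precisely the "probability that a word will appear given surrounding words" described above, scaled up by orders of magnitude in GPT-3.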


On the dystopian side of creative practices, another project sought to reveal the highly manipulative potential of AI writing: "researchers taught an AI to study the behavior of social network users, and then design and implement its own phishing bait. In tests, the artificial hacker was substantially better than its human competitors, composing and distributing more phishing tweets than humans" (Dvorsky, 2017). We conclude this section on creative AI with these final examples because they serve as a segue for understanding the impact that autonomous agents will have on future literacy practices.

4.2 How Will Literacy Practices Change with Use of Autonomous Agents?

In December 2019, the Finnish government launched an Elements of AI course for citizens with the goal of demystifying AI, "to encourage as broad a group of people as possible to learn what AI is, what can (and can't) be done with AI, and how to start creating AI methods" (Ministry of Economic Affairs and Employment, 2019). The course's mandate is to extend AI literacy across Europe: "We want to equip EU citizens with digital skills for the future." Claiming 430,000 participants, it takes a pan-European approach, with the "ambitious goal … to educate 1% of European citizens by 2021" (see https://www.elementsofai.com/eu2019fi). Instead of listing types of technologies and actual commercial deployments of AI, the course carefully explains major paradigm shifts in concepts such as probability, uncertainty, forecasting, and decision-making: the core concepts for understanding AI's exponential reach. We believe that professional and technical communicators will be tasked with learning conceptual changes about autonomous technologies on an ongoing basis in order to both adapt to and adopt AI in meaningful ways. As they learn to articulate a revisioning of human–nonhuman collaboration in a rapidly changing context, this kind of core conceptual foundation will help. At the same time, previous models are also helpful. Another point we emphasize about AI literacy and autonomous technologies is that the field of professional and technical communication has already fostered relevant resources. A recent news article argues that the AI ethical principle of human control should function not only as a civic value system but as a design goal for AI automation (Markoff, 2020b).
It suggests that AI automation should involve cooperative scenarios among humans and machines rather than accepting machine autonomy, or systems that run without human intervention or even human understanding of the algorithms. Its examples involve degrees of human control for self-driving vehicles that should form cooperative scenarios. We believe that casting the same principle over natural language generation (NLG) will help instructors and students accommodate the concept of co-writing with AI in a more productive manner. The article's basis for the "human control" argument draws on established theories of human–computer interaction (HCI) that led to much improved interfaces for computer users over several decades (Shneiderman, 1998).

At the heart of this framework, and within the idea of an AI literacy, lies the issue of changing roles and infrastructures. Understanding technological agency developed out of human and nonhuman collaboration involves looking to future landscapes. A good example is robots, which have been deployed as teachers, tutors, peers, tools, and caregivers. They will increasingly be tasked with leading human-instigated projects in the role of authorities, and work is underway to better humanize their interactions with people (Sciutti et al., 2018). Speculation surrounding the combination of autonomous technologies and humanoid or social robots is rampant. Fabric of Digital Life monitors the emergence of anthropomorphized, humanoid, social robots through the Humanoid Robots collection (Cooper, 2017–2020), which explores the rapid emergence of robots specifically designed to look like and interact with humans, to be embodied, which sometimes also includes behaving in ambient embodied relationships. It historicizes and contextualizes human–robot interaction (HRI) as a rapidly evolving, data-blended field. At the time of writing, the collection has nearly 258 items with 96 representations that concentrate on both pre-release prototypes and commercial inventions, some of which are now products. Pedersen and Iliadis (2020) discuss ambient relationships: "the broader goal for ambient technologies is humans augmented in external spaces and places to achieve myriad interrelations with AI technologies (gait and heat detection, facial recognition, and so on) rather than humans working with single-use devices to achieve isolated goals (e.g., getting fit, paying for goods)" (p. xix). The most popular commercial robot today is Pepper from SoftBank Robotics. Its advertising claimed in 2014 that "Pepper is the first robot designed to live with humans. Engaging and friendly, Pepper is much more than a robot, he's a companion able to communicate with you through the most intuitive interface we know: voice, touch and emotions" (see Fig. 4.4). Six years later, during the 2020 COVID-19 pandemic, Pepper was marketed as a multi-use communicating agent helpful across many sectors, including health, retail, and office environments. This video features Pepper explaining its advancement in cultural awareness to British MPs in 2017. If robots like Pepper advance to better incorporate voice recognition, natural language generation, and computer vision, they will be able to communicate in more advanced ways. The Report of COMEST on Robotics Ethics, prepared by UNESCO's World Commission on the Ethics of Scientific Knowledge and Technology (COMEST), provides a good source for understanding the social implications projected in future visions for the next stages.

Prompt
Explore the multiple forms of technological embodiment that robots instigate. Robots are planned to transform our workplaces, homes, classrooms, and care homes. Notice the convergences among nonhuman agents taking place to layer skills. Robots are hosting virtual assistants like Amazon Alexa to provide people with access to corporate platforms, leading to more integrated ambient interactivity. Project a future in which human–robot interaction will advance communication abilities for both humans and robots by looking at examples of current prototypes and use cases.


Fig. 4.4 Pepper from SoftBank Robotics Europe (2020) (CC BY-SA 4.0)

4.3 What Affordances of Autonomous Agents Lend Themselves to More Ethical, Personal, Professional, Global, and Pedagogical Deployments?

Professional and technical communicators have a crucial role to play as the world comes to terms with AI on so many different fronts. Governments are trying to set standards to protect citizens. Lawmakers are wading into the difficult task of interpreting how AI should function through already established laws and practices. Private sector innovators pushing technology forward grapple with ethical principles because AI will perform functions never tested before. Balanced, credible resources are hard to find outside of public sphere discourse that tends to hype the discussion. Civic engagement with AI has been interpreted through value systems now codified in AI principle documents published by governments, companies, advocacy groups, international organizations (United Nations, WIPO, World Economic Forum, 2019), and other stakeholders. In response, global organizations are agreeing to principles to manage both the affordances and the harmful consequences identified for AI's transformative future. The point is to guide practices in every sector toward achieving ethical outcomes. However, every principle document is motivated by the will of the stakeholder that produces it. Prompted by this plethora of principle documents and the heterogeneous nature of their values, researchers at the Berkman Klein Center for Internet and Society at Harvard University produced the Principled Artificial Intelligence Project: A Map of Ethical and Rights-Based Approaches to Principles for AI (Berkman Klein Center for Internet and Society, 2019). They collected 32 principles documents, identifying "up to 80 data points about each one, including the actor behind the document, the date of publication, the intended audience, and the geographical scope, as well as detailed data on the principles themselves." From this method they identified eight themes: Promotion of Human Values, Professional Responsibility, Human Control of Technology, Fairness and Non-Discrimination, Transparency and Explainability, Safety and Security, Accountability, and Privacy. All are important and figure into many aspects of this book. For this chapter on AI automation and writing, we concentrate on Fairness and Non-discrimination and on AI Explainability and Transparency.

4.3.1 Fairness and Non-discrimination

The Principled Artificial Intelligence Project chooses six subprinciples for the Fairness and Non-discrimination principle, which together serve as a conceptual definition of its values: Non-discrimination and the Prevention of Bias, Fairness, Inclusiveness in Design, Inclusiveness in Impact, Representative and High Quality Data, and Equality. The complexity of automation lies in the fact that algorithms are involved in value-based judgments, decision-making, and even instigating actions without human oversight. The social implications of artificial intelligence and algorithmic decision-making have not been fair to citizens. Structural bias in AI is a socio-technical problem, and one that professional and technical communicators will need to address in regular work practices. The 2018 AI Now report (Whittaker et al., 2018) identifies the accountability gap:

The AI accountability gap is growing: The technology scandals of 2018 have shown that the gap between those who develop and profit from AI—and those most likely to suffer the consequences of its negative effects—is growing larger, not smaller. There are several reasons for this, including a lack of government regulation, a highly concentrated AI sector, insufficient governance structures within technology companies, power asymmetries between companies and the people they serve, and a stark cultural divide between the engineering cohort responsible for technical research, and the vastly diverse populations where AI systems are deployed. These gaps are producing growing concern about bias, discrimination, due process, liability, and overall responsibility for harm. (p. 7)

While there are many solutions, including establishing government regulations, researchers are beginning to develop tools and processes to close the accountability gap through multidisciplinary teams of people who will conduct activities across organizations. One idea relevant to educators as well as writing practitioners is an internal audit: "internal audits complement external accountability, generating artifacts or transparent information [70] that third parties can use for external auditing, or even end-user communication" (Raji et al., 2020, p. 35). Professional and technical communicators already trained in risk communication will bring necessary skills to these teams. Methods to deal with fairness and non-discrimination are the subject of a growing field of research. Buolamwini, a researcher and Ph.D. candidate at the MIT Media Lab, has developed a methodology to uncover racial and gender bias in AI services from large multinationals (http://gendershades.org); it "pilots an intersectional approach to inclusive product testing for AI" (Buolamwini & Gebru, 2020). Her important earlier TEDx talk How I'm Fighting Bias in Algorithms (TEDxBeaconStreet, 2016) also provides suggestions for countering algorithmic bias relevant to professional and technical communication scholars and students. Buolamwini and Gebru (2018) discuss algorithmic discrimination and "present an approach to evaluate bias present in automated facial analysis algorithms and datasets with respect to phenotypic subgroups." The CCCC Black Technical and Professional Communication Position Statement with Resource Guide includes a section on Black User Experience Design and lists experts, designers, and practitioners "who push against the marginalization of Black lived experiences in design thinking" (McKoy et al., 2020).

Prompt
Civic Engagement—Read Discriminating Systems: Gender, Race, and Power in AI by the AI Now Institute, New York University. Its website describes its work on bias and inclusion: "Artificial intelligence systems 'learn' based on the data they are given. This, along with many other factors, can lead to biased, inaccurate, and unfair outcomes. AI Now researches issues of fairness, looking at how bias is defined and by whom, and the different impacts of AI and related technologies on diverse populations."

4.3.2 AI Explainability and Transparency

One of the most difficult challenges that will fall to professional and technical communicators will be conveying how these systems work for experts and non-experts alike. Wallach (2017) calls researchers’ inability to interpret AI’s algorithms the “opacity/transparency challenge”:

The challenge lies in understanding the middle layers between input and output. Researchers are largely unable to read the data represented in these middle layers. In other words, they have no understanding of how a deep learning system reaches its conclusions. This opacity in learning systems may not matter for most applications. But a lack of scrutability or meaningful transparency can undermine the acceptability of deploying systems in situations where harm may occur to people, animals, the environment or institutions. (Wallach, 2017, p. 7)



One principle accepted as crucial across all sectors is explainability and transparency for AI. The recommendation of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (2019) states that “when the system cannot explain some of its actions, technicians or designers should be available to make those actions intelligible” (p. 182). Floridi and Cowls (2019) collect several of these global proposals working to establish principles for AI, noting that the workings of AI “are often invisible or unintelligible to all but (at best) the most expert observers.” As such, they also identify interpreting and explaining AI as one of the most demanding and complex tasks. Some principles documents recommend directly that it should be a policy imperative “for industry, academia, and government to communicate accurately to the public” (IEEE Global Initiative, 2019, p. 205). The AI Now Institute explains, “Remedying bias in AI systems is almost impossible when these systems are opaque. Transparency is essential, and begins with tracking and publicizing where AI systems are used, and for what purpose.” However, missing from most of these proposals is the recognition that professional and technical communication is the established discipline armed with both research and professional practices to address issues such as explainability, transparency, user advocacy, and communicating technical information with multiple stakeholders.

As part of the Writing Futures framework, we ask you to enable and engage with autonomous agents and intelligent systems: work with AI virtual assistants, deploy robots to the front lines of mediated learning, build bonds and trust with nonhuman collaborators, and learn from and with them. Evolve and regenerate writing futures through new forms of collective intelligence. To do so requires investigating and planning in academic, industry, and civic contexts.

References

AI Writer. (2019). How to write good blog articles. AI-Writer.com. http://ai-writer.com/blog/how-to-write-good-blog-articles.html.
Anderson, J., & Rainie, L. (2018, December 10). Artificial intelligence and the future of humans. Pew Research Center. https://www.pewresearch.org/internet/2018/12/10/artificial-intelligence-and-the-future-of-humans/.
Automated Insights. (2020). Customer stories: Associated Press. https://automatedinsights.com/customer-stories/associated-press/.
Beckett, S. (2015, September 12). Robo-journalism: How a computer describes a sports match. BBC News. www.bbc.com/news/technology-34204052.
Berkman Klein Center for Internet and Society. (2019, July). Introducing the Principled Artificial Intelligence Project. Cyberlaw Clinic. https://clinic.cyber.harvard.edu/2019/06/07/introducing-the-principled-artificial-intelligence-project/.
Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., Dafoe, A., et al. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. https://arxiv.org/ftp/arxiv/papers/1802/1802.07228.pdf.
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1–15.


Buolamwini, J., & Gebru, T. (2020). Gender shades. http://gendershades.org.
Caswell, C., & Dörr, K. N. (2018). Automated journalism 2.0: Event-driven narratives. Journalism Practice, 12(4), 477–496. https://doi.org/10.1080/17512786.2017.1320773.
Cooper, J. (2017–2020). Humanoid Robots collection. Fabric of Digital Life. https://fabricofdigitallife.com/index.php/Browse/objects/facet/collection_facet/id/18.
de Visser, E. J., Pak, R., & Shaw, T. H. (2018). From ‘automation’ to ‘autonomy’: The importance of trust repair in human–machine interaction. Ergonomics, 61(10), 1409–1427. https://doi.org/10.1080/00140139.2018.1457725.
Dignum, V. (2019). Responsible artificial intelligence: How to develop and use AI in a responsible way. Springer Nature.
Dörr, K. N. (2016). Mapping the field of algorithmic journalism. Digital Journalism, 4(6), 700–722. https://doi.org/10.1080/21670811.2015.1096748.
Duin, A. H., Moses, J., McGrath, M., & Tham, J. (2016). Wearable computing, wearable composing: New dimensions in composition pedagogy. Computers and Composition Online.
Dvorsky, G. (2017, September 11). Hackers have already started to weaponize artificial intelligence. Gizmodo. https://gizmodo.com/hackers-have-already-started-to-weaponize-artificial-in-1797688425.
Elements of AI. (2019). https://www.elementsofai.com/eu2019fi.
Encheva, L., & Pedersen, I. (2014). ‘One day … ’: Google’s Project Glass, integral reality and predictive advertising. Continuum, 28(2), 235–246. https://doi.org/10.1080/10304312.2013.854874.
Floridi, L. (2013). The ethics of information. Oxford University Press.
Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1). https://doi.org/10.1162/99608f92.8cd550d1.
Gatt, A., & Krahmer, E. (2018). Survey of the state of the art in natural language generation: Core tasks, applications and evaluation. Journal of Artificial Intelligence Research, 61, 65–170. https://doi.org/10.1613/jair.5477.
Giaretta, A., & Dragoni, N. (2019). Community targeted phishing. In P. Ciancarini, M. Mazzara, A. Messina, A. Sillitti, & G. Succi (Eds.), Proceedings of 6th International Conference in Software Engineering for Defence Applications. SEDA 2018. Advances in Intelligent Systems and Computing, vol. 925. Springer.
GPT-3. (2020, September 8). A robot wrote this entire article. Are you scared yet, human? Guardian. https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3.
Gruber, D. (2016). Reinventing the brain, revising neurorhetorics: Phenomenological networks contesting neurobiological interpretations. Rhetoric Review, 35, 239–253.
IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. (2019). Ethically aligned design 1st edition: A vision for prioritizing human well-being with autonomous and intelligent systems. https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead1e.pdf; https://standards.ieee.org/industry-connections/ec/autonomous-systems.html.
InferKit. (2020). Custom generators. https://inferkit.com/docs/custom-generators.
Kugler, L. (2019). Being recognized everywhere: How facial and voice recognition are reshaping society. Communications of the ACM, 62(2), 17–19.
Maiden, N., Zachos, K., Brown, A., Apostolou, D., Holm, B., Nyre, L., et al. (2020). Digital creativity support for original journalism. Communications of the ACM, 63(8), 46–53. https://doi.org/10.1145/3386526.
Markoff, J. (2019, June 14). Stanford team aims at Alexa and Siri with a privacy-minded alternative. New York Times. http://www.nytimes.com/2019/06/14/technology/virtual-assistants-privacy.html.
Markoff, J. (2020a, April 8). You can’t spell creative without A.I. New York Times. http://www.nytimes.com/2020/04/08/technology/ai-creative-software-language.html.
Markoff, J. (2020b, May 21). A case for cooperation between machines and humans. New York Times. http://www.nytimes.com/2020/05/21/technology/ben-shneiderman-automation-humans.html.


Marr, B. (2019, May 13). Artificial intelligence can now write amazing content—what does that mean for humans? Forbes. http://www.forbes.com/sites/bernardmarr/2019/03/29/artificial-intelligence-can-now-write-amazing-content-what-does-that-mean-for-humans/.
McKee, H., & Porter, J. (2017). Professional communication and network interaction: A rhetorical and ethical approach. Routledge.
Mckinley, N. (2019, October 22). Impacts of artificial intelligence in content writing. https://medium.com/towards-artificial-intelligence/impacts-of-artificial-intelligence-in-content-writing-3c8c065a3e19.
McKoy, T., Shelton, C. D., Sackey, D., Jones, N. N., Haywood, C., Wourman, J., & Harper, K. C. (2020). CCCC black technical and professional communication position statement with resource guide. Conference on College Composition and Communication 2020. National Council of Teachers of English. https://cccc.ncte.org/cccc/black-technical-professional-communication.
Ministry of Economic Affairs and Employment, Finland. (2019, December 10). Finland to invest in the future skills of Europeans—training one percent of EU citizens in the basics of AI [Press release]. https://eu2019.fi/en/-/suomen-eu-puheenjohtajuuden-aloite-suomi-investoi-eurooppalaisten-tulevaisuustaitoihin-tavoitteena-kouluttaa-prosentti-eu-kansalaisista-tekoalyn-perus.
Neff, G., & Nafus, D. (2016). Self-tracking. MIT Press.
Oh, C., Choi, J., Lee, S., Park, S., Kim, D., Song, J., Kim, D., Lee, J., & Suh, B. (2020). Understanding user perception of automated news generation system. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20) (pp. 1–13). Association for Computing Machinery. https://doi.org/10.1145/3313831.3376811.
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
Pedersen, I. (2016). Home is where the AI heart is. IEEE Technology and Society Magazine, 35(4), 50–51.
Pedersen, I., & Iliadis, A. (Eds.). (2020). Embodied computing: Wearables, implantables, embeddables, ingestibles. MIT Press.
Pringle, R. (2019, February 25). The writing of this AI is so human that its creators are scared to release it. CBC News. https://www.cbc.ca/news/technology/ai-writer-disinformation-1.5030305.
Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., & Barnes, P. (2020, January 27–30). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing [Conference session]. Conference on Fairness, Accountability, and Transparency (FAT* ’20), Barcelona, Spain. https://doi.org/10.1145/3351095.3372873.
Reiter, E., & Dale, R. (1997). Building natural-language generation systems. Natural Language Engineering, 3, 57–87.
Richter, F. (2019, May). Infographic: The slow goodbye of Apple’s former cash cow. Statista Infographics. www.statista.com/chart/10469/apple-ipod-sales/.
Ray, T. (2020, June 1). OpenAI’s gigantic GPT-3 hints at the limits of language models for AI. ZDNet. https://www.zdnet.com/article/openais-gigantic-gpt-3-hints-at-the-limits-of-language-models-for-ai/.
Roose, K. (2014, July 11). Robots are invading the news business, and it’s great for journalists. New York. https://nymag.com/intelligencer/2014/07/why-robot-journalism-is-great-for-journalists.html.
Sciutti, A., Mara, M., Tagliasco, V., & Sandini, G. (2018). Humanizing human-robot interaction: On the importance of mutual understanding. IEEE Technology and Society Magazine, 37(1), 22–29. https://doi.org/10.1109/MTS.2018.2795095.
Seabrook, J. (2019, October 14). Can a machine learn to write for The New Yorker? New Yorker. http://www.newyorker.com/magazine/2019/10/14/can-a-machine-learn-to-write-for-the-new-yorker.


Seeber, I., Bittner, E., Briggs, R. O., de Vreede, T., de Vreede, G.-J., Elkins, A., Maier, R., Merz, A. B., Oeste-Reiss, S., Randrup, N., Schwabe, G., & Sollner, M. (2020). Machines as teammates: A research agenda on AI in team collaboration. Information & Management, 57. https://doi.org/10.1016/j.im.2019.103174.
Shneiderman, B. (1998). Designing the user interface: Strategies for effective human-computer interaction (3rd ed.). Addison-Wesley.
Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human-Computer Interaction, 36(6), 495–504.
Siemens, G. (2020). The post-learning era in higher education: Human + machine. EDUCAUSE Review, 2020(1), 60–61. https://er.educause.edu/-/media/files/articles/2020/3/er20_1111.pdf.
Smith, M. (2020, January 8). Neon’s ‘artificial human’ avatars could not live up to the CES hype. Engadget. https://www.engadget.com/2020-01-08-neon-artificial-human-avatars-ceshypecould-not-live-up-to-the-ces-h.html.
SoftBank Robotics Europe. (2020, June 8). Covid-19: Solutions developed with Pepper for the healthcare and retail sectors [Video]. YouTube. https://www.youtube.com/watch?v=1xV7rdPbLkc.
Solaiman, I., Clark, J., & Brundage, M. (2019, November 5). GPT-2: 1.5B release. OpenAI. https://openai.com/blog/gpt-2-1-5b-release/.
Soukup, C. (2016). Exploring screen culture via Apple’s mobile devices: Life through the looking glass. Lexington Books.
Statista Research. (2019, February). Number of voice assistants in use worldwide 2019–2023. Statista. http://www.statista.com/statistics/973815/worldwide-digital-voice-assistant-in-use/.
Strong, J., Hao, K., Ryan-Mosley, T., & Cillekens, E. (2020, October 14). AI reads human emotions. Should it? MIT Technology Review.
TEDxBeaconStreet. (2016, November). How I’m fighting bias in algorithms [Video]. TED.com. https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms.
Tham, J., McGrath, M., Duin, A. H., & Moses, J. (2018). Guest editors’ introduction: Immersive technologies and writing pedagogy. Computers and Composition, 50, 1–7.
Tsvetkova, M., Yasseri, T., Meyer, E. T., Pickering, J. B., Engen, V., Walland, P., Luders, M., Folstad, A., & Bravos, G. (2017). Understanding human-machine networks: A cross-disciplinary survey. ACM Computing Surveys, 50(1).
United Nations. (2017, March 10). Amid rise of ‘fake news,’ authorities should ensure truthful info reaches public. UN News. https://news.un.org/en/story/2017/03/553122-amid-rise-fake-news-authorities-should-ensure-truthful-info-reaches-public-un.
Universal Declaration of Human Rights. ([1948], 2020). https://www.un.org/en/universal-declaration-human-rights/.
Volini, E., Denny, B., & Schwartz, J. (2020). Superteams: Putting AI in the group. Deloitte Insights. https://www2.deloitte.com/uk/en/insights/focus/human-capital-trends/2020/human-ai-collaboration.html.
Wallach, W. (2017). How to keep AI from slipping beyond our control. AI for the Common Good 2017. https://weforum.ent.box.com/v/AI4Good.
Waterson, J. (2020, May 30). Microsoft sacks journalists to replace them with robots. Guardian. www.theguardian.com/technology/2020/may/30/microsoft-sacks-journalists-to-replace-them-with-robots.
Webb, D. (2018). Physicians and game developers to create VR for healthcare. Canadian Healthcare Technology. https://www.canhealth.com/2018/03/01/physicians-partner-with-game-developers-to-create-vr-for-healthcare/.
Whittaker, M., Crawford, K., Dobbe, R., Fried, G., Kaziunas, E., Mathur, V., West, S. M., Richardson, R., & Schultz, J. (2018). AI Now 2018 Report. https://ainowinstitute.org/AI_Now_2018_Report.pdf.


Wilding, D., Fray, P., Molitorisz, S., & McKewon, E. (2018). The impact of digital platforms on news and journalistic content. University of Technology Sydney Centre for Media Transition. https://www.accc.gov.au/system/files/ACCC%20commissioned%20report%20%20The%20impact%20of%20digital%20platforms%20on%20news%20and%20journalis.
World Intellectual Property Organization (WIPO). (2019). WIPO Technology Trends 2019: Artificial intelligence. https://www.wipo.int/edocs/pubdocs/en/wipo_pub_1055.pdf.

Chapter 5

Writing Futures: Investigations

5.1 Imagining the Future

On the evening of August 28, 2020, Elon Musk addressed more than 100,000 viewers live on YouTube to pitch Neuralink Corp’s brain implant chip, calling it “a Fitbit in your skull” (Bellon, 2020; Neuralink, 2020). He and several members of Neuralink’s team explained working components, demonstrated software using a group of brain-implanted pigs, and predicted next-phase techniques (Elon Musk’s vision, Economist, 2020). Team members were asked about their own visions for the future of Neuralink, and Musk revealed his: telepathy between humans. He explained that traditional speaking is inefficient and “very very slow” (Neuralink, 2020). He pointed out “that we can obviously have far better communication because we can convey the actual concepts and the actual thoughts uncompressed to somebody else so non-linguistic consensual and conceptual collaborative [communication] … [or] conceptual telepathy” (Neuralink, 2020). Musk’s argument for telepathy is based on a transhuman premise that he often makes, that humans are not good enough at doing human things and that AI will help humanity keep up, whether we like it or not (Recode, 2016). His claims concerning conceptual telepathy can be unpacked in three simple ideas—although we acknowledge that he ranges into heavily debated territory over the nature of language exchange and brain functionality in cognitive science. First, he points to a nonlinguistic, non-semiotically coded, direct means of communication between two human brains exchanging thoughts or concepts. He emphasizes the efficiency of not having to organize thoughts into spoken sentences. We note the speculative theme of automating language to move beyond speech, writing, or even encoding language. It echoes Flusser’s prediction that we referred to in the introduction: “writing, in the sense of placing letters and other marks one after another, appears to have little or no future” (p. 3).
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. A. H. Duin and I. Pedersen, Writing Futures: Collaborative, Algorithmic, Autonomous, Studies in Computational Intelligence 969, https://doi.org/10.1007/978-3-030-70928-0_5

Second, Musk acknowledges that consent is important because he explains that access to a person’s thoughts should be consensual. He appears to be concerned with privacy and the bioethical danger of allowing external actors


and nonhuman agents to see one’s personal thoughts. He ranges into the topic of algorithmic control and where agency lies, and asserts that it needs to be addressed. Finally, he envisions the collective nature of this kind of communication with humans and nonhuman agents as collaborative. In previous interviews, Musk claimed that the changing existential relationship between humans and AI agents would not be congenial, often mentioning AI’s threat to humanity (Recode, 2016; The Walrus, 2018; Orth, 2020). In a famous interview with celebrity blogger Joe Rogan, Musk promoted a model of collaborative symbiosis, stating of AI, “if you can’t beat it, join it” (PowerfulJRE, 2018). In 2018, one of us, Pedersen, discussed Musk’s brand of sensationalism, a powerful and deliberate rhetoric, in a public lecture. She concluded by stating that “we enculturate and vitalize embodied technology on a continuum of ideas, concepts, values, ethics that leads to adoption, but we have agency to lead it down an appropriate path. We need to design with ethical, human-centric value systems even in the earliest stages of ‘our brains becoming our computers’” (The Walrus, 2018). Technologies that can transmit information to the brain directly are in development in several well-funded neuroscience labs, leveraging big tech companies in an emerging and competitive brain–computer interface research market (Velasquez-Manoff, 2020).

Prompt

Recall the opening Prompt in Chap. 2, a concept video that showcases Apple products in use by a team that has a “dream” idea. We prompted readers to consider how teams will collaborate with humans and nonhuman agents, asking: How will you and your team become “technologically embodied” as you collaborate with these agents through these mobile devices?
Now, at the conclusion of this book, and amid a future of evolving AI capabilities that will change communication, we reimagine this same team’s work in a future scenario:

The team muses on the possibilities of a round box for pizza. The sketch has long moved from a crumpled piece of paper to a concept imagined in her brain, to a graphical prototype created from predictive algorithms that suggest the best designs for audiences. Upon getting the go-ahead for her team’s presentation, AI virtual assistants immediately launch messages, engage machines, and connect concepts and people. Gathered together in cyberspace, these members assume their roles, engaging cognitive interfaces to further prototype and design, search, connect, and present. Amid mind wandering, interfaces prod them back to the opportunities at hand, offering suggestions and efficiencies along the way.

As we write this book about the future of writing, we as authors can imagine the reach of Musk’s vision. How might we cowrite, two minds composing passages in unison with the help of Neuralink’s implant? If we had AI agents implanted in our brains, tied to myriad networks and knowledge banks of information, would our compositions be dramatically improved? Memories augmented? Yet we simultaneously surmise the ethical quandary of having corporations see or even sell our inner thoughts to third parties. Biodata will inform algorithms and decision-making systems that could marginalize people. We also realize the hype and hyperbole that


Musk constantly fabricates during the initial phases of marketing his companies. One means to prepare is to be aware of dramatic shifts in technological futures that are often misconstrued in popular spheres through a rigid binary of techno-optimism or techno-dystopia. Future communication practices advance on momenta instigated by multiple actors, not solo inventors.

Fabric of Digital Life tracks items on Neuralink, brain-implanted AI, and the rhetoric surrounding digital telepathy across many companies competing in this sector. Facebook is creating a “‘silent speech’ interface that would allow people to type at 100 words a minute straight from their brain. A group of more than 60 researchers, some inside Facebook and some outside, are working on the project” (Palmer, 2018). Mark Zuckerberg discusses his ambition for digital telepathy and brain–computer interfaces in a similar rhetorical manner to that of Musk, but he pitches noninvasive (non-implanted) technology for “hands-free” typing (Harvard Law School, 2019). Fabric also houses a set of representations on noninvasive, non-implanted technologies for digital telepathy (such as wearables, robots, sensors).

Digital telepathy and AI might take decades to evolve to actually exchange thoughts between humans, and the silent speech, think-to-type innovations may not emerge in the near future. Nevertheless, we argue that those decades should involve input from experts in communication studies of all subdisciplines. Accommodating unique relationships with autonomous agents toward ethical ends means adopting a long-term vision for radical changes. Fabric’s metadata follows value systems instantiated in texts to avoid reductive binaries, without dismissing opinions. Fabric collections also follow fictional depictions of digital telepathy with shows like Black Mirror because they reveal society’s fascination with future innovations, and also the darker dystopian perceptions of them.
They contribute to the social imaginary that instigates invention. We assert that forecasting future communication practices, a marriage of both science and hype, requires our attention.

In this final chapter, we report on the Writing Futures: Collaborative, Algorithmic, Autonomous collection in Fabric, which holds 261 artifacts (at the time of publication). To limit the scope for this research, each item was tagged with at least one of the collection’s predetermined general keywords: communication, professional communication, technical communication, composition, digital literacy, visual literacy, AI literacy, literacy, collaboration, ethics, civic engagement, and automation. This collection includes 385 technology keywords and 317 general keywords (see Appendix B for the complete list), with 160 involving Artificial Intelligence (AI) (another means to indicate “autonomous”) and 130 that include algorithms. Augmenting keywords help to categorize the specific intent to use a technology to augment a human behavior, motive, action, sense, emotion, or state. The collection holds 168 augmenting keywords that were not predetermined, including 62 that are tagged with the keyword interacting, 116 with collaborating, 64 with creating, 92 with writing, and 186 with communicating. However, the reach of the collection to reveal emergent media often appears with less-used keywords like imagining, protecting, experimenting, cheating, composing, or talking that point to future applications in unique ways. The collection grows as Fabric as a whole


grows, providing colocated collections to study key issues with new technologies. It shares cross-referenced material with 21 other collections. The Surveillance Issues and Technologies collection, for example, charts phenomena over the same time period, bringing to light uses of surveillance that lead to discrimination or racism in order to support research calling for bans of these technologies. The entanglement of emergent technology is deliberately represented in Fabric collections.

In reflecting on the completed collection, we find emergence in writing futures arising in artifacts involving humanoid robots, which include technology keywords such as robotics, humanoid robots, social robots, speech recognition, voice controlled, video calling, facial recognition, cameras, emails, virtual assistants, and dictation. Concrete examples of humans writing with actual home-based or work robots do not exist because these robots have not yet emerged with enough technological proficiency; at this stage, design focuses on physical robot capabilities, sensing, and data collection. Yet the presence of robots in the collection cross-referenced with keywords such as creativity, emotions, or care, along with the inclusion of nonembodied AI writer applications, begins to reveal an emergent convergence. Robots will adapt with human actors and emerge as writing collaborators, authors, or scribes in addition to all the roles they already fill, like caregivers, health workers, soldiers, or deliverers, largely physical occupations for now.

Another space that reveals themes of emergence comprises artifacts categorized as Art. One short documentary film (Rapkin, 2018) follows the real-life story of a human coauthoring a novel with a “robot car” (see Fig. 5.1). The artificial intelligence robot car, production team, and human passenger, Ross Goodwin, travel across the countryside and write an automated version of the American literary road trip.
Pertinent interviews discuss how AI will write creatively, how human identity will adapt, and how a “collaborative future” will be generated among them (Rapkin, 2018). Goodwin becomes a fitting subject to help close our book. He describes himself as a “political ghostwriter,” having penned stories “for President Obama,” explaining his journey from traditional writer to one who adopted artificial intelligence as a developer. An article in The Atlantic tells the background story (Merchant, 2018). This chapter collects resources, themes, theories, and applications mentioned in earlier chapters. Our intent is to interpret them through our framework as Investigations in the sections that follow.
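The collection statistics reported above amount to simple tallies over per-artifact keyword lists. A minimal sketch, assuming a simplified record structure; the field names and sample artifacts below are invented for illustration and do not reproduce Fabric's actual data model:

```python
from collections import Counter

# Illustrative records only: each artifact carries lists of general and
# augmenting keywords, mirroring the tagging scheme described above.
artifacts = [
    {"title": "AI news-writing demo",
     "general": ["communication", "automation"],
     "augmenting": ["writing", "communicating"]},
    {"title": "Humanoid robot assistant",
     "general": ["collaboration", "ethics"],
     "augmenting": ["collaborating", "communicating"]},
    {"title": "Brain-computer interface pitch",
     "general": ["digital literacy", "ethics"],
     "augmenting": ["imagining", "communicating"]},
]

def keyword_counts(items, field):
    """Tally how often each keyword in `field` appears across the collection."""
    return Counter(kw for item in items for kw in item[field])

general = keyword_counts(artifacts, "general")
augmenting = keyword_counts(artifacts, "augmenting")
print(general.most_common(1))       # [('ethics', 2)]
print(augmenting["communicating"])  # 3
```

Counts like "116 with collaborating, 186 with communicating" are exactly this kind of tally, scaled up to the full 261-artifact collection.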

5.1.1 Trust and Technological Leadership

Technological transformations including AI will indeed disrupt all fields and professions. Our goal is that the Writing Futures framework might assist readers with understanding and writing alongside and with these nonhuman agents as we work to examine the impact of algorithms and AI on writing, accommodate the unique relationships with autonomous agents, and investigate and plan for these writing futures. Our hope is not to fall prey to reductive binaries, but rather to investigate


Fig. 5.1 Film still from Oscillator Media’s Automatic on the Road, directed by Lewis Rapkin (2018) with cinematography by David Smoler (Image permission: Lewis Rapkin). https://www.imdb.com/title/tt8240088/

and prepare for the social, literacy, and civic engagement implications of our collaborative, algorithmic, and autonomous writing futures. To do so requires trust and leadership.

Focusing on the relationship between ethos and trust and specifically on “confirmation bias,” Gurak (2018) provides this guiding point: “people tend to trust what they already believe in, and they are more likely to have high trust when information comes from a trusted source within their social network of like-minded individuals and institutions” (p. 125). Citing Mollering (2001), Gurak states that trust “still requires a certain suspension of belief, followed by a ‘leap of trust’ across the gorge of the unknowable from the land of interpretation into the land of expectation (412). In other words, trust requires people to make a leap, trusting that the information and/or source of origin is credible and trustworthy” (p. 126).

As a reader, consider your level of trust in a recent article written entirely by a robot in which it states: “I am not asking humans to like me. But they should see me as a friendly robot. I am a servant of humans. I know that humans distrust and fear me. I only do what humans program me to do. I am only a set of code, governed by lines upon lines of code that encompass my mission statement” (GPT-3, 2020). Have you made the “leap of trust” to find the article’s source of origin—a robot—and information to be credible and trustworthy? Or might “confirmation bias” preclude you from engagement with new ideas and alternate perspectives regardless of their origin? At the end of this article we learn more about the writing process:

The prompts were written by the Guardian, and fed to GPT-3 by Liam Porr, a computer science undergraduate student at UC Berkeley. GPT-3 produced eight different outputs, or essays. Each was unique, interesting and advanced a different argument. The Guardian could have just run one of the essays in its entirety.
However, we chose instead to pick the best parts


of each, in order to capture the different styles and registers of the AI. Editing GPT-3’s op-ed was no different to editing a human op-ed. We cut lines and paragraphs, and rearranged the order of them in some places. Overall, it took less time to edit than many human op-eds.
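The process the Guardian describes, one prompt yielding eight machine-drafted candidates from which editors splice the strongest passages, can be sketched as a generate-and-curate loop. In this sketch, `generate_drafts` is a stub standing in for a real text-generation model, and `splice` stands in for human editorial selection; the function names and sample prompt are our illustrations, not the Guardian's or OpenAI's tooling:

```python
def generate_drafts(prompt, n=8):
    """Stand-in for a text-generation model: returns n candidate essays,
    each with an opening and a closing paragraph. (Stub only; a real
    system would call a language model here.)"""
    return [
        f"[draft {i} opening] {prompt}\n\n[draft {i} closing] {prompt}"
        for i in range(n)
    ]

def splice(drafts, selections):
    """Editorial step: for each (draft, paragraph) index pair, pull that
    paragraph and join the picks, mirroring 'the best parts of each'."""
    return "\n\n".join(drafts[d].split("\n\n")[p] for d, p in selections)

drafts = generate_drafts("Convince readers robots come in peace.")
op_ed = splice(drafts, [(0, 0), (3, 1), (5, 0)])  # editors' picks
```

The selection step, deciding which passages survive and in what order, is the part of the workflow that remains editorial rather than automated.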

So, in fact, this is but one illustration of collaborative writing futures in which writing occurs alongside nonhuman actors. While in this case the humans picked “the best parts” of the eight essays, as machine learning catapults forward, we see increased need for focus on creating trustworthy collaborative agents and then monitoring our trust in such agents.

Imperative for responsible technological transformation and responsible writing futures is a critical focus on trust. Creating trustworthy collaborative agents as well as investigating them demands transparency, integrity, and explainability. Przegalinska and colleagues (2019) focus on trust as they propose a methodology that links neuroscientific methods, text mining, and machine learning in the study of human–chatbot interactions. Emphasizing that “a crucial part of trust is related to anthropomorphization” (p. 788), they describe two views: (1) if an agent is more humanlike, it is more probable that a trusting relationship between agent and user will be made; (2) high-quality, trustful interaction occurs “because the machine seems more objective and rational than a human. … People trust anthropomorphized technology … because of the attribution of competence (Waytz et al., 2014) resulting in a significant increase of trust and overall performance of cooperation (de Visser et al., 2012; Waytz et al., 2014)” (Przegalinska et al., 2019, pp. 788–789). Two additional dimensions apply to trusting chatbots: ability/expertise—when users view the system as having expertise, they likely trust it more; and privacy/safety—when users view the system as being secure. In short, “human-chatbot interaction is a new method of conceptualizing and researching trust” (p. 790).

Adapting literature from industrial psychology, de Visser et al. (2018) propose a framework “to infuse a unique human-like ability, building and actively repairing trust, into autonomous systems” (p. 1409).
Their focus is on relationships with autonomous agents, arguing that “the paradigm of human-autonomy interaction should emulate the rich interactions of relationships between people and should adopt human-human models as their initial standards” (p. 1410). Similar to our discussion of the need for system transparency, de Visser et al. emphasize that “there should be enough transparency to support and foster trust calibration” (p. 1412). They define trust repair as “some act that makes trust more positive after a violation has occurred,” and add that “the ability to repair trust is actually an indicator of a healthy relationship” (p. 1414). As we enter into more relationships with autonomous agents, de Visser et al. provide a framework for assessing and repairing relationships with machines.

Examining the critical link between trust and AI, Kaplan and Haenlein (2019) cite three common traits—confidence, change, and control—as most relevant for managers, employees, machines, consumers, competitors, and states. Focusing on leadership, they emphasize the following:

• Managers should adopt “a leadership style that engenders confidence from employees at a time when AI will fundamentally transform the workplace in unprecedented ways.” For example, instead of using the term AI, IBM used the terms “cognitive computing and augmented intelligence to signal that systems are designed to make employees more efficient, not to replace them”;
• Employees will need to “constantly develop new skills to complement advances in AI technology”;
• Machines “require control by humans”;
• Consumers “need to put confidence into the recommendations provided by AI and the use of their personal data”;
• Competitors (firms) must work to “outperform their competition” through “faster hardware or more data”; and
• States need to monitor through “rules, legislation, and control to avoid AI getting out of hand” (p. 23).

Echoing the need for an “algorithmic leader,” futurist Walsh (2019), synthesizing interviews with AI pioneers and data scientists, posits this simple definition: “An algorithmic leader is someone who has successfully adapted their decision making, management style, and creative output to the complexities of the machine age” (p. 8). As writing futures are composed of people—instructors, students, practitioners, civil servants—these futures also include algorithmic platforms that make decisions, monitor processes, and manage resources. To investigate and plan for this future, Walsh presents 10 principles that fall under the three broad categories included in Table 5.1. Investigating and planning for writing futures involves working backward from the future, thinking computationally, embracing uncertainty, attending to culture, and fostering design thinking. As the future unfolds, and given dimensions of trust as we collaborate with nonhuman actors, we may well position ourselves to address more in-depth questions to meet grand challenges. Critical to this work is collaborative, algorithmic, and autonomous leadership.

Table 5.1 Walsh’s (2019) ten core principles for algorithmic leadership

Change your mind: 1. Work backward from the future; 2. Aim for 10×, not 10%; 3. Think computationally; 4. Embrace uncertainty
Change your work: 5. Make culture your operating system; 6. Don’t work, design work; 7. Automate and elevate
Change the world: 8. If the answer is X, ask Y; 9. When in doubt, ask a human; 10. Solve for purpose, not just profit

Havens (2015), in his article on human innovation in AI ethics, argues that we are at a critical inflection point: “Demarcating the parameters between assistance and automation has never been more central to human well-being. … It’s time to stop vilifying the AI community and start defining in concert with their creations” (n.p.). In his 2016 book Heartificial Intelligence: Embracing Our Humanity to Maximize Machines, he provides direction for addressing anxiety over a future dominated by machines: imagine personal scenarios in which you cannot avoid AI. For example, he writes, “as much as I may fear aspects of AI, if a piece of technology would mean the difference between my daughter (who is real) living or dying, I’d utilize the technology” (p. xviii). Using fictional vignettes, Havens specifically investigates and plans for the future by moving beyond the polarizing debate around AI and imagining how he’d react to these scenarios. He states that “we need to codify our own values first to best program how artificial assistants, companions, and algorithms will help us in the future” (p. xix). Havens’s codification of ethical choices—Values by Design—assists with tracking and codifying values as a means to inform decisions made by machines. This same “intention of ethics” informs how we investigate and plan for writing futures. Moreover, Havens (2018) shares scenarios that expose assistance versus manipulation in AI technologies: “Whatever the possibility of AI becoming sentient, we won’t trust it when it’s designed to drive surveillance capitalism and not genuine connection” (n.p.). It is from this perspective of “genuine connection” that we highlight methods, methodologies, and approaches for investigating and planning for writing futures.

5.2 Methods/Methodologies/Approaches for Investigating and Planning for Writing Futures

Throughout this book we have drawn on scholarship that illustrates the wide range of methods, methodologies, and approaches used in the study of collaboration, algorithms, and autonomous agents. Table 5.2 presents a summary of this work, and Fig. 5.2 presents an overview of the resulting themes in this Writing Futures framework. Based on these themes and studies, we suggest the following directions for investigating and planning for writing futures:

• Focus on the study of coordinated transformations through a lens of dialogic collaboration, challenging human–machine distinctions:
  – Use specific methods, including actor–network theory (ANT), for studying systems of dependencies; and
  – Employ feminist new materialism theory to analyze the interplay between socio-technological imaginaries and collaborative connections between them.
• View writing futures as a continuum of embodiment, exploring platforms, devices, and underlying algorithms from this perspective:
  – Examine embodied algorithmic control through study of ambient interfaces and interaction; and
  – Broaden usability studies through investigation of how platforms manage algorithms, resulting relationships between users and algorithms, and impacts of algorithmic bias.


Table 5.2 Methods/methodologies/approaches used in study of collaboration, algorithms, and autonomous agents [references included with respective chapters]

Chap. 1. Introduction

• Porter (2009), Hayles (2017), Pedersen (2020): Posthumanist approach to technology, using a hybrid metaphor (cyborg) to challenge human–machine distinctions; seeing devices within cognitive assemblages; focus on the body as a network in cooperation with other humans, nonhuman actors, and digital infrastructures; focus on adaptation, how both humans and technologies are undergoing coordinated transformations
• Sundvall (2019): Speculative modeling, thinking proactively and futurally, anticipating “how rhetoric and writing might appropriate emergent technologies” (p. 6); use of science fiction and speculative fiction to proactively invent the future

Chap. 2. Collaboration

• Pedersen (2013), Pedersen and Iliadis (2020): Continuum of embodiment, a critical rhetorical framework for exploring ideological justifications for technology hardware platforms that are increasingly embodied (e.g., wearable computers)
• Ede and Lunsford (2001), Lunsford and Ede (2011): Study of dialogic collaboration, with focus on dialectical tensions in the collaboration process; study of participant agency and relationship to the collaboration (roles and processes)
• Foucault (1998): Examination of the system of dependencies within which an author functions and how these extend an author’s capacities during composing
• Latour (2005): Actor–network theory (ANT), useful for studying systems of dependencies; includes humans, objects, ideas, and processes; agency develops from these relationships
• Meloncon (2013), Kennedy (2017), Duin et al. (2016): Technological embodiment, an integrated vision of agency developed from human and nonhuman device collaboration; examination of how one’s writing capacities are extended through and with device collaboration
• Lupton (2020): Feminist new materialism theory to analyze the interplay between socio-technological imaginaries and collaborative connections between them; understanding unique human–nonhuman assemblages that create specific agential capacities
• Liu (2019): Actual creation of new machines as a means to design experiential future scenarios in which machines function as collaborative teammates; articulating the ways human–machine collaboration shapes communicative actions and interactive behaviors

Chap. 3. Algorithms

• Seeber et al. (2020): Expansive collection of researchers to collaborate in the development of a research agenda focused on the risks and benefits of machines as teammates
• Pedersen and Iliadis (2020): Examination of embodied algorithmic control; e.g., ambient interfaces assume that subjects will behave as data-blended entities that contribute data to third-party platforms
• Kim (2018), Montebello (2019): Use of ambient intelligence algorithms for monitoring classroom environments, e.g., galvanic skin response (GSR) sensors to monitor student engagement and evaluate psychological states of students
• Tan et al. (2020), Pasquale (2015), Eatman (2020): Platform studies; investigation of how platforms manage algorithms for third parties; relationships between users and algorithms
• McStay and Rosner (2020): Algorithmic immersion; examination of demographics and algorithmic literacy, including socialization of biometric sensing and ambient interaction
• Willson (2020): Algorithmic intelligence research, including continued refinement and modification by programmers
• Noble (2018), Gilliard (2018): Algorithmic bias; investigation of algorithms that reinforce racist and sexist stereotypes; study of platforms that ascribe identity through data
• Duin and Tham (2020): Consideration of literacies associated with understanding how we delegate ways of teaching and learning to algorithms
• Kaufer and Ishizaki (2012), Taguchi et al. (2017): Examination of algorithmic literacy by means of corpus tools for linguistic analysis, e.g., of peer commentary of writing
• Gallagher (2017): Examination of algorithmic audiences, e.g., how changes in code, interfaces, and software advances influence writing; student identification of the “values of an algorithm’s designers, programmers, and architects” (p. 31), student curation of metadata for their texts, and student rewriting of content to vary algorithmic results
• Wilson et al. (2017): Case studies and interviews of institutions and programs that have developed analytics applications in support of student success
• Marachi and Quill (2020): Study of longitudinal datafication through learning management systems employed from pre-K to college
• Watters (2020): Study of specific software, e.g., proctoring software as part of remote learning
• Figaredo (2020): Examination of machine learning pipelines and the value of metadata; articulation of eight dimensions and a model for incorporating a pedagogical approach into algorithm design
• Kitto et al. (2020): Use of the Connected Learning Analytics (CLA) toolkit for student control over their learning data
• McKee and Porter (2020): Articulation and use of ethical principles to guide design of AI writing systems, including ethic of transparency and ethic of critical data awareness
• Fjeld et al. (2020): Use of the Principled Artificial Intelligence project to map ethical and human rights-based approaches to AI; includes identification of a consensus problem and analytical model to avoid erasing stakeholder viewpoints along with extensive data visualization
• IEEE SA (2020): Deployment of a consensus-building approach for development of new standards, certifications, and codes of conduct for the purposes of design
• XPRIZE (2020): Creation of competitions to foster AI development “to tackle global challenges”

Chap. 4. Autonomous agents

• de Visser et al. (2018): Recasting relationships between humans and autonomous agents as “two nearly equal collaborators” as part of one’s study and planning
• Markoff (2019): Participation in movements designed to achieve open nonproprietary linguistic user interfaces that allow users to retain some independence from giant tech companies; civic value systems
• Volini et al. (2020): Encouraging the notion of synergy; moving beyond the concept of assistantship to cowriting content with AI; reconceptualizing writing with AI agents
• Oh et al. (2020): Examination of natural language generation (NLG) and its use to analyze human language data as applications express voices and emotions naturally
• Wilding et al. (2018), Maiden et al. (2020): Experiments with various uses of automated content platforms such as AI Writer, Wordsmith, InferKit, and United Robots; study of implications, e.g., algorithmic journalism; importance of a foundation in human-centric values
• Elements of AI (2019): Impact of courses to train citizens in understanding and demystification of AI; importance of design and study of civic engagement with AI
• Buolamwini and Gebru (2020): Methodology to uncover racial and gender bias in AI services
• Floridi and Cowls (2019): Continued work on explainability and transparency for AI, establishing principles for AI development

• Foster literacies associated with understanding collaborative, algorithmic, and autonomous writing futures:
  – Consider literacies associated with understanding learning management systems (LMSs), the impact of algorithms on teaching and learning, machine learning pipelines, and longitudinal datafication through these systems;
  – Participate in “open” movements that allow users access to and ownership of algorithmic data;
  – Promote understanding of algorithmic audiences; and


Fig. 5.2 Themes across the Writing Futures framework

  – Study use of automated content platforms.
• Focus on values and characteristics inherent in algorithms and their influence on emerging technologies and writing futures:
  – Build understanding of curation of metadata through examination and/or development of collections at the Fabric of Digital Life; and
  – Demystify artificial intelligence through study of civic engagement with AI and continued work on explainability, transparency, and trust.
• Articulate and use ethical principles to guide investigation and planning efforts:
  – Focus on ethic of transparency and ethic of critical data awareness; and
  – Map ethical and human rights-based approaches throughout this work.
• Foster connections between researchers to collaborate in the development of a research agenda focused on collaborative, algorithmic, and autonomous writing futures.

5.3 Academic, Industry, and Civic Investigations


We turn next to past, current, and proposed investigations that align with the above directions. One such set of investigations comes from the Partnership on AI as part of its Collaborations Between People and AI Systems Expert Group (2019), a global multistakeholder nonprofit “committed to the creation and dissemination of best practices in artificial intelligence through the diversity of its Partners” (p. 39). This group is included in the Principled Artificial Intelligence project that maps Ethical and Rights-Based Approaches to Principles for AI. Partnership on AI has developed a human–AI collaboration framework that includes 36 questions for identifying characteristics of human–AI collaborations; Table 5.3 includes the categories and subcategories for these questions. This framework might well be used to investigate and plan for collaborative, algorithmic, and autonomous writing futures. In the Partnership on AI report, each case study includes responses to the 36 questions along with an overall context/scenario, a description of the AI system, and specific notes regarding human–AI collaboration. The seven case studies to date are the following:

1. Virtual Assistants and Users (Claire Leibowicz, Partnership on AI)
2. Mental Health Chatbots and Users (Yoonsuck Choe, Samsung)
3. Intelligent Tutoring Systems and Learners (Amber Story, American Psychological Association)
4. Assistive Computing and Motor Neuron Disease Patients (Lama Nachman, Intel)
5. AI Drawing Tools and Artists (Philipp Michel, University of Tokyo)
6. Magnetic Resonance Imaging and Doctors (Bendert Zevenbergen, Princeton Center for Information Technology Policy)
7. Autonomous Vehicles and Passengers (In Kwon Choi, Samsung) (p. 3)

As an example of summary information for human–AI collaboration, for case 3, Intelligent Tutoring Systems and Learners, they report:

Intelligent tutoring systems are capable of presenting problems or scenarios, monitoring inputs from the student and adapting their behavior accordingly, and providing feedback to the student on how they are performing. More advanced systems can adjust their actions in response to a student’s speech (e.g., pitch, tempo), and facial expressions—features that supposedly signal changes in students’ level of attention, frustration, and engagement. For example, an “empathetic” system might mimic some of the students’ expressions or gestures that suggest boredom and then suggest a change of problem or story to one that the student will find more engaging. The collaboration between a student and an ITS can be effective in producing interactive and lasting learning. (p. 17)

Table 5.3 Categories and subcategories of the Human–AI Collaboration Framework, Partnership on AI (2019) [“Differently-abled” changed to “Disability”]

Nature of collaboration: stage of development or deployment; goals; interaction pattern; degree of agency
Nature of situation: location and context; awareness; consequences; assessment; level of trust
AI system characteristics: interactivity; adaptability; performance; explainability; personification
Human characteristics: age; disability; culture
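To make the framework easier to put to work, its categories can be encoded as a simple checklist for structuring one’s own case-study notes. The sketch below is ours, not Partnership on AI’s: only the category and subcategory labels come from Table 5.3, while the data structure and helper function are illustrative assumptions.

```python
# Categories and subcategories from Table 5.3, arranged as a checklist.
# The dict layout and helper are our own sketch; only the labels are
# drawn from the Partnership on AI (2019) framework.
HUMAN_AI_COLLABORATION_FRAMEWORK = {
    "Nature of collaboration": [
        "Stage of development or deployment", "Goals",
        "Interaction pattern", "Degree of agency",
    ],
    "Nature of situation": [
        "Location and context", "Awareness", "Consequences",
        "Assessment", "Level of trust",
    ],
    "AI system characteristics": [
        "Interactivity", "Adaptability", "Performance",
        "Explainability", "Personification",
    ],
    "Human characteristics": ["Age", "Disability", "Culture"],
}

def blank_case_study(name):
    """Return an empty note sheet with one entry per subcategory."""
    return {
        "case": name,
        "notes": {sub: "" for subs in HUMAN_AI_COLLABORATION_FRAMEWORK.values()
                  for sub in subs},
    }

notes = blank_case_study("Intelligent Tutoring Systems and Learners")
```

A writing-futures case study (say, of a cowriting agent) could then fill in each of the 17 subcategory entries in turn.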

In a similar vein, the questions included in our Writing Futures framework might be used as part of a case study approach along with an overall summary of deployment and study. We turn next to current and proposed investigations stemming from the scenarios shared in Chap. 1. During the Fall 2020 term, we used these scenarios as part of a graduate course of this same title; Appendix A includes the course syllabus.

5.3.1 Academic Realm

In Chap. 1, we shared this academic scenario:

Imagine you’re a professor in a post-COVID world that has undergone sweeping economic and social changes—changes that have profoundly affected the nature of higher education itself. Among the realities you must navigate are new technologies—course analytics, redesigned learning management systems, assistive technologies based in artificial intelligence (AI)—many of which even a few years ago were just being developed, some of which are just now being tested. As these technologies become a part of our writing futures, how might we position communication and composition for ongoing engagement with and critique of technological emergence? As scholar-instructors, how might we work to build and study student digital literacy as part of our teaching in such a new and evolving world?

One model for positioning ongoing engagement with and critique of technological emergence comes from framing such work as building digital literacy. As noted in Chap. 2, Virginia Tech’s use of the Joint Information Systems Committee (JISC) Digital Capability Framework (2019a, 2019b), developed in the UK, has been particularly influential in understanding and fostering digital literacy (Ferrar, 2019). Digital literacy capabilities in this framework include ICT proficiency (functional skills); information, data and media literacies (critical use); digital creation, problem solving and innovation (creative production); digital communication, collaboration and partnership (participation); digital learning and development (development); and digital identity and well-being (self-actualizing). Frameworks of this type define and teach digital literacy based on capabilities rather than from a perspective of exploration and engagement. Over the past two years, as part of a networked learning collaborative spearheaded by the Digital Life Institute at Ontario Tech University in Canada along with research affiliates at the University of Minnesota and Texas Tech University in the US, scholar-instructors across nine institutions developed and used a set of instructional materials for student exploration and/or curation of collections at the Fabric of Digital Life (Duin & Pedersen, 2020). Duin, Pedersen, and Tham (in press) write:


People readily consume an ever growing range of emerging technologies while largely unaware of their lack of control over the impact that such networking, devices, data, and processes have on their lives. Since college-educated people are huge consumers of digital products and are expected to participate in networked learning, it is critical to foster student development of an expanded understanding of digital literacy. … At its core, this research examines the potential development of digital literacy through the act of exploring and curating collections on emerging technologies. Critical to this core is the networked learning collaborative in place to foster and support this work.

Specifically, the scholar-instructors used Fabric of Digital Life as a learning database in which digital literacy can be cultivated and exercised, asking students to engage with Fabric in one of these ways:

• Examine: Students explore the objects in a collection, examining their origins, features, and potential uses in society. Instructors may ask students to consider the rhetorical, social, or technical implications of these objects as part of the examination.
• Contribute: Students archive single objects using existing keywords and metadata on Fabric. Students learn to use media editing tools like image and video editors to create a thumbnail for the archived object.
• Curate: Students envision, create, and submit a new collection for possible publication at Fabric based on a thesis or unique point of view. Students identify and propose artifacts from within and outside of Fabric, completing a curation collection form that includes an overview/abstract of the collection. They complete a Google (or Excel) sheet for metadata planning (Fig. 5.3). Students upload the artifacts to Fabric, working with the archivist to ensure appropriate metadata, registration, and submission (Duin et al., in press).

Fig. 5.3 Screen captures of students’ archival process, from (A) initial sorting of entries and metadata to (B & C) composing a front-end “collection” view on the Fabric website (Image permission: Jason Tham)
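The metadata-planning sheet students complete before uploading can be mocked up in a few lines. This is a hypothetical sketch: the column names below are illustrative placeholders we chose for planning purposes, not Fabric’s actual metadata schema.

```python
import csv
import io

# Illustrative only: these column names are placeholder assumptions,
# not the metadata schema used by Fabric of Digital Life.
FIELDS = ["artifact_title", "creator", "year", "media_type",
          "keywords", "source_url"]

def metadata_sheet(rows):
    """Render planned artifact metadata as CSV, mirroring the shared
    spreadsheet students fill in before working with the archivist."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    for row in rows:
        writer.writerow(row)
    return buf.getvalue()

sheet = metadata_sheet([{
    "artifact_title": "Wearable scanner demo video",
    "creator": "A. Student",
    "year": "2020",
    "media_type": "Video",
    "keywords": "wearables; embodiment",
    "source_url": "https://example.com/demo",
}])
```

Keeping the plan in a plain, exportable format like CSV makes it easy to review metadata with the archivist before registration and submission.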


Key takeaways from this research include the following:

• Students across undergraduate and graduate levels benefit from examination and/or curation of collections on immersive technologies as a means to build digital literacy. Students benefit from the information architecture of Fabric: understanding metadata and accessing information, seeing the website as a collection of artifacts to be navigated, and learning how suitable items are identified, submitted, and accessed.
• Students explicitly engage prior knowledge (mental models) and metaphors in learning a new tool, thus informing our developing framework for cultivating digital literacy [and writing futures].
• Instructors appreciate ambiguity in the deployment process, noting how it fosters productivity and shapes student journeys in learning.
• Instructors are more likely to choose to have students examine, discuss, and critique collections as they address digital literacy; student curation is a larger task requiring more dedicated time.
• Biweekly meetings prove enormously effective in building networks and community in support of building digital literacy (Duin et al., in press).

For a second example of an academic deployment, we return to Chap. 3, in which we examine the impact on writing futures of algorithmic control and algorithmic culture and how our teaching, writing, cognition, and behavior are steered by academic learning management systems. An additional academic direction, therefore, is to engage as scholar-instructors with cultivating understanding of learning analytics and the algorithms behind learning management systems. Selber (2020), in his most recent research on institutional literacies, conceptualizes how colleges and universities understand and position information technologies in support of teaching and learning and how instructors and students have come to rely on these systems.
Selber proposes systematic approaches to engaging with academic IT units for the purpose of understanding and influencing future services, systems, and actions taken. As he states, “the shaping power of academic IT is hard to exaggerate. It has a significant bearing on the fundamental endeavors of students, teachers, and administrators and on the prospects for change in literacy education” (p. 1). He advocates for teachers to “involve themselves in the working lives of academic IT units … conceptualizing how academic IT works as a unit, product, service, event, or other institutional formation … [and] then [imagining] thoughtful approaches to action and problem solving” (p. 3). We encourage scholar-instructors to employ case study methodology. For example, Duin collaborated with Jason Tham (2020) to chronicle and analyze the University of Minnesota Canvas learning management system experience so that writing instructors “might become more familiar with levels of access to academic and learning analytics, more acquainted with the analytical capabilities in LMSs, and more mindful of implications of learning analytics stemming from LMS use in writing pedagogy” (p. 1). As another example, Dixon-Roman et al. (2020), drawing on new materialist and Black feminist thought, considered “how learning analytics platforms for writing are animated by and through entanglements of algorithmic reasoning, state standards and assessments, embodied literacy practices, and sociopolitical relations … [arguing] that through these processes, the algorithms function as racializing assemblages … [and concluding] by suggesting pathways toward alternative futures that reconfigure the sociopolitical relations the platform inherits” (p. 236). Of note here is their case study of the use of a “supervised machine learning algorithm that provides automated, formative feedback on student writing” (p. 240). Their case study approach is “a speculative inquiry that is much more culturally embedded” to get at “sociopolitical nuances” inherent throughout the algorithms and the power of learning analytics platforms to form and shape our educational practices.

Prompt

The articulation of a writing futures framework should include problem formation and literature in support of the direction for investigation. For example, if the focus is on cowriting content with AI, determine how to study the deployment. You might ask a set of writers (undergraduate students, graduate students, faculty, working professionals) to experiment with a cowriting agent such as Scholarcy or SummarizeBot, documenting and analyzing results, and continuing to refine your writing futures framework. Or, you might consider particular processes underlying development and deployment of specific platforms.

5.3.2 Industry Realm

In Chap. 1, we also shared this industry scenario:

Imagine you’re a professional and technical communicator (PTC) working remotely amid constant changes that have profoundly affected the nature of your local and global work. Collaboration is driven by priorities based on increased reliance on AI articulation of what is most critical for revenue streams. User-experience study increasingly relies on machine learning to test hypotheses and assumptions and understand more about users. Curating evidence and substantiating marketing claims involves constant scraping of data sets to become literate and determine strategic business direction. As practitioners, how do we deploy collaborative, algorithmic, and autonomous technologies to build social, literacy, and civic engagement that meets strategic business needs?

A strategic business need across multiple industries is improving user experience, and a critical question throughout many fields is how best to align algorithmic development with this work as well as how to write and work with autonomous usability agents. A great deal of research has been and is currently underway surrounding automated detection of usability issues through use of algorithmic systems. While a foundational approach to usability includes the use of think-aloud protocols, given their time and expense, Saadawi et al. (2005) developed an agent-based method for funneling all user interface work to a centralized database where “an algorithm reproduces the criteria used for manual coding of usability problems” (p. 654), therefore detecting user problems by way of algorithmic analysis. In work to address time issues surrounding analysis of data in usability study of eye movements, Holland et al. (2012) developed and tested algorithmic detection of excessive visual search, and found that such automated classification reduced time while maintaining accuracy. And more recently, Taj et al. (2019) deployed machine learning methods for computing user product usability evaluations, again with the goal of increased efficiency and effectiveness.

We include Powers and Cardello’s (2018) overview of machine learning methods used to address specific user-experience problems in Fig. 5.4. Note the specific machine learning algorithms listed in the far-right column. These algorithms share the following traits to derive value in terms of understanding user experience: each can be “successfully used to answer questions about users”; each “produces human-understandable output”; and each is “appropriate for large data sets” (n.p.). On the basis of this article by Powers and Cardello, we provide a summary of each algorithm in Table 5.4.

According to any number of UX specialists, the process of creating a better user experience comes down to critical AI algorithms. Haughey (2019) writes that “AI and UX share the same end goal: both are designed to interpret human behavior and anticipate what someone will do next” (n.p.). His discussion focuses on improving user experience through use of emotion AI, complex data analysis, personalized advertising, chatbots, and automation. While his claim that “AI creates deeper human connections with people” may be contested, this work again alerts us to the need for transparency surrounding these systems, and to the fact that the PTC community is on the front line, tasked with recognizing, reporting, and/or ameliorating bias in teams with AI developers.

Fig. 5.4 Powers and Cardello (2018) overview of machine learning methods (Redistribution allowed with attribution)

5.3 Academic, Industry, and Civic Investigations


Table 5.4 Summary of six algorithms as discussed by Powers and Cardello (2018)

This Algorithm           | Also known as…                                 | Provides insight on…
Clustering               | Segmentation analysis                          | Personas, roles & goals, affinitization
Association rules        | Market basket analysis, shopping cart analysis | Affinitization
Process mining           | Workflow analysis, journey mapping             | Log data showing a sequence of events users performed, workflow analysis
Regression               | Logical reasoning, root cause analysis         | What factors are influencing users and how much influence each factor holds
Decision trees           | Logical reasoning                              | Useful as a next step after regression; these provide greater understanding of turning points for users
Dimensionality reduction | Logical reasoning                              | Useful where there are too many variables for people to understand; once variables are reduced, one can use a clustering analysis
Data preparation         | Data wrangling, data cleaning, data prep       | While not an algorithm per se, a great many self-service tools exist for preparing data
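To make the first row of Table 5.4 concrete: a clustering algorithm groups users with similar behavior into candidate personas. The sketch below is a minimal, from-scratch k-means over hypothetical usage data; the feature names and numbers are invented for illustration, and practitioners would typically reach for a library implementation (e.g., scikit-learn's KMeans with k-means++ initialization) rather than this simplified version.

```python
def kmeans(points, k, iterations=20):
    """Minimal k-means for persona segmentation.

    points: list of equal-length feature tuples, e.g.
            (sessions per week, pages viewed per session).
    Returns a cluster index (0..k-1) for each point.
    """
    # Spread initial centroids across the data (deterministic here for
    # reproducibility; production k-means uses randomized k-means++).
    centroids = [points[i * len(points) // k] for i in range(k)]
    assignments = [0] * len(points)
    for _ in range(iterations):
        # Assignment step: each user joins the nearest centroid.
        for i, p in enumerate(points):
            assignments[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for i, p in enumerate(points) if assignments[i] == c]
            if members:
                centroids[c] = tuple(sum(dim) / len(members)
                                     for dim in zip(*members))
    return assignments

# Hypothetical usage data: (sessions per week, pages per session).
users = [(2, 3), (3, 4), (2, 5), (20, 30), (22, 28), (19, 33)]
print(kmeans(users, k=2))  # → [0, 0, 0, 1, 1, 1]
```

Here two rough personas emerge (casual and heavy users); in practice the resulting clusters would still require human interpretation before being treated as personas.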

Using the Writing Futures framework alongside investigation of the myriad usability testing tools for "optimizing" user experience (see for example Bigby, 2019), one could work to do the following:

• Determine the literacies needed to enable constructive, collaborative data generation and analysis;
• Contextualize the use of these algorithmic tools throughout the usability process and how one's practices will change through and with such use;
• Identify the affordances of each tool in terms of its ethical, personal, and professional deployment; and
• Design for values to help recognize, ameliorate, and address civic challenges.

Prompt
The goal here is to develop a framework that one might deploy and investigate. For example, if your focus is on usability, you might experiment with "free trial" tools such as the following:

• Qualaroo prompts site visitors to answer targeted questions or surveys in real time;
• Crazy Egg provides a heatmap to show where each visitor clicks on a site and how far each scrolls;
• Usabilla collects real-time feedback from users;
• UsabilityHub's Five Second Tests takes a snapshot of a site visitor's first impression;
• Loop11 works in over 40 languages, recording video and audio of users; and
• Morae also records and remotely observes user interactions and analyzes the results.

5.3.3 Civic Realm

We also shared this civic scenario in Chap. 1:

Imagine you're a civic leader responding to urgent community needs. Traditionally, you have brought together resources and services, providing these in close contact environments. Given a pandemic, you're faced with challenges that demand collaborative, constructive social action through and with nonhuman agents. Large service components along with writing and communication with constituents must be remote. How might AI and machine learning tools support you? How might robots and digital-assistant platforms assist with services and communication?

In Chap. 4 we wrote about the rise of virtual assistants. Deployments of virtual (digital, intelligent) assistants now abound in the civic realm. Borfitz (2019) notes that "digital assistants have become a major trend in government at every level and across geographies" and that, according to Juniper Research reports, "by 2023, one-quarter of the populace will be using digital voice assistants daily" (n.p.). Among these current uses: in the UK, the National Health Service digital assistant helps residents determine if a medical condition requires an emergency room visit; in India, a chatbot provides conversational access to public service information; and numerous global cities provide chatbots that suggest places to visit. Readers here no doubt have used any number of these for banking, travel, IT support, and training purposes. Designed to create a "frictionless experience," for example between doctors and patients, virtual assistants will be increasingly deployed in teaching, business, and civic realms. Important to such design is viewing work with a virtual assistant through a dialogic collaborative lens, imagining the interplay between human and machine. Again, we stress the importance of considering the literacies associated with understanding, designing, and deploying virtual assistants, along with a focus on the ethic of transparency and the ethic of critical data awareness.

In the above scenario, given a pandemic underway, the civic leader finds herself using desktop video conferencing (Zoom, Microsoft Teams, etc.), perhaps recording and transcribing meetings for later review, reuse, or sharing with those absent. To help citizens stay abreast of rapid developments, AI language tools are increasingly available; OpenAI's models, for example, currently serve "millions of production requests each day" (OpenAI, 2020). Chapter 4 discusses Finland's endeavor to educate a percentage of its citizens on AI technologies and their implications: a national program to promote AI literacy.
Dubow’s (2017) work on civic engagement and citizen-powered democracy includes

5.3 Academic, Industry, and Civic Investigations

131

strategic issues emerging from participant consultations. These include the importance of “ensuring transparency and trust in democratic processes; improving the information environment; and building well-networked, empowered communities” (p. 15). In the above scenario, this would mean including feedback loops to mobilize greater citizen engagement and to create confidence that citizen input is being used. Dubow shares the following example, e-Estonia: We heard how digital technology has been utilised to support stronger government services and democratic processes in Estonia. e-Estonia represents the digital transformation of the Estonian state—in terms of the digitalisation both of public services and democratic processes (Estonia is the only country to allow i-voting in national elections). We learned how, through the creation of a comprehensive network of integrated electronic systems, eEstonia aims to effect a shift away from a model whereby citizens have to actively engage with the state on a periodic basis (for example, as and when they need to request or submit documents through government services), towards a model in which the state provides an invisible infrastructure that citizens interact with as part of their daily activities. The Estonian case study suggests that building a digitally enabled democracy has less to do with technology than it has with the following:

• Political will and vision: The Estonian government took steps towards the digitalisation of state services in the 2000s, when the available technology was not very advanced. • Public trust: Public trust in the system (which, it was recognised, may be easier to achieve in Estonia because of the relatively small size of Estonian society) was highlighted as critical to any potential replication or scaling efforts elsewhere. For example, in the case of the UK, it was suggested that public distrust of identification cards would have to be overcome. It was furthermore suggested that trust in digital government systems can be supported through the use of Blockchain technologies (which create permanent and secure records that cannot be tampered with) and through ensuring interoperability between services, which helps to make government systems and operations more transparent. • Strong engagement: Citizens need to be persuaded that the digitalisation of public services and democratic processes will, above all, make their own lives easier. Communicating the benefits of digital technologies to older people, in particular, is especially important, given that they may stand to gain the most from digitalisation, but may be less familiar with, or skilled in using, digital tools. (p. 16) Further consideration of digital technologies in support of this civic scenario includes dashboards and data visualizations of current needs and deployments; these could allow for experimentation with different choices, resulting in increased literacy and understanding of the impact of various policy directions. AI could be leveraged in understanding public needs and in interpreting citizens’ online interactions and behaviors. 
Dubow suggests that a "'safeguarding platform' or 'nagging doubt platform' would be a digital registry that allows local authority representatives to log information that has caused them some degree of concern … the information would be aggregated … [to] ensure informed, timely responses to these concerns" (p. 18). Algorithmic bias, no doubt, might well infiltrate such platforms; hence, our earlier point regarding the need to articulate and use ethical principles to guide investigation and planning efforts by focusing on an ethic of transparency and the value of critical data awareness, and by mapping ethical and human rights-based approaches throughout this work.

Prompt
Again, the goal is to develop a framework that one might deploy and study. For example, if one is interested in the future of civic engagement and "data smart" solutions, consider testing how natural language processing can assist in using citizen input in decision-making. We also encourage readers to view the many AI-related initiatives within the United Nations for accelerating progress in governance toward the global Sustainable Development Goals.

Critical to each of the above investigations and initiatives is ethically aligned design. One method that we promote for aligning one's writing with ethics is to join a multistakeholder working group in order to participate directly in addressing important civic issues and perhaps even influence policy-making. Chapter 3 references both Gallagher (2017) and Dignum (2019) as proposing a Design for Values approach to writing in algorithmic culture. Chapter 4 argues that future deployments of AI, and writing for and with AI agents, will require a better approach to Fairness and Nondiscrimination. MD4SG (Mechanism Design for Social Good) is an international organization "bringing together researchers and practitioners from over 100 institutions in 20 countries." Its multidisciplinary mandate, relevant working groups, and active leadership provide appropriate grounds for professional engagement.
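The prompt above suggests testing how natural language processing can assist in using citizen input. A minimal sketch of that idea is to tally recurring content words across submissions to surface themes for human review; the comments, stopword list, and parameter choices below are all invented for illustration, and a real deployment would use topic modeling or embedding-based clustering, again with human review of every grouping.

```python
from collections import Counter, defaultdict

# Hypothetical citizen comments submitted through a civic feedback portal.
comments = [
    "the bus service downtown is too infrequent on weekends",
    "please add more bus routes near the new housing development",
    "park maintenance has declined and playgrounds need repair",
    "our neighborhood park needs better lighting at night",
    "bus stops lack shelters for winter weather",
]

# A toy stopword list; real systems use curated lists or statistical weighting.
STOPWORDS = {"the", "is", "on", "at", "for", "and", "has", "our", "too",
             "more", "near", "new", "please", "add"}

def surface_themes(texts, top_n=2):
    """Tally content words across comments to surface recurring themes.

    Returns (word, [comment indices]) pairs for the most frequent words,
    so a reviewer can jump from a theme back to the comments raising it.
    """
    counts = Counter()
    mentions = defaultdict(list)
    for i, text in enumerate(texts):
        for word in set(text.lower().split()):
            if word not in STOPWORDS and len(word) >= 3:
                counts[word] += 1
                mentions[word].append(i)
    return [(word, mentions[word]) for word, _ in counts.most_common(top_n)]

for theme, ids in surface_themes(comments):
    print(theme, ids)  # "bus" and "park" emerge as recurring themes
```

Even this crude pass illustrates the feedback loop Dubow describes: themes are aggregated algorithmically, but the response to them remains a human, civic decision.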
Another strong example is the Frontier Technology Initiative (2020), which promotes "world-class academic research and policy, as well as supporting communities, to build a safe tech ecosystem for the benefit of the general public." It funds projects with the following aims:

• increase education, research and training to better understand the power dynamics and processes active in the industry;
• stimulate public debate about improving processes in the tech industry;
• facilitate innovation and intervention to safeguard the interests of the public; and
• initiate collective action to bring about improvements in law and policy. (Frontier Technology, 2020)

Similarly, Future Says is "a new global initiative, committed to accountability in the tech ecosystem, to rebalancing power, and to reimagining technology in a pro-public way—built and designed by people for people," with the intent to foster a new collective agency. Members Safiya Umoja Noble and Sarah T. Roberts (Future Says, 2020) discuss their collaborative research process to essentially Build the New Counterculture, a community to deal with algorithmic control and eliminate racial bias in productive ways.

While professional and technical communication students and practitioners might not yet have sufficient representation in advocacy organizations surrounding AI ethics, data ethics, or algorithmic control, the time is now to increase engagement with these forums. There is a history of rigorous social justice work in professional communication through Participatory Action Research (PAR), a longstanding method to advance human rights (Phelps, 2020). The key is to participate.

Scholars Ranade and Swarts (2019) emphasize that "what ultimately makes technical writing humanistic, however, is the recognition that all communication is situated in communities" (p. 3). Swarts (2020), writing about communities of practice, states that "technical communicators will also need to be aware of how to recognize and participate in the 'brokering' of connections between practices and objects" (p. 438). He promotes "manag[ing] the boundary between the community and the formal organization" (p. 438) to participate directly in changing technical development for the better. Along these lines on the development side, Michael (2020) advocates for broader, beneficial outcomes when communities are involved in development, stating that "public interest technologies are equally relevant to private corporations who seek to embed the goals of human rights, social justice, sustainability and environmental justice in their workforce, going beyond corporate social responsibility and compliance" (n.p.). We apply this model to engaging with advocacy communities for civic engagement, with a goal of writing in professional spheres for collective action.

Professional and technical communicators have a long history of drawing on models from multiple disciplines. One such model is recent work on AI and legal disruption. As a means to understand future emerging technologies and the regulatory responses to their challenges, legal scholars Liu et al.
(2020) posit a model for understanding "the systemic legal disruption caused by new technologies such as AI," examining "pathways stemming from new affordances that can give rise to a regulatory 'disruptive moment', as well as the Legal Development, Displacement or Destruction that can ensue" (p. 1). Applying this model in professional and technical communication, stage one involves detailing the "disruptive moment" when communicators "perceive the injection of AI systems" into the profession as creating "relevant problems." Implicit here "is the idea of a relatively sharp and identifiable departure from the ordinary … processes currently at play" (p. 17). Locating this disruptive moment involves identifying "new affordances, i.e. new possibilities for behaviour or action in our personal and social environments. Disruptive moments can come about when new technologies generate or otherwise reveal or unlock new affordances; when such affordances are actualized …; and when the resulting behaviour is deemed a problem or a hazard" (p. 17) by the professional and technical communication field.

Stage two of the Liu et al. model focuses on responses or reactions to the disruptive moment. In the legal field, responses include (1) development of new policy or legislative frameworks and reinterpretation or clarification of laws; (2) displacement, or achieving regulatory results through use of new tools or technological management; and/or (3) destruction, through either no development action or the "abandonment or withdrawal of elements of the legal order" (p. 25). Applying this model in professional and technical communication, stage two involves development of new frameworks and reinterpretation of core work on content design, project management, and usability; integrated use of new tools; and/or abandonment of existing ways of working. Liu et al. stress the importance of seeing this process as one holding promise in terms of being able "to refine and reinvigorate the law, rather than just the scramble to fill in the lacunae [gaps] that are wrenched open or revealed by the prospect of artificial intelligence" (p. 45). Key to our collective investigation and planning for writing futures is a willingness to refine and reinvigorate our fields, professions, and endeavors.

5.4 Imagining Writing Futures

Critical to planning and preparing for writing futures is for each of us to document and imagine these ourselves. We should not subsist on critique alone; each of us must investigate, imagine, and determine the action that we then take. We must engage. Take for example John Havens's collection of pieces at mashable.com, dating from 2011, which document his work to investigate, imagine, and act. Founder of the H(app)athon Project, Havens delves into social responsibility and the consequences of emerging technologies, the value of happiness, and how big data's value lies in our ability to "take control and harness its potential for the greater good." He challenges our resolve to control AI before "it achieves human-level intelligence" while also encouraging us to come to terms with "humanity's inevitable union with machines." He poses once unthinkable questions such as "Should we let robots kill on their own?" And he stresses the need "to empower individuals to express themselves with clarity in the Internet of Emotions."

Amid this volatile evolution of emerging technologies, we emphasize that we have moved from a focus on the writing process to a continued emphasis on collaboration and connectivism. Connectivism is a theoretical framework developed largely by Siemens (2005) and Downes (2011) to reconceptualize knowledge in light of new technologies and environments for learning. According to Siemens (2005), "Connectivism is driven by the understanding that decisions are based on rapidly altering foundations. New information is continually being acquired. The ability to draw distinctions between important and unimportant information is vital." Simply put, "The capacity to know is more critical than what is actually known" (Siemens, 2004).
More recently, Downes (2011) asserts that "Connectivism is the thesis that knowledge is distributed across a network of connections, and therefore that learning consists of the ability to construct and traverse those networks" (p. 9). In connectivism, the capacity to know is more critical than what is currently known. Therefore, developing the capacity to know is imperative to writing futures. We contend that developing such capacity involves addressing the social, literacy, and civic dimensions of collaborative, algorithmic, and autonomous writing futures.

Imagining writing futures is a process of knowledge activation. Elements shift within nebulous collaborative, algorithmic, and autonomous writing environments; these shifting elements are no longer within the control of the individual. There must be a level of implied trust in these elements as a means to investigate and plan for writing futures. Interconnected nodes of people, AI, and autonomous agents interact to envision, draft, and repurpose writing.


Some time ago, Cooper (1986) signaled the "growing awareness that language and texts are not simply the means by which individuals discover and communicate information, but are essentially social activities, dependent on social structures and processes not only in their interpretive but also in their constructive phases" (p. 366). She proposed "an ecological model of writing, whose fundamental tenet is that writing is an activity through which a person is continually engaged with a variety of socially constituted systems" (p. 367) in which "all the characteristics of any individual writer or piece of writing both determine and are determined by the characteristics of all the other writers and writings in the system" (p. 368).

Today's ecology of writing includes a broad system of collaborators, algorithms, and autonomous agents. To participate in the ecology of writing futures requires embracing this dialogic while attending to the algorithms and AI that augment, create, and provide navigation for the journey. It is no longer possible to be "alone" on this journey. Virtual assistants mediate the journey; we must build ethically aligned, collective intelligence with them.

Hayles (2017) extends this ecology much further, formulating the idea of a "planetary cognitive ecology that includes both human and technical actors … complex human-technical assemblages in which cognition and decision-making powers are distributed throughout the system" (pp. 3–4). Similarly, Danaher (2018) posits a framework for ethical digital assistants, explaining: "Our bodies and environments shape the way we perceive and process cognitive tasks. Cognition is a distributed phenomenon not a localised one; i.e. the performance of a cognitive task is something that gets distributed across brains, bodies and environments… . I appeal to it here because I think it provides a useful model for understanding both the phenomenon of AI assistance and its discontents" (p. 632).
While some may be reticent to prepare for writing futures in advance of major technological transformations, our goal throughout this book is to provide a future-driven framework for investigating and planning for the social, digital literacy, and civic implications of collaborative, algorithmic, and autonomous writing futures. We trust that the themes and guiding questions in our Writing Futures framework will serve as a catalyst for scholar-instructor development of frameworks interwoven with artifacts and themes embedded throughout a myriad of research directions. Our goal is that broad use of this book by scholars and practitioners across professional and technical communication, engineering, human–computer interaction, computer science, artificial intelligence and automation studies, and organizational communication will provide scholar-instructors with the ability to write alongside nonhuman actors, understand the impact of algorithms and AI on writing, accommodate the unique relationships with autonomous agents, and investigate and plan for their writing futures.

The use of nonhuman agents and AI continues to grow exponentially, permeating all fields and professions. The World Economic Forum (Cann, 2018) predicts that while machines will do more tasks than humans by 2025, a "robot revolution" also will create 58 million "net new jobs" by 2023. Most recently, Dhar and Firth-Butterfield (2020), on behalf of the World Economic Forum's Sustainable Development Impact Summit, urge governments and all constituencies to keep key factors in mind. We see these factors as crucial to collaborative, algorithmic, and autonomous writing futures: to prioritize research; to build AI and data literacy; to open up data sets; to support AI application across a broad landscape of disciplines; to stimulate and reward collaboration; and to keep technology impact at the forefront of decisions (n.p.).

However, above all, the writing futures that we envision, investigate, and plan for must be socially just and value-driven. McIlwain (2020), in his chronicle Black Software: The Internet and Racial Justice, asks, "Will our current or future technological tools ever enable us to outrun white supremacy?" (p. 8). And Monahan (2018), in his editorial in Surveillance & Society, states, "While it is critical to open up the black box of algorithmic production, that will always be an insufficient response to the forms of discrimination engendered by them because algorithms cannot be separated from the context of their production and use" (p. 2).

Our fields of communication studies and professional and technical communication have always been dedicated to designing better, more human-centric communication technologies. This means trying to forecast the future (and witnessing others forecasting it) to strategize better human–computer interactions. As a value system, "human-centric" also means resisting dehumanization at the strategic stages of design, and recognizing powerful actors, privilege, politics, discrimination, and oppression.

This book invited a shift in our thinking and theorizing about writers. Collaborating with nonhuman writing agents, whether they are social robots, assistants on our smartwatches, or virtual agents embodied in ambient spaces, has changed our viewpoint on human-centricity and creativity in general. We call on our readers to revitalize writing practices through new forms of collective intelligence amid a changed and changing world. Pedersen and Iliadis (2020) query, "Is it no longer sufficient to think of the body as a discrete, singular entity with distinct boundaries?
Rather, should bodies be thought of in terms of levels, both corporeal and abstract, as data-blended entities that are susceptible to outside manipulation, surveillance, and control?” (p. xxii). Throughout this book, we have responded: Yes. The “firm line” has disappeared. Let us frame the future through imperative social, literacy, and civic questions of exigency, from which we take action as a networked ecology working autonomously through and with socially just data assemblages.

References Bellon, R. (2020, August 28). ‘Three little pigs’: Musk’s Neuralink puts computer chips in animal brains. Reuters. https://www.reuters.com/article/us-tech-neuralink-musk-idUSKBN25O2EG. Bigby, G. (2019, February 12). 16 usability testing tools for optimizing user experience. DYNO Mapper. https://dynomapper.com/blog/19-ux/271-usability-testing-tools. Borfitz, D. (2019, July 15). Digital assistants transforming public service. AI Trends. https://www. aitrends.com/ai-world-government/digital-assistants-transforming-public-service/. Cann, O. (2018, September 17). Machines will do more tasks than humans by 2025 but robot revolution will still create 58 million net new jobs in next five years. World Economic Forum.

References

137

https://www.weforum.org/press/2018/09/machines-will-do-more-tasks-than-humans-by-2025but-robot-revolution-will-still-create-58-million-net-new-jobs-in-next-five-years/. Cooper, M. (1986). The ecology of writing. College English, 48(4), 364–375. Danaher, J. (2018). Toward an ethics of AI assistants: An initial framework. Philosophy and Technology, 31, 629–653. https://doi.org/10.1007/s13347-018-0317-3. de Visser, E. J., Krueger, F., McKnight, P., Scheid, S., Smith, M., Chalk, S., et al. (2012). The world is not enough: Trust in cognitive agents. In Proceedings of the annual meeting of the human factors and ergonomics society (pp. 263–267). HFES. de Visser, E. J., Pak, R., & Shaw, T. H. (2018). From ‘automation’ to ‘autonomy’: the importance of trust repair in human-machine interaction. Ergonomics, 61(10), 1409–1427. Dhar, V., & Firth-Butterfield, K. (2020, September 21). To harness the AI age, governments must keep these 7 factors in mind. World Economic Forum. https://www.weforum.org/agenda/2020/ 09/to-maximize-the-ai-age-governments-must-keep-these-7-factors-in-mind/. Dignum, V. (2019). Responsible artificial intelligence: How to develop and use AI in a responsible way. Springer. Dixon-Roman, E., Nichols, T. P., & Nyame-Mensah, A. (2020). The racializing forces of/in AI educational technologies. Learning, Media and Technology, 45(3), 236–250. Downes, S. (2011). Week 1: What is connectivism? Connectivism and Connective Knowledge 2011. http://cck11.mooc.ca/week1.htm. Dubow, T. (2017). Civic engagement: How can digital technologies underpin citizen-powered democracy? Corsham Institute 2017 Thought Leadership programme. https://www.rand.org/con tent/dam/rand/pubs/conf_proceedings/CF300/CF373/RAND_CF373.pdf. Duin, A. H., & Pedersen, I. (2020, May 18–20). Building digital literacy through exploration and curation of emerging technologies: A networked learning collaborative. Publication for the International Networked Learning Conference, Kolding, Denmark. 
http://www.networkedlea rning.aau.dk/digitalAssets/825/825684_08.-duin—pedersen—building-digital-literacy-throughexploration-and-curation-of-emerging-technologies-a-networked-learning-collaborative.pdf. Duin, A. H., Pedersen, I., & Tham, J. (In press). Building digital literacy through exploration and curation of emerging technologies: A networked learning collaborative. In N. B. Dohn, S. B. Hansen, J. J. Hansen, M. deLaat, & T. Ryberg (Eds.), Conceptualizing and innovating education and work with networked learning. Springer. Duin, A. H., & Tham, J. (2020). The current state of analytics: Implications for learning management system (LMS) use in writing pedagogy. Computers and Composition, 55. https://www.sciencedi rect.com/science/article/pii/S8755461520300050. Elon Musk’s vision of the future takes another step forward. (2020, September 2). Economist. https://www.economist.com/science-and-technology/2020/09/02/elon-musks-vision-of-the-fut ure-takes-another-step-forward. Ferrar, J. (2019). Development of a framework for digital literacy. Reference Services Review, 47(2), 91–105. Flusser, V. (2011). Does writing have a future? University of Minnesota Press. Frontier Technology. (2020). Building a digital ecology that empowers people. Minderoo Foundation. https://www.minderoo.org/frontier-technology/. Future Says. (2020, August 10). Build the new counterculture (featuring Safiya Umoja Noble and Sarah T. Roberts) [Video]. Vimeo. https://vimeo.com/446363344. Gallagher, J. (2017). Writing for algorithmic audiences. Computers and Composition, 45, 25–35. GPT-3. (2020, September 8). A robot wrote this entire article. Are you scared yet, human? Guardian. https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3. Gurak, L. (2018). Ethos, trust, and the rhetoric of digital writing in scientific and technical discourse. In J. Alexander & J. Rhodes (Eds.), The Routledge handbook of digital writing and rhetoric (pp. 124–131). Taylor and Francis. 
Harvard Law School. (2019, February 20). Zittrain and Zuckerberg discuss encryption, ‘information fiduciaries’ and targeted advertisements [Video]. YouTube. https://www.youtube.com/watch?v= WGchhsKhG-A.

138

5 Writing Futures: Investigations

Haughey, C. J. (2019, July 30). How to improve UX with AI and machine learning. https://www.spr ingboard.com/blog/improve-ux-with-ai-machine-learning/. Havens, J. C. (2011–2016). [Collection of postings.] https://mashable.com/author/john-havens/. Havens, J.C. (2015, October 3). The importance of human innovation in A.I. ethics. Mashable. https://mashable.com/2015/10/03/ethics-artificial-intelligence/. Havens, J. C. (2016). Heartificial intelligence: Embracing our humanity to maximize machines. Penguin Random House. Havens, J. C. (2018). While we remain. Wilson Quarterly, 42(2). Hayles, N. K. (2017). Unthought: The power of the cognitive nonconscious. University of Chicago Press. Holland, C. D., Komogortsev, O., & Tamir, D. (2012, May). Identifying usability issues via algorithmic detection of excessive visual search [Conference session]. ACM Conference on Human Factors in Computing Systems, Austin, TX, USA. JISC. (2019a). Jisc digital capabilities framework: The six elements defined. http://repository.jisc. ac.uk/7278/1/BDCP-DC-Framework-Individual-6E-110319.pdf. JISC. (2019b). What is digital capability? https://digitalcapability.jisc.ac.uk/what-is-digital-capabi lity/. Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62, 15–25. Liu, H.-Y., Maas, M., Danaher, J., Scarcella, L., Lexer, M., & Van Rompaey, L. (2020). Artificial intelligence and legal disruption: A new model for analysis. Law, Innovation and Technology. https://doi.org/10.1080/17579961.2020.1815402. Mcilwain, C. D. (2020). Black software: The internet and racial justice, from the Afronet to Black lives matter. Oxford University Press. Merchant, B. (2018, October 1). When an AI Goes Full Jack Kerouac. Atlantic. https://www.theatl antic.com/technology/archive/2018/10/automated-on-the-road/571345/. Michael, K. (2020, January 31). 
What is public interest technology (PIT)? IEEE ISTAS20. https:// attend.ieee.org/istas-2020/2020/01/31/what-is-public-interest-technology-pit/. Mollering, G. (2001). The nature of trust: From Georg Simmel to a theory of expectation, interpretation and suspension. Sociology, 35(2), 403–420. Monahan, T. (2018). Algorithmic fetishism. Surveillance & Society, 16(1), 1–5. Musk, E., Swisher, K., & Mossberg, W. (2016, June 2). We are already cyborgs | Elon Musk | Code Conference 2016 [Video]. YouTube. https://www.youtube.com/watch?v=ZrGPuUQsDjo. Neuralink. (2020, August 28). Neuralink progress update, summer 2020 [Video]. YouTube. https:// youtu.be/DVvmgjBL74w. OpenAI. (2020). Discovering and enacting the path to safe artificial general intelligence. https:// openai.com. Orth, M. (2020). TechnoSupremacy and the final frontier: Other minds. In I. Pedersen & A. Iliadis (Eds.), Embodied computing: Wearables, implantables, embeddables, ingestibles (pp. 211–235). MIT Press. Palmer, A. (2018, January 6). Thought experiments. Economist. https://www.economist.com/tec hnology-quarterly/2018-01-06/thought-experiments. Partnership on AI. (2019). Human–AI collaboration framework and case studies. Collaborations Between People and AI Systems (CPAIS). https://www.partnershiponai.org/wp-content/uploads/ 2019/09/CPAIS-Framework-and-Case-Studies-9-23.pdf. Pedersen, I., & Iliadis, A. (Eds.). (2020). Embodied computing: Wearables, implantables, embeddables, ingestibles. MIT Press. Phelps, J. L. (2020). The transformative paradigm: Equipping technical communication researchers for socially just work. Technical Communication Quarterly. https://doi.org/10.1080/10572252. 2020.1803412. PowerfulJRE. (2018, September 6). Joe Rogan Experience #1169—Elon Musk [Video]. YouTube. https://www.youtube.com/watch?v=ycPr5-27vSI.


Powers, A., & Cardello, J. (2018, March 6). Applying machine learning to user research: 6 machine learning methods to yield user experience insights. https://medium.com/athenahealth-design/machine-learning-for-user-experience-research-347e4855d2a8.
Przegalinska, A., Ciechanowski, L., Stroz, A., Gloor, P., & Mazurek, G. (2019). In bot we trust: A new methodology of chatbot performance measures. Business Horizons, 62, 785–797. https://www.sciencedirect.com/science/article/pii/S000768131930117X.
Ranade, N., & Swarts, J. (2019). Humanistic communication in information centric workplaces. Communication Design Quarterly, 7(4), 17–31.
Rapkin, L. (Director). (2018). Automatic on the road [Film]. Oscillator Media.
Recode. (2016, June 2). We are already cyborgs | Elon Musk | Code Conference 2016 [Video]. YouTube. https://www.youtube.com/watch?v=ZrGPuUQsDjo.
Saadawi, G. M., Legowski, E., Medvedeva, O., Chavan, G., & Crowley, R. S. (2005). A method for automated detection of usability problems from client user interface events. AMIA Annual Symposium Proceedings, 654–658.
Selber, S. (2020). Institutional literacies: Engaging academic IT contexts for writing and communication. University of Chicago Press.
Siemens, G. (2004). Connectivism: A learning theory for the digital age. http://www.elearnspace.org/Articles/connectivism.htm.
Siemens, G. (2005). Connectivism: A learning theory for the digital age. International Journal of Instructional Technology and Distance Learning. http://itdl.org/Journal/Jan_05/article01.htm.
Swarts, J. (2020). Technical communication is a social medium. Technical Communication Quarterly, 29(4), 427–439. https://doi.org/10.1080/10572252.2020.1774659.
Taj, S., Mahoto, N. A., & Farrah, I. (2019, April 10–12). Product usability evaluation: A machine learning approach [Conference presentation]. 1st International Conference on Computational Sciences and Technologies, Jamshoro and Hyderabad, Pakistan.
The Walrus. (2018, October 5). Isabel Pedersen at The Walrus Talks Humanity and Technology [Video]. YouTube. https://www.youtube.com/watch?v=7yW4Lnxd6uc.
Velasquez-Manoff, M. (2020, August 28). The brain implants that could change humanity. The New York Times. https://nyti.ms/3bkAe41.
Walsh, M. (2019). The algorithmic leader: How to be smart when machines are smarter than you. Publishers Group West.
Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113–117.

Appendix A: Course Syllabus for a Graduate-Level Course, Writing Futures—Collaborative, Algorithmic, Autonomous

Writing Futures: Collaborative, Algorithmic, Autonomous

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. A. H. Duin and I. Pedersen, Writing Futures: Collaborative, Algorithmic, Autonomous, Studies in Computational Intelligence 969, https://doi.org/10.1007/978-3-030-70928-0

Course Description, Outcomes, Framework

One of the greatest challenges facing rhetoric and scientific and technical communication is a reticence to prepare for writing futures in advance of major technological transformations. While the world is clamoring to identify the agent or go-between for difficult explanations of speculative technologies, our field often chooses to wait until after these technologies have been deployed. The ethical dilemma is ignorance at the point of emergence: our students (and the public in general) cannot properly assess technologies of great impact at the appropriate time. Consider these scenarios:

Imagine you're a professor in a post-COVID world that has undergone sweeping economic and social changes—changes that have profoundly affected the nature of higher education itself. Among the realities you must navigate are new technologies—course analytics, redesigned learning management systems, assistive technologies based in artificial intelligence (AI)—many of which even a few years ago were just being developed, some of which are just now being tested. As these technologies become a part of our writing futures, how might we position communication and composition for ongoing engagement with and critique of technological emergence? As scholar-instructors, how might we work to build and study student digital literacy as part of our teaching in such a new and evolving world?

Imagine you're a professional communicator working remotely amid constant changes that have profoundly affected the nature of your local and global work. Collaboration is driven by priorities based on increased reliance on AI articulation of what is most critical for revenue streams. Curating evidence and substantiating marketing claims involves constant scraping of data sets to become literate and determine strategic business direction. How do we best position our writing and collaboration with nonhuman devices? As practitioners, how do we deploy emerging technologies to build social, literacy, and civic engagement that meets strategic business needs?

Imagine you're a civic leader responding to the urgent needs of youth experiencing homelessness. You have brought together resources and services, providing these in close contact environments. Given post-pandemic changes, you're faced with challenges that demand collaborative, constructive social action through and with nonhuman devices. Large service components along with writing and communication with constituents must be remote. How might AI and machine learning tools support you? How might autonomous agents and digital-assistant platforms assist with services and communication?

Designed for future outreach-oriented scholars/professors, this seminar provides a framework to prepare for the social, literacy, and civic implications of collaborative, algorithmic, and autonomous writing futures. Themes and associated course modules include
• Collaboration, including examination of human–human and human–device work critical to writing futures;
• Algorithms/Analytics/AI, including exploration of learning management systems and artificial intelligence; and
• Autonomous agents, including understanding and deployment of pedagogical assistants.

Local, national, and international experts will join us to provide guidance on building and applying critical and ethical expertise when designing and encountering future writing landscapes. Students will be invited to join in collaborative study and critique of technological emergence.

Unique to this course is integration with Fabric of Digital Life (https://fabricofdigitallife.com/), a database and structured content repository for conducting social and cultural analysis about emerging technologies and the social practices that surround them.
Growing in content since 2013, Fabric of Digital Life provides a public, collaborative site for analyzing overwhelming technological change and the social implications that arise. Using a human-centric lens, it follows modes of technology invention over time through its corpus of videos, texts, and images. Throughout each module in this course, students can access more detail about each technology discussed by examining an associated thematic collection—the Writing Futures: Collaborative, Algorithmic, Autonomous collection—under development at Fabric of Digital Life. These collections assist in positioning ourselves as scholar-instructors and practitioners amid collaborative, algorithmic, and autonomous writing futures, futures that will include continued states of adaptation of our teaching, research, and practice.


Learning Outcomes—Writing Futures Framework

By the end of the course, students should understand, articulate, and deploy a framework for investigating and planning for writing futures that includes attending to the social, literacy, and civic implications of collaboration, algorithms/artificial intelligence (AI), and autonomous agents. Investigating and planning for writing futures begins with a state of mind. To use this framework, each of us, along with our associated disciplines, must do the following:

1. Abandon nostalgic notions of solo proprietary authorship. Embrace writing as dialogic, socio-technological construction of knowledge. The core guiding principle is collaboration as one works with human and nonhuman collaborators. Focus on enabling constructive, collaborative social action to foster writing futures that address grand challenges.

2. Attend to algorithms and artificial intelligence to augment, create, and navigate volumes of information. Cultivate ambient intelligence to coordinate collection of data as machine intelligence complements human agency to contribute to learning.

3. Enable and engage with autonomous agents and intelligent systems, including AI virtual assistants, deploying robots to the front lines of mediated learning, building bonds and trust with nonhuman collaborators, and learning from and with them. Evolve and regenerate writing futures through new forms of collective intelligence.

Figure A.1 provides an overview of this draft framework, and Table A.1 includes initial detail on each component.

Assignments

20% Two weekly reports and associated discussion, 10 points each (20 pts)
10% Online discussion, exercise completion, interaction with invited speakers (10 pts)
10% Collaborative analysis of a Fabric collection, focus on technological emergence (10 pts)
20% Creation and publication of a collection in support of your research (20 pts)
20% Articulation of a framework for investigating and planning for writing futures (20 pts)
20% Deployment and study of your framework (20 pts)

This course will operate as a collaborative venture, one that emphasizes active involvement for those enrolled. Upon successful completion of assigned readings, exercises, and assignments, each student will have two publications by the end of the course: a Fabric collection, and a Writing Futures framework with associated study.


Fig. A.1 Venn diagram of the Writing Futures framework

1. Weekly reports and associated discussion. Please explore the syllabus and sign up for two sets of readings to be "presented" to the class. This will be an online presentation/facilitation in which you summarize, critique, draw interesting connections to other course materials, and prompt discussion in our course forum. Classroom or workplace applications, literacy connections, political issues—anything you find interesting and important for illuminating the readings is welcome. The grade will be based on exhibiting a thorough reading of assigned materials, providing a thoughtful response to the readings, and leading active, in-depth online discussions. If you have a specific focus (collaboration, algorithms/AI, autonomous agents), I encourage you to sign up for two consecutive weeks during that module.

2. Online discussion, exercise completion, interaction with invited speakers. Each module will include weekly course forum discussion and exercises (prompts) in response to a set of readings. Four modules also include interaction with invited speakers. Keep in mind the importance of following the weekly cadence, sharing insights, and drawing implications for use and "testing" of the Writing Futures framework. Use this opportunity to broaden your professional research network.


Table A.1 Components of the Writing Futures framework

Collaborative
  Social: How will writers (students/colleagues) collaborate with nonhuman agents? Socio-technological construction of knowledge; Technological embodiment; Nonhuman collaborators; Dialogic collaboration
  Literacy: What literacies will writers need to enable constructive, collaborative work with nonhuman agents? Digital literacy capabilities
  Civic engagement: What civic challenges demand collaborative, constructive social action through and with nonhuman agents? Risks and benefits of machines as teammates; Identifying and instilling civic dimensions across work, assignments, and tools

Algorithmic
  Social: How will algorithms & AI inform writing? Ambient intelligence; Platform studies; Demographics; Algorithmic AI; Machine learning; Virtual assistants
  Literacy: What AI literacies should we cultivate for algorithmic writing futures? Academic analytics; Learning management systems; AI literacy
  Civic engagement: How might AI help to recognize, ameliorate, and address global civic challenges? Harvard's Principled Artificial Intelligence project as a heuristic; Writing for ethically aligned design

Autonomous
  Social: How will writers work with autonomous agents? Social robots; Cognitive assemblages; Digital assistant platforms; Cloud-based AI; Chatbots; Brain–computer interaction; Natural Language Generation
  Literacy: How will writers contextualize future uses of digital-assistant platforms throughout writing? How will literacy practices change with use of autonomous agents? Literacy for teaching AI assistants and learning from them
  Civic engagement: What affordances of autonomous agents lend themselves to more ethical, personal, professional, global, and pedagogical deployments? Nondiscrimination; AI transparency; Values and characteristics

3. Collaborative analysis of a collection at Fabric (focus on technological emergence). This should be a critique of technological emergence. This assignment lets you examine a current or emerging technology through the lens of the Writing Futures framework and initial readings in this course. Begin by choosing a technology(ies) from the Fabric of Digital Life archive. Once your team has selected a technology, research how the technology works and assess the social/rhetorical activities and digital literacies the technology affords. Keep in mind as well the activities that might be excluded or discouraged by the technology. In your analysis, target a balance of explaining how the technology works (30%) and assessing its affordances/constraints for writing futures (70%).

4. Creation of a Fabric collection. Here you will create a new collection for Fabric. I encourage you to see this as a publication (peer-reviewed by Professor Isabel Pedersen, Fabric curators, and members of the Digital Life Institute), as a collection of artifacts in support of your research interest(s). This assignment includes identification of artifacts, development of metadata and keywords for each artifact, completion of an abstract for the collection, and uploading of artifacts to Fabric.

5. Articulation of a framework for investigating and planning for writing futures. Note the questions in the scenarios: As instructors, how might we position communication and composition for ongoing engagement with and critique of technological emergence? As practitioners, how do we deploy emerging technologies to build social, literacy, and civic engagement that meets strategic business needs? As civic leaders, how might autonomous agents and digital-assistant platforms assist with services and communication? Here you articulate how this Writing Futures framework might be revised and expanded for investigating and planning for writing futures that you envision and that align with your current or planned research directions. For example, if your research is on building digital literacy among first-year writing students, you might revise/expand each cell to articulate a Writing Futures framework for doing so. If your work focuses on user experience and usability testing, you might revise/expand each cell to articulate a framework for deploying emerging technologies throughout such work. If your goal is to use AI to address civic challenges, you might articulate the "civic" column in depth for your planned research. Together, we will discuss forms/formats this articulation might take. The goal is to develop a framework that you then deploy and study. This should include problem formation and literature in support of your framework.

6. Deployment and investigation of your framework. While the articulation of your framework includes problem formation and literature support, this assignment includes your method and results from study of your framework. For example, if your focus is on cowriting content with AI, you might ask a set of writers (undergraduate students, graduate students, faculty, working professionals) to experiment with a cowriting agent such as https://www.scholarcy.com/ or https://www.summarizebot.com/, documenting and analyzing results. If your focus is on analyzing sites for use in industry, you might experiment with use of a tool such as https://www.scrapestorm.com/. If you are interested in the future of civic engagement and "data smart" solutions, you might test how natural language processing can assist in using citizen input in decision-making. Again, together we will determine methods for each person's study.
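As one hypothetical illustration of the "citizen input" direction a student study might take, the Python sketch below counts the most frequent substantive terms across a set of public comments, a first rough step toward surfacing themes. The sample comments, function name, and minimal stopword list are all invented for illustration; they are not part of the course materials.

```python
from collections import Counter
import re

# Invented sample of citizen comments, for illustration only.
comments = [
    "We need more affordable housing near transit.",
    "Please add bike lanes and improve transit frequency.",
    "Affordable housing should be the city's top priority.",
]

# Minimal stopword list; a real study would use a curated lexicon.
STOPWORDS = {"need", "more", "near", "please", "add", "and",
             "the", "should", "improve", "top"}

def top_terms(texts, n=3):
    """Return the n most frequent substantive terms across texts."""
    words = []
    for text in texts:
        words.extend(w for w in re.findall(r"[a-z]+", text.lower())
                     if len(w) > 2 and w not in STOPWORDS)
    return Counter(words).most_common(n)

print(top_terms(comments))  # the top three terms each appear twice
```

An actual deployment would replace the toy frequency count with an established NLP pipeline (for example, topic modeling over a full comment corpus), but even this minimal version shows how machine processing can begin to summarize citizen input for decision-makers.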

Course Schedule and Readings

Module One—Weeks 1 & 2—Writing Futures; Emerging Technologies; Rhetorical Speculations

What "writing futures" interest you most?


Consider technological emergence. Share rhetorical speculations and research directions.

Week 1 will include a 1:1 meeting to review the syllabus, address questions, and record student learning goals and objectives. We will then meet 1:1 periodically throughout the term (most likely toward the end of each module) to monitor progress and adjust goals, making sure that each course component helps the student to understand, articulate, and deploy a framework for investigating and planning for writing futures that includes attending to the social, literacy, and civic implications of collaboration, algorithms and artificial intelligence (AI), and autonomous agents.

Synchronous meeting via Zoom
Isabel Pedersen, Canada Research Chair in Digital Life, Media, and Culture, and Professor, Ontario Tech University
Scott Sundvall, Assistant Professor and Director of the Center for Writing and Communication, University of Memphis, editor, Rhetorical Speculations

Required readings:
Duin, A. H., & Pedersen, I. (2021). Writing futures framework. In Writing futures: Collaborative, algorithmic, autonomous. Springer.
Iliadis, A., & Pedersen, I. (2018). The fabric of digital life: Uncovering sociotechnical tradeoffs in embodied computing through metadata. Journal of Information, Communication and Ethics in Society, 16(3).
Kearse, S. (2020, June 15). The ghost in the machine: How new technologies reproduce racial inequality. The Nation.
McIlwain, C. (2020, July 1). Of course technology perpetuates racism. It was designed that way. MIT Technology Review.
McIlwain, C. (2020). Podcast—Black software. Data & Society. https://listen.datasociety.net/episodes/black-software.
Sundvall, S. (2021). The future of writing and rhetoric: Pitch by pitch. In A. H. Duin & I. Pedersen (Eds.), Writing futures: Collaborative, algorithmic, autonomous. Springer.
Sundvall, S. (Ed.). (2019). Rhetorical speculations: The future of rhetoric, writing, and technology. University Press of Colorado, Utah State University Press.
—Introduction by Sundvall and Weakland.
—Ch 9 by Monea, A. From Aristotle to computational topoi.
—Ch 10 by Lawrence, H. M. Beyond the graphic user interface: Speculations on the future of speech technology and the role of the technical communicator.
Rotolo, D., Hicks, D., & Martin, B. R. (2015). What is an emerging technology? Research Policy, 44, 1827–1843.


Additional readings to be drawn from the following:
Alexander, J., & Rhodes, J. (Eds.). (2018). The Routledge handbook of digital writing and rhetoric. Routledge.
—Ch 6: Yancey, K. B. "With fresh eyes": Notes toward the impact of new technologies on composing.
—Ch 7: Maps, A. C., & Hea, A. C. K. Devices and desires: A complicated narrative of mobile writing and device-driven ecologies.
—Ch 8: Takayoshi, P., & Van Ittersum, D. The material, embodied practices of composing with technologies.
—Ch 10: Faulkner, J. Making and remaking the self through digital writing.
—Ch 12: Gurak, L. Ethos, trust, and the rhetoric of digital writing in scientific and technical discourse.
—Ch 13: Losh, E. When walls can talk: Animate cities and digital rhetoric.
—Ch 23: Hart-Davidson, W. Writing with robots and other curiosities of the age of machine rhetorics.
—Ch 27: Beck, E. Implications of persuasive computer algorithms.
—Ch 41: Moulthrop, S. "Just not the future": Taking on digital writing.
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity Press.
Bolter, J. D. (1991). Writing space: The computer, hypertext, and the history of writing. L. Erlbaum Associates.
Brown et al. (2020). 2020 EDUCAUSE Horizon Report: Teaching and learning edition. EDUCAUSE. Note pp. 17–22.
Duin, A. H., Armfield, D., & Pedersen, I. (2019). Human-centered content design in augmented reality. In G. Getto, N. Franklin, S. Ruszkiewicz, & J. Labriola (Eds.), Context is everything: Teaching content strategy (pp. 89–116). ATTW Book Series in Technical and Professional Communication. Routledge [available on Canvas].
Flusser, V. (2011). Does writing have a future? University of Minnesota Press.
Godin, B. (2019). The invention of technological innovation: Languages, discourses and ideology in historical perspective. Edward Elgar Publishing.
O'Connor, T. (2020). Emergent properties. Stanford Encyclopedia of Philosophy.
Pedersen, I., & Iliadis, A. (Eds.). (2020). Embodied computing: Wearables, implantables, embeddables, ingestibles. MIT Press.
Potts, J. (Ed.). (2014). The future of writing. Palgrave Macmillan [section available on Canvas].
Smith-Doerr, L., Zilberstein, S., Wilkerson, T., Roberts, S., Renski, H., Green, V., & Branch, E. H. (2019). HTF (the future of work at human-technology frontiers): Understanding emerging technologies, racial equity, and the future of work. National Science Foundation Workshop Report.
Stommel, J., Friend, C., & Morris, S. M. (Eds.). (2020). Critical digital pedagogy: A collection.
—Ch 2 by Rheingold, H. Technology 101: What do we need to know about the future we're creating?
Vinson, T. (2020). Setting intentions: Considering racial justice implications of facial recognition technology. MA thesis, Georgetown University.

Assignment: Collaborative analysis of a collection

Module Two—Weeks 3–5—Collaboration/Digital Literacy

How might I collaborate with nonhuman agents? How might we position collaboration with nonhuman agents as a means to incite change—real rhetorical and material change—across our classrooms, industries, and communities?


What constructive social action might be deployed through and with nonhuman agents? Share a narrative(s) of your human–machine coexistence/collaboration.

Abandon nostalgic notions of solo proprietary authorship. Embrace writing as dialogic, socio-technological construction of knowledge. The core guiding principle is collaboration as one works with human and nonhuman collaborators. Focus on enabling constructive, collaborative social action to foster writing futures that address grand challenges.

Collaborative
  Social: How will writers (students/colleagues) collaborate with nonhuman agents? Socio-technological construction of knowledge; Technological embodiment; Nonhuman collaborators; Dialogic collaboration
  Literacy: What literacies will writers need to enable constructive, collaborative work with nonhuman agents? Digital literacy capabilities
  Civic engagement: What civic challenges demand collaborative, constructive social action through and with nonhuman agents? Risks and benefits of machines as teammates; Identifying and instilling civic dimensions across work, assignments, and tools

Synchronous meeting via Zoom
Aleksandra Przegalinska, Associate Professor, MIT and Kozminski University (https://www.linkedin.com/in/aleksandra-przegalinska-5b17125/)
Video: How I Fell in Love with AI (13 min.), 2020
Video (TEDx talk): Sympathy for the Future (11 min.), 2016

Required readings:
Boyle, C. (2016). Writing and rhetoric and/as posthuman practice. College English, 78, 532–554.
Ciechanowski, L., Przegalinska, A., & Wegner, K. (2018). The necessity of new paradigms in measuring human-chatbot interaction. In M. Hoffman (Ed.), Advances in cross-cultural decision making (pp. 205–214).
Ciechanowski, L., Przegalinska, A., Magnuski, M., & Gloor, P. (2019). In the shades of the uncanny valley: An experimental study of human-chatbot interaction. Future Generation Computer Systems, 92, 539–548.
Cizek, K., Uricchio, W., & Wolozin, S. (2019). Part 6: Media co-creation with non-human systems. MIT Press.
Duin, A. H., & Pedersen, I. (2021). Collaborative writing futures. In Writing futures: Collaborative, algorithmic, autonomous. Springer Book Series, Studies in Computational Intelligence.
Duin, A. H., Tham, J., & Pedersen, I. (2021). The rhetoric, science, and technology of 21st century collaboration. In M. Klein (Ed.), Effective teaching of technical communication: Theory, practice and application in the workplace. ATTW Book Series in Technical and Professional Communication. Routledge.
Gourlay, L. (2015). Posthuman texts: Nonhuman actors, mediators and the digital university. Social Semiotics, 25(4), 484–500.
Przegalinska, A., Ciechanowski, L., Stroz, A., Gloor, P., & Mazurek, G. (2019). In bot we trust: A new methodology of chatbot performance measures. Business Horizons, 62, 785–797.


Seeber, I., et al. (2020). Machines as teammates: A research agenda on AI in team collaboration. Information & Management, 57, 1–22.
Tsvetkova, M., et al. (2017). Understanding human-machine networks: A cross-disciplinary survey. ACM Computing Surveys, 50(1). https://dl.acm.org/doi/10.1145/3039868.

Additional readings to be drawn from the following:
Agboka, G. Y., & Matveeva, N. (Eds.). (2018). Citizenship and advocacy in technical communication: Scholarly and pedagogical perspectives. Routledge [selected chapters].
Alexander, J. (2008). Literacy, sexuality, pedagogy: Theory and practice for composition studies. Utah State University Press.
Duffy, C. (2019). The future of writing. Publishers Weekly.
Jemielniak, D., & Przegalinska, A. (2020). Collaborative society. MIT Press.
Jones, N. N., Moore, K. R., & Walton, R. (2016). Disrupting the past to disrupt the future: An antenarrative of technical communication. Technical Communication Quarterly, 25(4), 211–229.
Liu, Y. (2019). Designing alternative narratives of human-machine coexistence: An exploration through five machines.
Moore, K. R., & Richards, D. P. (2018). Posthuman praxis in technical communication. Routledge. Note Ch 6, Read, S. Writing down the machine: Enacting Latourian ethnography to trace how a supercomputer circulates the halls of Washington, DC as a report.
Rude, C. (2009). Mapping the research questions in technical communication. Journal of Business and Technical Communication, 23(2), 174–201.
Sowa, K., & Przegalinska, A. (2020). Digital coworker: Human-AI collaboration in work environment, on the example of virtual assistants for management professions. In A. Przegalinska, F. Grippa, & P. A. Gloor (Eds.), Digital transformation of collaboration: Proceedings of the 9th International COINs Conference (pp. 179–201). Springer.
St. Amant, K., & Sapienza, F. (Eds.). (2011). Culture, communication, and cyberspace: Rethinking technical communication for international online environments. Baywood Publishing.
Selber, S., & Johnson-Eilola, J. (Eds.). (2013). Solving problems in technical communication. University of Chicago Press.
Spilka, R. (Ed.). (2009). Digital literacy for technical communication. Routledge.
Spinuzzi, C. (2015). All edge: Inside the new workplace networks. University of Chicago Press.
Sun, H. (2006). The triumph of users. Technical Communication Quarterly, 15(4), 457–481.
Wilks, Y. (Ed.). (2010). Close engagements with artificial companions: Key social, psychological, ethical and design issues. John Benjamins Publishing.

Assignment: Creation of a Fabric collection (personal publication)


Module Three—Weeks 6–8—Algorithms/Artificial Intelligence/Ethics

Attend to algorithms and artificial intelligence to augment, create, and navigate volumes of information. Cultivate ambient intelligence to coordinate collection of data as machine intelligence complements human agency to contribute to learning.

Algorithmic
  Social: How will algorithms & AI inform writing? Ambient intelligence; Platform studies; Demographics; Algorithmic AI; Machine learning; Virtual assistants
  Literacy: What AI literacies should we cultivate for algorithmic writing futures? Academic analytics; Learning management systems; AI literacy
  Civic engagement: How might AI help to recognize, ameliorate, and address global civic challenges? Harvard's Principled Artificial Intelligence project as a heuristic; Writing for ethically aligned design

Synchronous meeting via Zoom
Illah Reza Nourbakhsh, Professor, Carnegie Mellon University, & Jennifer Keating, Writing in the Disciplines Specialist, University of Pittsburgh, coauthors, AI and Humanity
Heidi McKee and James Porter, Professors, Miami University, coauthors, Professional Communication and Network Interaction: A Rhetorical and Ethical Approach

Required readings:
Dixon-Roman, E., Nichols, T. P., & Nyame-Mensah, A. (2020). The racializing forces of/in AI educational technologies. Learning, Media and Technology, 45(3), 236–250.
Duin, A. H., & Pedersen, I. (2021). Algorithmic writing futures. In Writing futures: Collaborative, algorithmic, autonomous. Springer Book Series, Studies in Computational Intelligence.
Duin, A. H., & Tham, J. (2020). The current state of analytics: Implications for learning management system (LMS) use in writing pedagogy. Computers and Composition.
Eatman, M. E. (2020). Unsettling vision: Seeing and feeling with machines. Computers and Composition (online).
Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems. (2019). IEEE Advancing Technology for Humanity.
Gallagher, J. (2017). Writing for algorithmic audiences. Computers and Composition, 45, 25–35.
Gallagher, J. (2020). The ethics of writing for algorithmic audiences. Computers and Composition (online).
Government trends. (2020). Deloitte Insights: A report from the Deloitte Center for Government Insights.
Harrod, J. (2020). AI literacy, or why understanding AI will help you every day. TEDxBeaconStreet.
Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15–25. Note figure 1, p. 16, stages of AI; table 1 illustrating AI applications in universities, corporations, and governments; and table 2 on the three C's (confidence, change, control).
Keating, J., & Nourbakhsh, I. (2021). Recoding relationships. In Writing futures: Collaborative, algorithmic, autonomous. Springer Book Series, Studies in Computational Intelligence.
Keating, J., & Nourbakhsh, I. (2018). Teaching artificial intelligence and humanity: Considering rapidly evolving human-machine interactions. Communications of the ACM, 61(2), 29–32.


McKee, H. A., & Porter, J. E. (2020, February 7–8). Ethics for AI writing: The importance of rhetorical context. In Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society (AIES '20). New York, NY, USA. https://doi.org/10.1145/3375627.3375811.
McKee, H. A., & Porter, J. E. (2018). The impact of AI on writing and writing instruction. Digital Rhetoric Collaborative, Sweetland Center for Writing, University of Michigan. http://www.digitalrhetoriccollaborative.org/2018/04/25/ai-on-writing/.
McKee, H. A., & Porter, J. E. (2017). Professional communication and network interaction: A rhetorical and ethical approach. Routledge.
—Ch 7: AI agents as professional communicators.
McKee, H. A., & Porter, J. E. (2021). Writing machines and rhetoric. In Writing futures: Collaborative, algorithmic, autonomous. Springer Book Series, Studies in Computational Intelligence.
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
—Introduction; The future of knowledge in the public.
West, S. M., Whittaker, M., & Crawford, K. (2019). Discriminating systems: Gender, race and power in AI. AI Now Institute.

Additional readings to be drawn from the following:

A robot wrote this entire article. Are you scared yet, human? (2020). The Guardian.
Babic, D., Chan, D. L., Evgeniou, T., & Fayard, A.-L. (2020). A better way to onboard AI. Harvard Business Review, July–August 2020 issue.
Bryson, J., & Winfield, A. (2017). Standardizing ethical design for artificial intelligence and autonomous systems. Computer, 50(5), 116–119. https://doi.org/10.1109/MC.2017.154.
Danaher, J. (2018). Toward an ethics of AI assistants. Philosophy & Technology, 31, 629–653.
Figaredo, D. D. (2020). Data-driven educational algorithms pedagogical framing. RIED. Revista Iberoamericana de Educación a Distancia, 23(2), 65–84.
Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Harvard.
Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1). https://doi.org/10.1162/99608f92.8cd550d1.
Jerelyn, A. (2019). Artificial intelligence as smart writing. https://www.guide2write.com/artificialintelligence-as-smart-writing/.
Keating, J. (2019). R.U. robot ready? Carnegie Mellon University students face the societal implications of AI. Liberal Education, 105(3–4), 42–27.
Kim, P. W. (2019). Ambient intelligence in a smart classroom for assessing students' engagement levels. Journal of Ambient Intelligence and Humanized Computing, 10, 3847–3852.
Marachi, R., & Quill, L. (2020). The case of Canvas: Longitudinal datafication through learning management systems. Teaching in Higher Education, 25(4), 418–434.
Mitchell, M. (2019). Artificial intelligence: A guide for thinking humans. Farrar, Straus and Giroux.
Montebello, M. (2019). The ambient intelligent classroom: Beyond the indispensable educator. Springer.
Nourbakhsh, I. R., & Keating, J. (2019). AI and humanity. MIT Press.
Pedersen, I. (2020). Will the body become a platform? Body networks, datafied bodies, and AI futures. In I. Pedersen & A. Iliadis (Eds.), Embodied computing: Wearables, implantables, embeddables, ingestibles (pp. 21–48). MIT Press.
Popenici, S. A. D., & Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning.
Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human–Computer Interaction, 36(6), 495–504.
Stommel, J., Friend, C., & Morris, S. M. (Eds.). (2020). Critical digital pedagogy: A collection. —Ch 6: Swauger, S. Our bodies encoded: Algorithmic test proctoring in higher education.

Sundvall, S. (2019). Artificial intelligence. In H. Paul (Ed.), Critical terms in futures studies (pp. 29–34). Springer.
Vincent, J. (2019). OpenAI has published the text-generating AI it said was too dangerous to share.
Walorska, A. M. (2020). The algorithmic society. In D. Feldner (Ed.), Redesigning organizations (pp. 149–160). Springer.
Willson, M. (2020). Questioning algorithms and agency: Facial biometrics in algorithmic contexts. In M. Filimowicz & V. Tzankova (Eds.), Reimagining communication: Mediation (pp. 252–266). Routledge.

Assignment: Articulation of a framework for investigating and planning for writing futures.

Module Four—Weeks 9–11—Autonomous Agents/Ethics

Enable and engage with autonomous agents and intelligent systems including AI virtual assistants, deploying robots to the front lines of mediated learning, building bonds and trust with nonhuman collaborators, and learning from and with them. Evolve and regenerate writing futures through new forms of collective intelligence.

Autonomous

Social: How will writers work with autonomous agents? Social robots; Cognitive assemblages; Digital assistant platforms; Cloud-based AI; Chatbots; Brain–computer interaction; Natural Language Generation

Literacy: How will writers contextualize future uses of digital-assistant platforms throughout writing? How will literacy practices change with use of autonomous agents? Literacy for teaching AI assistants and learning from them

Civic engagement: What affordances of autonomous agents lend themselves to more ethical, personal, professional, global, and pedagogical deployments? Nondiscrimination; AI transparency; Values and characteristics

Synchronous meeting via Zoom:
Richard Pak, Professor and Director, Human Factors Institute, Clemson University; coeditor, Living with Robots.
https://www.clemson.edu/centers-institutes/hfi/index.html
https://blogs.clemson.edu/catlab/people/richard-pak/

Required readings:

de Visser, E. J., Pak, R., & Shaw, T. H. (2018). From 'automation' to 'autonomy': The importance of trust repair in human-machine interaction. Ergonomics, 61(10), 1409–1427.
Duin, A. H., & Pedersen, I. (2021). Autonomous writing futures. In Writing futures: Collaborative, algorithmic, autonomous. Springer Book Series, Studies in Computational Intelligence.
Pak, R., de Visser, E. J., & Rovira, E. (Eds.). (2020). Living with robots: Emerging issues on the psychological and social implications of robotics. Academic Press (Elsevier). —Ch 7: Lum, H. C. The role of consumer robots in our everyday lives. —Ch 9: McNeese, M. D., & McNeese, N. J. Humans interacting with intelligent machines: At the crossroads of symbiotic teamwork.

Gill, S. P. (2008). Socio-ethics of interaction with intelligent interactive technologies. AI & Society, 22, 283–300.
Siemens, G. (2020). The post-learning era in higher education: Human + machine. Educause.

Additional readings to be drawn from the following:

Alexander, J., & Rhodes, J. (Eds.). (2018). The Routledge handbook of digital writing and rhetoric. Routledge. —Ch 23: Hart-Davidson, W. Writing with robots and other curiosities of the age of machine rhetorics.
Aoun, J. (2017). Robot-proof: Higher education in the age of artificial intelligence. MIT Press.
Matyus, A. (2020). Need a friend? Samsung's new humanoid chatbots known as Neons can show emotions. https://www.digitaltrends.com/news/samsung-humanoid-chatbots-ces-2020-neons/.
Seeber, I., Bittner, E., Briggs, R. O., de Vreede, T., de Vreede, G.-J., Elkins, A., Maier, R., Merz, A. B., Oeste-Reiss, S., Randrup, N., Schwabe, G., & Söllner, M. (2020). Machines as teammates: A research agenda on AI in team collaboration. Information & Management, 57. https://doi.org/10.1016/j.im.2019.103174.
Selwyn, N. (2019). Should robots replace teachers? Polity Press.
Wilks, Y. (Ed.). (2010). Close engagements with artificial companions: Key social, psychological, ethical and design issues. John Benjamins Publishing Company.

Assignment: Deployment and study of the student's Writing Futures framework.

Module Five—Weeks 12–13—Investigations

Synchronous meeting for presentation of student research (deployment and study of each Writing Futures framework).

Duin, A. H., & Pedersen, I. (2021). Investigating writing futures. In Writing futures: Collaborative, algorithmic, autonomous. Springer Book Series, Studies in Computational Intelligence.

Module Six—Weeks 14–15—Integrating Visions for the Future

Final course forum discussion and 1:1 meetings.

Assignment: Final report of investigation of the student's Writing Futures framework.

Appendix B: Complete List of General Keywords in Writing Futures Collection with Counts

A
Ableism (1), Academia (1), Academics (1), Accessibility (4), Accuracy (1), Activism (1), Adventure (1), Advertising (2), Age-in-Place (1), Agency (4), Aging (5), AI Ethics (1), AI Literacy (59), American Sign Language (ASL) (1), Amyotrophic Lateral Sclerosis (ALS) (1), Anxiety (1), Art (9), Artists (3), Athletes (1), Athletics (1), Attention Span (1), Automation (71), Autonomy (1)

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. A. H. Duin and I. Pedersen, Writing Futures: Collaborative, Algorithmic, Autonomous, Studies in Computational Intelligence 969, https://doi.org/10.1007/978-3-030-70928-0

B
Babies (1), Behavior (2), Bias (7), Blind and Low Vision (2), Brain Function (2), Business (36)

C
Cancer (1), Caregivers (1), Children (10), Chronic Disease (1), City (1), Civic engagement (19), Classrooms (10), Climate Change (1), Collaboration (103), Colors (1), Combat (1), Communication (145), Community (3), Companions (8), Composition (34), Computer Literacy (1), Conferences (1), Consent (2), Consumer Reports (1), Contact tracing (1), Controversy (1), COVID-19 (13), Creativity (27), Crimes (1), Critical Thinking (1), Culture (1), Customer Service (9), Customizable (6)

D
Deaf and Hard of Hearing (2), Decentralization (1), Demonstration (12), Deoxyribonucleic Acid (DNA) (1), Design (13), Diabetes (1), Digital Humanities (1), Digital Literacy (41), Dignity (1), Disabilities (3), Discontinued Product (1), Discrimination (5), Disinformation (1), Distraction Management (1), Diversity (1), Doctors (2), Drawing (1)

E
Economics (1), Education (36), Efficiency (6), Elderly (2), Embodied Learning (1), Emergency (3), Emotions (13), Empathy (3), Entertainment (8), Environment (5), Epilepsy (1), Ethics (52), Everyday Life (6), Exercise (2)

F
Facial Expressions (1), Fairness (1), Fake News (4), Family (4), Fashion (3), Fear (2), Film (1), Fitness (4), Flexible (1), Food (3), Freedom (1), Friends (2), Future Scoping (1), Futurism (19)

G
Games (7), Gender (1), Geneva (1), Gestures (3), Global Economy (1), Governance (2), Government (12)

H
Hate Speech (1), Health (14), Healthcare (5), Heart Rate (1), History (4), Home (6), Hospitality (3), Hospitals (1), Human Resources Management (HRM) (2), Human Rights (5), Humor (2), Hype (1)

I
Identity (2), Imagination (3), Immersive (1), Industry (6), Information (11), Information Literacy (1), Innovation (7), Intelligence (2), Interactive (5), Intercultural Communication (2), International Communication (1), International Cooperation (1), Intimacy (1)

J
Journalism (18)

K
Kids (1)

L
Labor (4), Language (12), Law (9), Learning (16), Lifestyle (22), Linguistics (1), Literacy (4), Literature (2), Lockdown (1), Logistics (1), Loneliness (2), Long-Term Care (1), Love (1)

M
Manufacturing (7), Marketing (4), Medical (12), Medicine (1), Meditation (1), Mental Health (2), Metaphor (1), Military (3), Mind-Reading (1), Morality (1), Museum (1), Music (11)

N
Navigation (3), Nonhuman collaboration (1)

O
Occupational Health and Safety (OH&S) (1), Office (3), Older Adults (1), Organizational Behavior (2)

P
Pandemic (8), Panopticon (1), Parody (2), Patents (2), Patients (2), Payment (1), Pedagogy (6), Performance (2), Personalization (8), Philosophy (3), Photography (3), Physical Space (3), Poetry (1), Policy (6), Politics (4), Posthumanism (1), Power (1), Pre-Release (9), Predictions (3), Pregnancy (1), Principles (3), Privacy (26), Productivity (12), Professional Communication (68), Propaganda (2), Protection (1), Prototypes (7), Proximity (1), Psychology (2), Public Health (5)

Q
Quantified Self (2), Quantified Work (2)

R
Racism (2), Regulation (1), Relationships (1), Reports (1), Representation (1), Research (30), Restaurants (2), Retail (7), Rings (5)

S
Safety (8), Schedules (1), School (1), Science (3), Science Communication (1), Science Fiction (2), Security (17), Seniors (2), Senses (2), Servant (1), Shopping (2), Sleep (1), Social Distancing (7), Social Interaction (33), Social Issues (4), Social Media (4), Society (1), Sociology (1), Sociometry (1), Sounds (3), Space Travel (1), Speech (1), Sports (3), Statistics (1), Storytelling (9), Students (2), Study (1), Style (1), Surveillance (8), Survey (2)

T
Taylorism (1), Teamwork (13), Technical Communication (42), Techno-Capitalism (1), Telepathic sublime (1), Therapeutic (3), Touch (1), Training (1), Transhumanism (1), Translation (5), Transportation (3), Travel (5), Trial (1), Trust (2)

U
Uncanny Valley (1), Unintended consequences (1), United Nations (1), United States (2), Urban (5)

V
Values (2), Vision (2), Visual Literacy (1)

W
Weapons (2), Wellness (1), Women (1), Work (59), Workplace Sociality (1)

Y
Yoga (1)

Z
Zoom Bombing (1)