Codes of Ethics and Ethical Guidelines: Emerging Technologies, Changing Fields (The International Library of Ethics, Law and Technology, 23). ISBN 3030862003, 9783030862008.

This book investigates how ethics generally precedes legal regulation, and looks at how changes in codes of ethics represent responses to emerging technologies and changing fields.


English · 267 pages · 2022


Table of contents:
Acknowledgments
Contents
About the Editors
Chapter 1: An Introduction to the Societal Roles of Ethics Codes
1.1 Roles of Ethics Codes and Ethical Guidelines and Why They Are Important
1.2 Ethics Codes Collection: An Introduction
1.3 Ethics Codes as Sources for Research into the Professions
1.4 Ethics Codes and Emerging Technologies
1.5 About This Collected Volume
References
Part I: Past, Present, and Future: The Role of Ethics Codes and Guidelines in Changing Fields
References
Chapter 2: Research Ethics Guidelines for the Engineering Sciences and Computer and Information Sciences
2.1 Introduction
2.2 The Need for Research Ethics for Nonmedical Fields
2.3 Guidelines for the Engineering Sciences
2.4 Guidelines for the Computer and Information Sciences
2.5 Conclusion
References
Chapter 3: Codes of Engineering Ethics: Recent Trends
3.1 Introduction
3.2 Clarification
3.3 Globalization
3.4 Sustainability
3.5 Short Versus Long
3.6 Conclusion
References
Chapter 4: Informed Consent in Digital Data Management
4.1 Introduction
4.2 Role of Ethics Codes and Guidelines in Process
4.3 Ethics Codes and Guidelines in Digital Data Management
4.4 Models of Practice: Informed Consent
4.5 Study Methodology
4.6 Overview – Appearance of Informed Consent-Related Standards in Ethics Codes and Guidelines
4.7 A Closer View on Informed Consent-Related Standards in Ethics Codes and Guidelines
4.7.1 Gatekeeper Function
4.7.2 Transparency
4.7.3 Consumer Control
4.7.4 Type and Amount of Data Collected
4.7.5 Data Sharing
4.8 Discussion
4.9 Conclusion
References
Chapter 5: Codes of Ethics and Research Integrity
5.1 Introduction
5.2 Research Integrity vs Research Ethics
5.3 Research Integrity and Codes of Ethics
5.4 Changes in Research Integrity Concepts
5.5 Recommendations to Professional Organizations
5.6 Conclusion
References
Part II: Ethics Codes, Emerging Technologies, and International Viewpoints
References
Chapter 6: The Significance of Professional Codes and Ethical Guidelines in Difficult Clinical Situations
6.1 Background and Introduction
6.2 Normative Framework
6.2.1 Law
6.2.2 Soft Law and Guidelines
6.2.3 Codes
6.3 Application to Difficult Clinical Situations
6.3.1 The Example of CPR: Regulation and Orientation through Law, Ethical and Medical Guidelines
6.3.2 The Development from a Clinical Ethics Policy to a Hospital Directive
6.4 Discussion and Outlook
6.5 Points to Consider
References
Chapter 7: Global AI Ethics Documents: What They Reveal About Motivations, Practices, and Policies
7.1 Introduction
7.2 The New AI Spring and the Codification of AI Ethics
7.3 A Review of Research Studies on AI Ethics Documents
7.3.1 Consensus on Ethical Topics
7.3.2 Representation and Power
7.3.3 Principles to Practice
7.4 Building on the AI Ethics Literature
7.5 A Typology of Motivations
7.5.1 Motivations One and Two: Goals
7.5.2 Motivations Three and Four: Strategies
7.5.3 Motivations Five and Six: Signaling
7.6 Efficacy
7.7 Lessons from CRISPR
7.8 Conclusion
References
Chapter 8: Addressing Intelligent Systems and Ethical Design in the IEEE Code of Ethics
8.1 Introduction
8.2 Background and Motivation for Changes
8.3 Process
8.4 Specific Changes
8.5 Conclusion
References
Chapter 9: Technocracy, Public Optimism, and National Progress: Constructing Ethical Guidelines for Responsible Nano Research and Development in China
9.1 Introduction
9.2 Technocracy
9.3 Public Optimism
9.4 National Progress
9.5 Conclusion
References
Chapter 10: The Historical Process and Challenges of Medical Ethics Codes in China
10.1 Introduction
10.2 Ethical Guidelines for Human Embryonic Stem Cell Research
10.2.1 Debate on Human Embryonic Stem Cell Research (Background of Guidelines)
10.2.2 Development of Ethical Guidelines for Human Embryonic Stem Cell Research
10.2.3 Problems with and Suggestions for the Ethical Guidelines for Human Embryonic Stem-Cell Research (Guidelines)
10.3 Ethical Review of Biomedical Research Involving Human Beings
10.3.1 Background of Measures (Trial Implementation)
10.3.2 Problems with the Measures (Trial Implementation) and Amendment to It
10.3.3 Characteristics of and Prospects for the Measures
10.4 Conclusion
References
Part III: Introduction: New Approaches to Ethics Codes: Changing Purposes, Differing Views
Chapter 11: Mentions of Ethics Codes in Social Media: A Twitter Analysis
11.1 Introduction
11.2 Methods: Data Collection and Processing
11.3 Results
11.3.1 Tweets Collected
11.3.2 User Characterization
11.3.2.1 Most Influential Users
11.3.2.2 Most Active Users
11.3.2.3 Most Mentioned Users
11.3.3 Spikes of Tweets Containing “Code of Ethics”
11.4 Discussion
11.5 Conclusion
References
Chapter 12: The Technology’s Fine; It’s the Code of Professional Ethics That Needs Changing
12.1 Introduction
12.2 Sexbots
12.3 Codes of Professional Ethics’ Development and Challenges
12.4 Moral Psychology: How Is Morality Possible?
12.5 A Pragmatic Professional Code of Ethics
12.6 Conclusion
References
Chapter 13: On Leaving and Receiving Traces: Thoughts on an Un-professional Code of Ethics
13.1 Introduction
13.2 Echo Canyon
13.3 Trailhead: South Kaibab
13.4 The Chisos Mountains
13.5 The Emma Dean
13.6 The Garden
13.7 Conclusion
References
Chapter 14: The Responsibility of Researchers and Engineers: Codes of Ethics for Emerging Technologies
14.1 Codes of Ethics: Solution to What Problem?
14.2 Responsibility in the Governance of Technology
14.3 The Empirical Dimension of Responsibility in Engineering
14.4 The Specific Responsibility of Engineers and Researchers
14.5 Codes of Ethics
14.6 Codes of Ethics in Context
14.6.1 Codes of Ethics Versus Regulation
14.6.2 Codes of Ethics in Global Context
References


The International Library of Ethics, Law and Technology 23

Kelly Laas Michael Davis Elisabeth Hildt   Editors

Codes of Ethics and Ethical Guidelines

Emerging Technologies, Changing Fields

The International Library of Ethics, Law and Technology Volume 23

Series Editors
Bert Gordijn, Ethics Institute, Dublin City University, Dublin, Ireland
Sabine Roeser, Philosophy Department, Delft University of Technology, Delft, The Netherlands

Editorial Board
Dieter Birnbacher, Institute of Philosophy, Heinrich-Heine-Universität, Düsseldorf, Nordrhein-Westfalen, Germany
Roger Brownsword, Law, Kings College London, London, UK
Ruth Chadwick, ESRC Centre for Economic and Social Aspects of Genomics, Cardiff, UK
Paul Stephen Dempsey, University of Montreal, Institute of Air & Space Law, Montreal, Canada
Michael Froomkin, Miami Law, University of Miami, Coral Gables, Florida, USA
Serge Gutwirth, Campus Etterbeek, Vrije Universiteit Brussel, Elsene, Belgium
Henk Ten Have, Center for Healthcare Ethics, Duquesne University, Pittsburgh, Pennsylvania, USA
Søren Holm, Centre for Social Ethics and Policy, The University of Manchester, Manchester, UK
George Khushf, Department of Philosophy, University of South Carolina, Columbia, South Carolina, USA
Justice Michael Kirby, High Court of Australia, Kingston, Australia
Bartha Knoppers, Université de Montréal, Montreal, Québec, Canada
David Krieger, The Waging Peace Foundation, Santa Barbara, California, USA
Graeme Laurie, AHRC Centre for Intellectual Property and Technology Law, Edinburgh, UK
René Oosterlinck, European Space Agency, Paris, France
John Weckert, Charles Sturt University, North Wagga Wagga, Australia

Technologies are developing faster and their impact is bigger than ever before. Synergies emerge between formerly independent technologies that trigger accelerated and unpredicted effects. Alongside these technological advances, new ethical ideas and powerful moral ideologies have appeared which force us to consider the application of these emerging technologies. In attempting to navigate utopian and dystopian visions of the future, it becomes clear that technological progress and its moral quandaries call for new policies and legislative responses. Against this backdrop, this book series from Springer provides a forum for interdisciplinary discussion and normative analysis of emerging technologies that are likely to have a significant impact on the environment, society and/or humanity. These will include, but are by no means limited to, nanotechnology, neurotechnology, information technology, biotechnology, weapons and security technology, energy technology, and space-based technologies. More information about this series at https://link.springer.com/bookseries/7761

Kelly Laas • Michael Davis • Elisabeth Hildt Editors

Codes of Ethics and Ethical Guidelines Emerging Technologies, Changing Fields

Editors Kelly Laas Center for the Study of Ethics in the Professions Illinois Institute of Technology Chicago, Illinois, USA

Michael Davis Center for the Study of Ethics in the Professions Illinois Institute of Technology Chicago, Illinois, USA

Elisabeth Hildt Center for the Study of Ethics in the Professions Illinois Institute of Technology Chicago, Illinois, USA

ISSN 1875-0044    ISSN 1875-0036 (electronic)
The International Library of Ethics, Law and Technology
ISBN 978-3-030-86200-8    ISBN 978-3-030-86201-5 (eBook)
https://doi.org/10.1007/978-3-030-86201-5

© Springer Nature Switzerland AG 2022

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Acknowledgments

This collected volume has been made possible through a generous grant from the John D. and Catherine T. MacArthur Foundation which funded the enhancement of the Ethics Codes Collection of the Center for the Study of Ethics in the Professions at the Illinois Institute of Technology and enabled new research on the current and future role of ethics codes within society.


Contents

1 An Introduction to the Societal Roles of Ethics Codes (Kelly Laas, Michael Davis, and Elisabeth Hildt)

Part I Past, Present, and Future: The Role of Ethics Codes and Guidelines in Changing Fields

2 Research Ethics Guidelines for the Engineering Sciences and Computer and Information Sciences (Philip Brey)
3 Codes of Engineering Ethics: Recent Trends (Michael Davis)
4 Informed Consent in Digital Data Management (Elisabeth Hildt and Kelly Laas)
5 Codes of Ethics and Research Integrity (Stjepan Ljudevit Marušić and Ana Marušić)

Part II Ethics Codes, Emerging Technologies, and International Viewpoints

6 The Significance of Professional Codes and Ethical Guidelines in Difficult Clinical Situations (Charlotte Wetterauer, Jan Schürmann, and Stella Reiter-Theil)
7 Global AI Ethics Documents: What They Reveal About Motivations, Practices, and Policies (Daniel S. Schiff, Kelly Laas, Justin B. Biddle, and Jason Borenstein)
8 Addressing Intelligent Systems and Ethical Design in the IEEE Code of Ethics (Greg Adamson and Joseph Herkert)
9 Technocracy, Public Optimism, and National Progress: Constructing Ethical Guidelines for Responsible Nano Research and Development in China (Qin Zhu)
10 The Historical Process and Challenges of Medical Ethics Codes in China (Hengli Zhang, Siyu Sha, and Yuying Gao)

Part III Introduction: New Approaches to Ethics Codes: Changing Purposes, Differing Views

11 Mentions of Ethics Codes in Social Media: A Twitter Analysis (Kelly Laas, Elisabeth Hildt, and Ying Wu)
12 The Technology's Fine; It's the Code of Professional Ethics That Needs Changing (Dennis Cooley)
13 On Leaving and Receiving Traces: Thoughts on an Un-professional Code of Ethics (Adam Briggle)
14 The Responsibility of Researchers and Engineers: Codes of Ethics for Emerging Technologies (Armin Grunwald)

About the Editors

Michael Davis  Senior Fellow, Center for the Study of Ethics in the Professions and Emeritus Professor of Philosophy, Illinois Institute of Technology, USA.  He has published 15 books—with another, Engineering as a Global Profession, in manuscript stage. He has also published nearly 250 articles and chapters, including the recent: “The Legality of the Nuremberg Trials: A Brief Lockean Memoir,” International Journal of Applied Philosophy (2018); “Temporal Limits on What Engineers Can Plan,” Science and Engineering Ethics (2019); and “Professionalism among Chinese Engineers: An Empirical Study” [with Lina Wei and Hangqing Cong], Science and Engineering Ethics (2019). Elisabeth  Hildt  Professor of Philosophy and Director, Center for the Study of Ethics in the Professions, Illinois Institute of Technology, USA; [email protected]. Her research focus is on bioethics, ethics of technology, research ethics, and science and technology studies. Research interests include research ethics, philosophical and ethical aspects of neuroscience, and artificial intelligence. Kelly Laas  Librarian and Ethics Instructor, Center for the Study of Ethics in the Professions, Illinois Institute of Technology, USA; [email protected]. Her research interests include the history and use of codes of ethics in professional fields, ethics education in STEM, research ethics, and integrating ethics into technical curricula.


Chapter 1

An Introduction to the Societal Roles of Ethics Codes Kelly Laas, Michael Davis, and Elisabeth Hildt

Abstract  In this collected volume, we are interested in the roles of ethics codes and ethical guidelines in professions in which research and innovation play an important role and where emerging technologies bring about considerable, sometimes fast-paced change.

Keywords  Codes of ethics · Professional ethics · Responsible conduct of research · Emerging technologies

K. Laas (*) · M. Davis · E. Hildt
Illinois Institute of Technology, Chicago, IL, USA
e-mail: [email protected]

© Springer Nature Switzerland AG 2022
K. Laas et al. (eds.), Codes of Ethics and Ethical Guidelines, The International Library of Ethics, Law and Technology 23, https://doi.org/10.1007/978-3-030-86201-5_1

In this collected volume, we are interested in the roles of ethics codes and ethical guidelines in professions in which research and innovation play an important role and where emerging technologies bring about considerable, sometimes fast-paced change. These can be broad technological trends, for example, the expanding relevance of artificial intelligence in most of life, or very specific contexts such as guidelines for clinical care related to cardiopulmonary resuscitation. In all these contexts, innovations in science and technology are central, as are questions about how to deal with these innovations and developments, both at a professional and at a societal level. This volume explores three principal areas surrounding the roles of ethics codes and guidelines in modern professional and public life. The first section of this volume discusses the role of ethics codes and guidelines in changing disciplines; the second section looks at how codes shift in response to and help shape the position of emerging technologies in societies around the world. The third and concluding section considers the current and future role of ethics codes. Before we begin to explore how and why codes of ethics are an important way to study society, technological developments, and the changing role of professionals,


it is essential to gain a better understanding of why codes of ethics exist, how they are developed, and the different uses they have.

1.1 Roles of Ethics Codes and Ethical Guidelines and Why They Are Important

A code of ethics is an authoritative formulation of the (morally permissible) standards governing the conduct of members of a group, just because they are members of that group. A group consists of two or more moral agents. The authority of a code may derive from at least one of several sources: consent, custom, tradition, convenience, law, fairness, and so on; but to be a code of ethics, at least one source of its authority must be moral.

Codes of ethics have at least six uses. First, and most important, a code of ethics can document, declare, or establish special standards of conduct where experience has shown that common sense, industrial tradition, or occupational custom is no longer adequate. Codes of ethics can change practice for the better. Second, a code of ethics, being authoritative, can help those new to the practice learn how to act. Codes can teach, much as dictionaries can teach their readers what words mean. Third, a code can remind those with even considerable experience of what they might otherwise forget. Codes have a mnemonic function. Fourth, a code can provide a framework for settling disputes, even disputes among those with considerable experience. Fifth, a code can help those outside the group ("the public") understand what may reasonably be expected of those in the group. Sixth, a code of ethics can justify discipline or legal liability. So, for example, once a profession has a formal code of ethics, courts can appeal to it when deciding what reasonable care in that profession is. The code's higher (more demanding) standard may, and should, replace the standard common sense would otherwise set for the group.

Attempts have been made to distinguish between (a) short, general, or uncontroversial codes ("code of ethics," "statement of values," or the like); and (b) longer, more practice-oriented, more detailed, or more controversial codes ("code of conduct," "guidelines," "rules of practice," or the like). While some such distinction may sometimes be useful in practice, it is hard to defend in theory. A "code of conduct" is as much a special standard as a "code of ethics" is, except where the "code of ethics," boiled down to a mere restatement of morality, is just "a moral code." "Codes of conduct" are also generally as morally binding as "codes of ethics." A code of ethics should be as long as it needs to be to do what it is supposed to do; the same holds for a "code of conduct."

Ethical guidelines, such as the research guidelines discussed by Philip Brey in Chap. 2, often apply to specific practices and groups of collaborators. In some cases, these ethics guidelines are set by professional associations; in others, these ethical guidelines are adopted by governmental organizations as a form of regulation; practitioners not following these guidelines can be subject to some form of enforcement,


such as the withdrawal of research funds or fines. Chap. 9 provides a discussion of the development of ethical guidelines for the responsible use of nanotechnologies in research and development and reflects on how these guidelines influence not only practitioners but also public views of these emerging technologies. Guidelines do not apply only to research and scholarly publication. For example, the Association of National Advertisers published its "Guidelines for Ethical Business Practice" in 2020, which include specific, updated guidance for digital marketing and mobile marketing practices (ANA 2020). Guidelines like these, discussed in Chap. 4, "Informed Consent in Digital Data Management," can help practitioners navigate current regulation; they are also sometimes adopted by industry associations to pre-empt the passage of more restrictive or less informed regulations.

1.2 Ethics Codes Collection: An Introduction

The Ethics Codes Collection of the Center for the Study of Ethics in the Professions at the Illinois Institute of Technology (http://ethicscodescollection.org/) is a unique resource, comprising a curated collection of over 3000 ethics codes and guidelines (except where they are explicitly distinguished, any mention of "ethics codes" in this introduction should be understood to refer to both codes and guidelines) from over 1750 organizations. The ethics codes in the collection span over 220 years and include codes and guidelines from over 100 different disciplines and industry sectors. The collection serves as a dynamic global resource for informing ethical decision making in professional, entrepreneurial, scientific, technological, and other fields. It also serves to inform critical research into the advancement of ethical practices in a rapidly changing world. The Ethics Codes Collection consists of both formal codes and sets of voluntary guidelines. The latter are distinct in character and present interesting ethical questions in their own right.

The collection began with the founding of the Center for the Study of Ethics in the Professions at the Illinois Institute of Technology in 1976 and has continually grown over the past 45 years. In 1996, the Ethics Center received a grant from the National Science Foundation to put its paper collection of codes and guidelines online. In 2016, the Ethics Center received a generous grant from the John D. and Catherine T. MacArthur Foundation to enhance the Ethics Codes Collection. The grant provided the resources to embark on an extensive improvement of the digital Ethics Codes Collection and enabled new research on the current and future roles of ethics codes in professional, business, and technological innovation. This collected volume is one result of the MacArthur grant.

As interest in ethics codes continues to grow, so does the Ethics Codes Collection. The collection attempts to collect codes from professional associations, industry groups, government agencies, and businesses over a large range of time, allowing scholars and practitioners to follow the development and growth of ethics reflection


in different professions and fields. The collection also seeks to document the development of new professions and fields, such as the rise of big data and artificial intelligence, as well as interest in the different ethical questions raised in these new contexts and growing technology use.

Many of the articles in this volume serve as examples of the scholarship the Ethics Codes Collection supports. This includes in-depth analysis of the changes that codes of ethics undergo over time – either through the lens of one professional association or over an entire profession, as Michael Davis does in Chap. 3 for engineering. Scholars can also chart growing interest in ethical topics across multiple fields, as Stjepan Ljudevit Marušić and Ana Marušić do in Chap. 5 in their analysis of mentions of research integrity in professional codes. The Ethics Codes Collection also provides models for professionals and practitioners writing or revising their own code of ethics, such as the process, outlined by Greg Adamson and Joe Herkert in Chap. 8, of revising the IEEE code to include the ethical design of artificial intelligence and machine learning. For over 45 years, the Ethics Codes Collection has been used as a starting point for the development of new codes, and the scholars and librarians of the Center for the Study of Ethics in the Professions at the Illinois Institute of Technology have been active in supporting these efforts in everything from providing feedback on provisional drafts to serving as members of code development committees. This includes the development of the Software Engineering Code of Ethics by the Association for Computing Machinery and IEEE, the development of ethical guidelines on research integrity for Big 10 Universities, and work with many other smaller businesses and professional associations.

Though still limited in scope, the Ethics Codes Collection also seeks to collect international codes of ethics and guidelines to allow for the comparison of codes between countries. These comparisons can be extremely useful in several ways. As shown in Part II of this volume, the international comparison of ethics codes and guidelines can showcase what ethical principles and issues different societies (at the professional, governmental, or public level) value by what the ethical codes stress – be it sustainable development, individual rights, or scientific progress. As international collaboration across borders continues to increase, it is important that professions, businesses, and governments continue to learn from one another and begin to develop international best practices to allow for the safe and ethical transfer of knowledge, information, and the benefits of scientific progress.

The Ethics Codes Collection not only explores the ethical principles of different professions and organizations; the development of ethical guidelines also provides a fascinating window into the scientific and professional cultures that exist in a society. What concerns the authors of these guidelines address, whose opinion is sought in the development of the guidelines, and how guidelines are distributed and ultimately enforced provide a snapshot into the inner workings of the institution that developed the guidelines or code, and its envisioned place in society. The question of who has a seat at the table in the construction of codes of ethics and ethical guidelines is an interesting one that will come up in multiple chapters of this collection.
A code of ethics written solely by a group of professionals will likely


be very different from one written by a group of legislators or ethicists; and what happens when members of the public have a seat on the authoring committee? Students and scholars of ethics codes quickly become attuned to when a code is solely inward-facing and only addresses the concerns of members while leaving out the concerns of clients, the public, and other stakeholder groups. Outward-facing codes can also be used as a defensive measure against criticism, but provide limited ethical guidance to group members. The collection policy for the Ethics Codes Collection has been kept quite broad on purpose – it includes codes of ethics for major professional societies like the American Psychological Society and more obscure codes such as the Ethical Standards and Guidelines of the American College of Vedic Astrology. The only qualification is that it must be from a recognized organization – not from one individual – and the authors must consent that it be made publicly available via the Ethics Codes Collection. As of this year, the Collection contains codes for how to perform scientific research in disaster areas, guidelines developed by indigenous communities governing how researchers should collaborate with members of their community, and guidance for the ethical use of polygraph tests (IAVCEI 1998; South African SAN Institute 2017; American Polygraphy Association 2015). The goal is to provide users with the widest possible access to professional codes and guidelines, accompanied by scholarly and technological tools, to allow innovative new research to be performed. We hope that this volume serves as inspiration to professionals, practitioners, and scholars for how codes of ethics can be studied, developed, and used to promote more ethical practice in all areas the Ethics Codes Collection covers.
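The kind of longitudinal question the Collection supports, such as tracing when a term like "natural resources" first appears in a given organization's codes, can be pictured with a short sketch. Everything below is hypothetical: the record structure, field names, and toy data are illustrations only, not the Collection's actual schema or search interface.

```python
from dataclasses import dataclass

@dataclass
class EthicsCode:
    """One dated code of ethics; a hypothetical record structure."""
    organization: str
    year: int
    title: str
    text: str

def first_mention(codes: list[EthicsCode], org: str, term: str) -> int | None:
    """Return the year a term first appears in an organization's codes."""
    hits = sorted(c.year for c in codes
                  if c.organization == org and term.lower() in c.text.lower())
    return hits[0] if hits else None

# Toy data loosely modeled on the AIA example discussed in Sect. 1.3 below.
codes = [
    EthicsCode("AIA", 1954, "Standards of Professional Practice",
               "...obligations to clients and fellow professionals..."),
    EthicsCode("AIA", 1977, "Standards of Professional Practice",
               "...conserve natural resources and the heritage of the past..."),
]
print(first_mention(codes, "AIA", "natural resources"))  # prints 1977
```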

1.3 Ethics Codes as Sources for Research into the Professions

Ethics codes represent ongoing conversations in the sciences, professions, and business about what standards, such as fairness and decency, should apply when the law does not. For instance, one can trace the emergence of attention to issues of sustainability and protection of the environment by following changes in architectural and engineering codes of ethics. The 1922, 1954, and 1970 American Institute of Architects' "Canons of Ethics" and "Standards of Professional Practice" detail an architect's ethical obligations to their clients and fellow professionals. Only in the 1977 code do we see the first references to architects' duty to "conserve natural resources and the heritage of the past." This concept continues to develop in the revisions of the following years (1979, 1996, 2007, 2012, and 2017), the concept being further defined as the standard grows more demanding.

The American Anthropological Association (AAA) responded in the same manner when some members proposed amending the AAA Code of Ethics in 2007 to stop its members from working with the U.S. military as part of its Human Terrain


System (HTS). HTS, which began recruiting anthropologists in 2007, sought to improve the military's cultural awareness when deployed in complex social-cultural environments (Human Terrain System 2008). A statement by the AAA Executive Board during that same year called the HTS an "unacceptable application of anthropological expertise" and asserted that "any anthropologist considering employment with HTS will have difficulty determining whether or not s/he will be able to follow the disciplinary Code of Ethics." The statement goes on to discuss concerns about avoiding harm to the individuals and communities that anthropologists study when working with the military in a war zone, and the ability of anthropologists to gain voluntary consent from participants in conflict situations. The 2012 version of the AAA Code, section 4, "Weigh Competing Ethical Obligations Due to Collaborators and Affected Parties," indirectly addressed these issues by detailing how anthropologists must uphold the principle of "do no harm" in navigating competing obligations to employers and funders and their professional obligation to the communities they study and collaborate with.

One of the most critical aspects of the Ethics Codes Collection is the ability to trace how ideas, principles, and concepts travel between different professions through their codes. Concepts from the field of medicine have traveled from the clinical setting into the realm of the information sciences, and many ethics codes go on to cite other professional codes that have helped guide their development. An example of this is the citation of the Code of Ethics of the National Association of Social Workers (2008, 2017) in the Online Therapy Institute's 2010 "Ethical Framework for the Use of Social Media by Mental Health Professionals" and the 2019 version of the American Health Information Management Association Code of Ethics.

1.4 Ethics Codes and Emerging Technologies

In addition, ethics codes and guidelines demonstrate how institutions and organizations address emerging situations and modern technologies. These range from technologies and practices new to a specific field of research to emerging technologies that have the potential to change daily life for millions of people. This phenomenon can be seen in the fields of computer science, electrical engineering, biomedical research, and others. Around the world, engineers, entrepreneurs, and researchers in many fields, including biotechnology, information systems, robotics, artificial intelligence, media, and communications, often enter new territory with improvised guidelines for their work. In response to such guidelines and the emerging technologies provoking them, many professional associations are rewriting their codes of professional ethics or developing more specific guidelines to address innovation in their fields. A prominent example is the Code of Ethics and Professional Conduct of the Association for Computing Machinery (ACM) published in 2018. Other organizations, such as the International Organization for Standardization (ISO), are doing something similar. In Europe, the movement for Responsible Research


and Innovation seems to be shaping how researchers and practitioners approach innovation. Changes in codes of professional ethics and more detailed guidelines represent an unparalleled window into the research, innovation, and emerging technologies they seek to regulate. They are crystallizations of ongoing conversations in scientific and professional fields about how justice, decency, safety, and the like should be realized in practice where the law is silent. They show how institutions and organizations are addressing modern technologies. In fields of rapid innovation, ethics generally precedes legal regulation and, even in fields that are relatively settled, it seldom confines itself to legally required acts. Ethics provides flexibility that law does not.

1.5 About This Collected Volume

This collected volume exemplifies the value of both codes of ethics and guidelines in general and the Ethics Codes Collection in particular. Many of the chapters in this book have drawn upon the Ethics Codes Collection to a great extent, allowing the authors to trace the development of ethics codes through a specific discipline or the prevalence of a principle, ethics topic, or even a particular phrase through the guidelines of hundreds of different organizations and disciplines. The book explores three principal areas surrounding the role of ethics codes in professions where research and development are important and in fields in which emerging technologies and other developments produce substantial, often rapid change. The first part of this volume, "Past, Present, and Future: Ethics Codes and Guidelines in Changing Fields," explores central concepts and principles in ethics codes and their modification through time. The second part, "Ethics Codes, Emerging Technologies and International Viewpoints," looks at how codes shift in response to and help shape the position of emerging technologies in societies around the world. The third and concluding part, "Changing Purposes and Different Uses of Ethics Codes," explores some possible limitations of ethics codes as they are currently written and used and suggests how they might change to better serve both our professional and personal lives.

The collected volume has been made possible through a generous grant from the John D. and Catherine T. MacArthur Foundation, which funded the enhancement of the Ethics Codes Collection and enabled new research on the current and future role of ethics codes within society.


References

American Anthropological Association. 2012. Principles of Professional Responsibility. http://ethicscodescollection.org/detail/6d92a99d-a30a-4379-bf00-24d297dc8cc0. Accessed 22 June 2020.
American Anthropological Association Executive Board. 2007. Statement on the Human Terrain System project. 31 October. http://www.aaanet.org/issues/policy-advocacy/Statement-on-HTS.cfm. Accessed 22 June 2020.
American Chemical Society. 2019. The Chemical Professional's Code of Conduct. https://www.acs.org/content/acs/en/careers/career-services/ethics/the-chemical-professionals-code-of-conduct.html
American Geophysical Union. 1988. Guidelines to the Publication of Geophysical Research. http://admin.ethicscodescollection.org/detail/93d15739-dce0-4247-a59d-05eb02c3a1b4. Accessed 21 May 2021.
American Health Information Management Association. 2019. AHIMA Code of Ethics. http://ethicscodescollection.org/detail/317dc831-f558-4b49-ae8e-069314c331ae. Accessed 23 June 2020.
American Institute of Architects. 1922. Canon of Ethics. http://ethicscodescollection.org/detail/3a440034-8f30-4593-a283-b49ce357f704. Accessed 3 July 2020.
———. 1954. Standards of Professional Practice in Architecture. http://ethicscodescollection.org/detail/92c68f61-0233-496b-a661-9833a92c0db0. Accessed 3 July 2020.
———. 1970. Standards of Professional Practice. Memo #308.
———. 1977. Standards of Professional Practice. http://ethicscodescollection.org/63822f05-ea3a-42a0-ae01-5db4bb10c62c. Accessed 3 July 2020.
———. 1979. Code of Ethics and Professional Conduct. http://ethicscodescollection.org/detail/83ee37d5-1bc3-4c54-812a-67d198a3d925. Accessed 3 July 2020.
———. 1996. Code of Ethics and Professional Conduct. http://ethicscodescollection.org/detail/ddeedada-4379-4f9d-815d-9fc4b6c88179. Accessed 3 July 2020.
———. 2007. Code of Ethics and Professional Conduct. http://ethicscodescollection.org/detail/dbc192a7-52dc-4778-aa58-e3983564d1c8. Accessed 3 July 2020.
———. 2012. Code of Ethics and Professional Conduct. http://ethicscodescollection.org/detail/63822f05-ea3a-42a0-ae01-5db4bb10c62c. Accessed 3 July 2020.
———. 2017. Code of Ethics and Professional Conduct. http://ethicscodescollection.org/detail/332f972a-88e2-476d-b560-5f14de67173b. Accessed 3 July 2020.
American Polygraphy Association. 2015. Code of Ethics. http://admin.ethicscodescollection.org/detail/7fdf06cc1-a09d-4686-8789-51d9329df821. Accessed 20 May 2021.
American Psychological Association. 1981. Guidelines for the Use of Animals in School Science Behavior Projects. http://admin.ethicscodescollection.org/detail/9ab19b20-560a-4e97-b464-2b762dd4ea91. Accessed 21 May 2021.
American Statistical Association. 2018. Ethical Guidelines for Statistical Practice. https://www.amstat.org/ASA/Your-Career/Ethical-Guidelines-for-Statistical-Practice.aspx. Last viewed 21 May 2021.
Association for Computing Machinery. 2018. ACM Code of Ethics and Professional Conduct. http://ethicscodescollection.org/detail/6d9afb47-bccd-4560-8597-6480ebee244a. Accessed 19 June 2020.
Association of National Advertisers. 2020. Guidelines for Ethical Business Practice. https://www.ana.net/getfile/30491. Accessed 12 May 2021.
Human Terrain System (HTS). 2008. Human Terrain Team Handbook. Fort Leavenworth, KS: HTS.
International Association of Volcanology and Chemistry of the Earth's Interior. 1998. Statement of Professional Conduct of Scientists During Volcanic Crises. http://admin.ethicscodescollection.org/detail/08163997-7e64-4b97-a6f6-d52c93e07387. Accessed 20 May 2021.
Online Therapy Institute. 2010. Ethical Framework for the Use of Social Media by Mental Health Professionals. http://admin.ethicscodescollection.org/detail/cc87020c-13ce-411e-95c2-03f5f3b968b0. Accessed 18 June 2020.


South African San Institute. 2017. San Code of Research Ethics. http://admin.ethicscodescollection.org/detail/cca9947a-f7d9-49ef-b629-19a11aca88fe. Accessed 20 May 2021.



Part I

Past, Present, and Future: The Role of Ethics Codes and Guidelines in Changing Fields

Along with supporting education and professional practice and informing the public about ethical principles important to a specific profession or organization, a good code of ethics must be periodically revised to help guide practitioners as new issues, technologies, or practices become prevalent. Some revisions address systemic issues that impact relationships between professionals and practitioners, such as the American Physical Society's revision of its code in 2019 to better address bias, discrimination, and harassment as well as to promote the fair treatment of colleagues, employees, and students (APS 2019). This specific revision stemmed from a seminal report from the National Academies, "Sexual Harassment of Women," released in 2018, as well as reports from women studying or working in several academic fields citing instances of sexual harassment and unfair treatment (Swartout 2018; Aycock et al. 2019). Other revisions speak to specific practices, such as how changing technologies impact issues of informed consent in the digital collection of data.

In all the following chapters, there are examples of how leaders and policy-makers in diverse fields look to the ethical practices of other disciplines in shaping their response to new ethical challenges and questions. Focusing on the fields of research and development, and more specifically on the engineering, computer, and information sciences, the following chapters trace the development of new ethical principles to handle these emerging challenges and suggest better ways these guidelines could be utilized to educate upcoming generations of practitioners.

The changing field of research is a prime example of how these developments take place. In the U.S., the requirement for institutional review boards to review research involving human subjects began with the passage of the National Research Act in 1974 and the release of the Belmont Report in 1976 (United States, HHS 2019). Formulated with medical and behavioral research in mind, the ethical principles and standards that institutional review boards utilize are, in many cases, not a perfect fit for research emanating from the computer, information, social, or engineering sciences (Koepsell et al. 2014; Vitak et al. 2017). Research ethics as a field has changed enormously in the past 60 years, expanding to include numerous other fields in the engineering, physical, social, and technical disciplines. In the same way, Philip Brey's Chap. 2, "Research Ethics Guidelines for the Engineering


Sciences and Computer and Information Sciences," discusses changes in research ethics guidelines for non-medical fields and the development, by the European-Commission-funded project SATORI, of new ethical frameworks for the engineering sciences and the computer and information sciences that take into account significant differences in the ethical issues faced by these fields.

Michael Davis's piece, "Codes of Engineering Ethics: Recent Trends" (Chap. 3), traces recent developments in the field of engineering, namely the spread of codes internationally, increased attention to sustainability and the environment, a movement toward shorter codes of ethics, and the continued independence of engineering ethics from other fields such as medical ethics. Davis clarifies what is meant by the terms "engineering," "code," and "ethics" and takes the reader on a detailed tour of these four major developments in engineering ethics. In the end, the history of engineering codes is one of both content and format and showcases how exemplary codes need to change over time to stay useful to their professionals.

In the next chapter, "Informed Consent in Digital Data Management" (Chap. 4), authors Elisabeth Hildt and Kelly Laas discuss how the model of informed consent, with its tradition in the biomedical field, can be used to inform different disciplines. The chapter looks at how codes of ethics from professional associations and guidelines from non-governmental organizations, businesses, and governmental organizations use specific elements of informed consent to govern digital data management among their members.

In some cases, needed changes to ethics codes may still be pending. Around the world, scientific associations and government agencies that fund science have been shining a stronger light on the need for professional associations to integrate issues of research integrity into their education and professional practice. The U.S. National Academies' 2017 report, "Fostering Integrity in Research," explicitly recommends that "In order to better align the realities of research with its values and ideals, all stakeholders in the research enterprise—researchers, research institutions, research sponsors, journals, and societies—should significantly improve and update their practices and policies to respond to the threats to research integrity identified in this report" (p. 210). Stjepan Ljudevit Marušić and Ana Marušić's Chap. 5, "Codes of Ethics and Research Integrity," provides an overview of how different professional ethics codes handle issues of research integrity. They point out that the divergence between policy-makers and researchers in what is meant by "research integrity" is likely to hinder the internalization of these principles into a researcher's work and the research community at large. While professional codes of ethics and guidelines should play a significant role in helping educate researchers and reinforcing research integrity as a crucial part of professional ethics, most codes do not focus on research integrity, and those that do tend to focus on a small number of research integrity issues. The authors suggest several strategies that could help strengthen how professional ethics codes discuss concepts of research integrity, to help stakeholders see these essential principles and practices as a critical component of their everyday work.


References

American Physical Society. 2019. 19.1 Guidelines on Ethics. Sect. III. https://www.aps.org/policy/statements/guidlinesethics.cfm.
Aycock, Lauren M., Zahra Hazari, Eric Brewe, Kathryn B.H. Clancy, Theodore Hodapp, and Renee Michelle Goertzen. 2019. Sexual Harassment Reported by Undergraduate Female Physicists. Physical Review Physics Education Research 15:010121.
Koepsell, David, Willem-Paul Brinkman, and Sylvia Pont. 2014. Human Research Ethics Committees in Technical Universities. Journal of Empirical Research on Human Research Ethics 9(3): 67–73.
National Academies Press. 2017. Fostering Integrity in Research, 210. Washington, DC: National Academies Press.
National Academies Press. 2018. Sexual Harassment of Women: Climate, Culture, and Consequences in Academic Sciences, Engineering, and Medicine. Washington, DC: National Academies Press.
Swartout, Kevin M. 2018. "Consultant Report on the University of Texas System Campus Climate Survey." In Sexual Harassment of Women: Climate, Culture, and Consequences in Academic Sciences, Engineering, and Medicine, eds. P.A. Johnson, S.E. Widnall, and F.F. Benya. Washington, DC: The National Academies Press.
United States, Department of Health and Human Services, Office for Human Research Protections. 2019. Regulations and Policies: The Belmont Report. https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/index.html.
Vitak, Jessica, Nicholas Proferes, Katie Shilton, and Zahra Ashktorab. 2017. Ethics Regulation in Social Computing Research: Examining the Role of Institutional Review Boards. Journal of Empirical Research on Human Research Ethics 12(5):372–382. https://doi.org/10.1177/1556264617725200.

Chapter 2

Research Ethics Guidelines for the Engineering Sciences and Computer and Information Sciences Philip Brey

Abstract  This chapter presents, discusses and defends research ethics guidelines for the engineering sciences and computer and information sciences. Only very recently has there been an effort to establish research ethics frameworks and ethics committees for these two fields. Arguments are presented concerning these developments, and a specific proposal is made for ethics guidelines for the engineering sciences and the computer and information sciences. It is argued that although there are shared issues and principles for research ethics across scientific fields, all scientific fields raise unique ethical issues that require special ethical principles and guidelines. Following this discussion, the historical development of professional ethics and research ethics in the engineering sciences and the computer and information sciences is discussed, and special guidelines for these fields are presented that were developed as part of a CEN (European Committee for Standardization) standard for research ethics within the European Commission-funded SATORI project on research ethics and ethics assessment.

Keywords  Engineering sciences · Information sciences · Research ethics · Ethics assessment · Standardization

P. Brey (*)
University of Twente, Enschede, The Netherlands
e-mail: [email protected]

© Springer Nature Switzerland AG 2022
K. Laas et al. (eds.), Codes of Ethics and Ethical Guidelines, The International Library of Ethics, Law and Technology 23, https://doi.org/10.1007/978-3-030-86201-5_2

2.1 Introduction

This chapter considers the development of research ethics guidelines for the engineering sciences and computer and information sciences. Codes of professional ethics have existed for these fields for a long time, but research ethics guidelines have been developed for them only very recently. This is likely because, until recently, research ethics committees for these fields were hardly in existence. In recent years, however, there has been a push to establish dedicated research ethics committees for




these fields, as has already happened long before that in biomedicine. More and more universities and research institutions are subjecting engineering and computer science research to ethics review, and more and more companies (e.g., Apple, Microsoft, Facebook) are also instituting research ethics committees in these fields. The distinction between professional ethics codes and research ethics guidelines is crucial in this chapter, which is why I will elaborate. Codes of professional ethics are guidelines for ethical behavior by individual professionals in various professional fields (Martin et al. 2010). They aim to regulate professional conduct so as to ensure it exhibits high ethical standards, professional quality, and trustworthiness. Codes of professional ethics are in place not only in professions that center around research and innovation but also in many other fields (e.g., for lawyers, nurses, and journalists). In professions in which research and innovation have an important place (in fields like computer science, engineering science, social science, and medicine), professional ethics codes to some extent cover expected professional behaviors in relation to research and innovation, but much of what they cover is more general. Professionals in these fields are likely to carry out research and innovation activities, but they may do a lot of other things as well, such as managing people, interacting with clients, teaching, writing a column for a newspaper, and sitting on a review committee in their company. A large part of professional ethics is typically devoted to general virtues and professional behaviors that define professional integrity, social responsibility, and professionalism in these fields. Research ethics guidelines typically do not apply to individual conduct but to research and innovation practices (Ipfhofen 2020). These practices often involve multiple researchers, and it is not their individual conduct that the guidelines are directed at but the overall way in which the research is conducted. These guidelines are typically not only used by researchers themselves but also by research ethics committees that ethically assess research. Research ethics committees typically do not do this during or after the research activity, but prior to it, on the basis of a research plan or proposal. They assess whether the research proposal adheres to relevant ethical standards or guidelines. These guidelines cover the research design and not individual conduct. A research ethics committee cannot determine on the basis of a research plan if individual scientists involved in the research will act honestly and with integrity. Research ethics has for long been virtually synonymous with medical research ethics, as is evidenced by the fact that until recently, the vast majority of research ethics committees were focused on biomedicine, and the vast majority of publications in research ethics focused on the medical field. This can be explained through an account of the nature and history of both medical and nonmedical fields. Medical research and medicine generally raise many ethical issues that are difficult to ignore since they involve many decisions that can have life and death consequences and are the subject of moral and religious disagreement. Medical research ethics gained a strong foothold after the Second World War when the Nuremberg trials led to the establishment of the Nuremberg Code, which sets out research ethics principles for human experimentation.
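Research ethics committees, as described here, assess planned research against guidelines before the work begins. A minimal sketch of that pre-review step follows; the checklist questions and data structure are hypothetical illustrations invented for the example and do not reproduce any actual committee's criteria.

```python
# A minimal sketch of pre-research ethics review: a proposal is checked
# against a fixed set of guideline questions before the research starts.
# The questions and the dict-based proposal format are hypothetical.

REVIEW_QUESTIONS = [
    "Are potential conflicts of interest disclosed?",
    "Is informed consent obtained from human participants?",
    "Are risks to participants, researchers, and the environment addressed?",
    "Is a responsible data management plan included?",
]

def review(proposal: dict[str, bool]) -> list[str]:
    """Return the guideline questions a research plan fails to address."""
    return [q for q in REVIEW_QUESTIONS if not proposal.get(q, False)]

plan = {q: True for q in REVIEW_QUESTIONS}
plan["Is a responsible data management plan included?"] = False
print(review(plan))  # lists the one unaddressed question
```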



The natural sciences never developed a strong tradition in research ethics, in part because, for the most part, they do not involve human or animal experimentation and because their impact on people and society is quite indirect. The social sciences similarly often have only an indirect impact on people and society, and only some of their research involves human experimentation (especially psychology). Finally, the engineering sciences do have an identifiable impact on society that needs to be accounted for, since their designs can involve risks to health, life, and the environment. However, these risks have traditionally been mitigated through technical standards and ethics codes for individual engineers, rather than through a tradition of research ethics. In recent years, this situation has changed, and there are now strong arguments to introduce a tradition of research ethics for many nonmedical fields as well. In the next section (Sect. 2.2), I will make this case, and I will moreover argue that the ethical issues in these nonmedical fields are to some extent different from those in the medical field and require partially different ethical guidelines. Having established this, I will focus the discussion on appropriate ethical guidelines for the engineering sciences (Sect. 2.3), followed by an analysis of ethical guidelines for the computer and information sciences (Sect. 2.4). In a concluding section, I will take stock of the results of the analysis. The guidelines proposed in Sects. 2.3 and 2.4 are based on a CEN (European Committee for Standardization) Workshop Agreement (CWA), a standards document for research ethics committees that was developed in the SATORI project, a European Commission-funded project on the strengthening and harmonization of research ethics within the European Union.1

2.2  The Need for Research Ethics for Nonmedical Fields

In the SATORI project, we performed a study in which we aimed to identify ethical principles that apply to all research fields and ones that apply to only some research fields (Shelley-Egan et al. 2015). I here report our key findings.

First, let us consider ethical principles that apply to all research fields. An obvious first one is research integrity (or scientific integrity), but this is in large part not a principle that can be assessed in research ethics review, as it is intended to regulate individual conduct and cannot easily be verified on the basis of research plans. It is difficult to determine based on a research plan whether the researchers involved will act honestly and collegially, will be transparent and scrupulous, and will comply with professional ethical codes. However, some aspects of research integrity may be tested for in research ethics, notably the avoidance of and openness about potential conflicts of interest. Such potential conflicts can easily be disclosed and examined as part of an ethics review process. An ethics committee could also assess whether research methods and procedures exhibit qualities of carefulness, justification, reliability, transparency, and openness, qualities that are often associated with research integrity.

A second ethical principle that applies broadly across different fields is the protection of human research participants. Human research participants play a role in many types of research, including not only the medical sciences and social sciences, but also the engineering sciences and computer and information sciences, for example, in stakeholder engagement and the testing of new designs. Although specific principles for the protection of human research participants will be different for different fields, since they may involve different procedures, different risks, and different specific ethical issues, there will also be commonalities, such as the need to have informed consent, to respect dignity, autonomy, and personal integrity, to avoid risks of serious physical or psychological harm, and to have special provisions and protections for children and vulnerable groups.

Social responsibility is a third universal ethical principle and implies that researchers anticipate and consider the potential consequences of the research project or activity for society, including potential future applications, and take appropriate remedial action to address potential negative impacts in their research design. Social responsibility also implies taking into account the concerns of stakeholders when planning and conducting research, communicating research results and their potential societal implications to stakeholders, taking potential misuse of research results into account, and ensuring that research carried out in lower- and middle-income countries involves benefit sharing and takes local needs and interests into account.

A fourth general ethical principle is the protection of and respect for animals used in research. There is already considerable international agreement on this principle and, in particular, on the three Rs approach, based on replacement of animal experiments with other research where possible, reduction of the number of animals involved, and refinement, which means minimizing suffering (Russell and Burch 1959).

Fifth is the protection of researchers and the research environment. This is a sometimes overlooked but nevertheless important principle. It ensures the protection not only of human subjects and animals in research, but also of researchers themselves (their health and safety), the local community, and the local environment where experiments or fieldwork are carried out.

A sixth principle, on which there is emerging consensus, is that of responsible data management.2 This involves the secure storage of research data, awareness of actual and potential data flows, protection of personal data, and open access to research data where possible. In relation to this principle, we reference the FAIR Guiding Principles for scientific data management and stewardship and the open access guidelines of the European Commission (European Commission 2017). Our personal data principles are in line with the European Union's General Data Protection Regulation.

2  In our report, we refer to this as "protection and management of data and dissemination of research results".

I hereby present the six principles, with guidelines, in an abbreviated version. The full version can be found in CEN (2017).

Research Integrity

We include nine specific guidelines that concern the following issues:

–– employing appropriate research methods
–– avoiding bias, manipulations, and distortions
–– avoiding inclusion of data or observations that did not occur in data collection and experimentation
–– ensuring autonomy and freedom of research
–– avoiding conflicts of interest
–– avoiding the representation of the work of others as one's own
–– avoiding misrepresentation of one's qualifications or accomplishments

We point out that these requirements are normally not tested for by research ethics committees, except that conflicts of interest in research design can be more easily addressed than many of the other listed issues. For this reason, we include separate guidelines for avoidance of and openness about potential conflicts of interest.

Social responsibility

We include six specific guidelines:

–– Anticipating potential negative consequences for society and taking remedial actions
–– Consideration of potential negative impacts on individuals and groups, or the common good
–– Promotion of sustainable development
–– Acknowledgment of the economic and cultural value of local and traditional knowledge
–– Avoidance of misuse of research materials and results
–– Communicating with stakeholders and taking their interests into account

We also include five special provisions for research involving low-income or lower-middle-income countries, including responsiveness to special needs, benefit sharing, involving local researchers in the research, minimizing the diversion of local (human) resources, and showing respect for local culture.


Protection of and respect for human research participants

We include eight specific guidelines:

–– Ensuring that research participants receive adequate information about the research
–– Obtaining informed consent
–– Treating human participants with respect for their dignity, autonomy, and personal integrity
–– Ensuring that research participants are not exposed to serious physical or psychological harm or strain
–– Ensuring that risks to research participants are balanced by benefits to the participants or to society
–– Ensuring that the privacy of research participants is protected
–– Respecting cultural diversity and pluralism
–– Ensuring adequate representation of society and social groups

We also have special provisions for the protection of children, mentally disabled persons, and other vulnerable groups as research participants.

Protection of and respect for animals used in research

We include ten specific guidelines under the following headings:

–– Respect for life (three Rs – replacement, reduction, refinement): consider replacement methods, reduction of the number of animals used, and ways to minimize suffering
–– Respect for the welfare of animals: ensure that potential benefits outweigh the harm caused to animals, provide reasonable accommodation for the animals, and limit the use of animals with genetic diseases and behavioral disorders
–– Special provisions for the protection of non-human primates and wild animals and species: avoid the use of non-human primates, and restrict the use of animals captured in the wild
–– Special provisions for the protection of animals in low-income or lower-middle-income countries: help in building local capacity for humane animal experimentation, and only use endangered species if the research contributes to their conservation

Protection and management of data and dissemination of research results

We include 18 specific guidelines under the following headings:

–– Management of data and open data: secure storage of data, awareness of data flows, and ensuring access for other researchers, interoperability, and reusability


–– Protection of personal data: ensuring that collected personal data are needed for the research, obtaining or verifying informed consent, ensuring secure storage for no longer than needed, ensuring that secondary use does not take place without informed consent or proper justification, ensuring regulated access for secondary use, and consideration of access to personal information on third parties
–– Protection of personal data and ethics in Internet research: consider whether publicly available information is sensitive personal information, ensure anonymity and pseudonymity in data merging, guarantee proper consent when needed, inform participants in open online forums about systematic registration or reporting, ensure anonymity when using information from Internet sources, and do not disguise one's identity when communicating with research subjects electronically
–– Dissemination of research results: make research results publicly available unless there are compelling reasons to do otherwise, strive for open access publications, and make research results available to different audiences where possible

Protection of researchers and the research environment

We include four specific guidelines:

–– Protecting researchers and staff from serious risk of physical or psychological harm or strain
–– Taking special precautions to protect the health and safety of (local) researchers and staff in low-income or lower-middle-income countries
–– Avoiding harm to local communities
–– Minimizing harm to the local environment
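By way of illustration only, the following minimal sketch shows how a research ethics committee might encode the six general principle areas above as a machine-readable pre-screening checklist. It is not part of the CWA; the principle keys, checklist items, and function name are invented for this example.

# Hypothetical pre-screening checklist for the six general principle
# areas (illustration only; the authoritative text is CEN 2017).
GENERAL_PRINCIPLES = {
    "research_integrity": ["conflicts of interest disclosed"],
    "social_responsibility": ["negative societal impacts anticipated",
                              "stakeholder concerns considered"],
    "human_participants": ["informed consent procedure described",
                           "provisions for vulnerable groups described"],
    "animals": ["three Rs (replacement, reduction, refinement) addressed"],
    "researchers_and_environment": ["health and safety of staff considered"],
    "data_management": ["secure storage plan provided",
                        "personal data collection minimised"],
}

def screen(plan):
    """Return the (area, item) pairs a research plan has not yet addressed."""
    missing = []
    for area, items in GENERAL_PRINCIPLES.items():
        for item in items:
            if item not in plan.get(area, []):
                missing.append((area, item))
    return missing

# Example: a plan that so far documents only its consent procedure.
plan = {"human_participants": ["informed consent procedure described"]}
for area, item in screen(plan):
    print(f"{area}: missing '{item}'")

Such a list obviously cannot replace substantive review; it only makes explicit which principle areas a submitted plan has not yet addressed, in keeping with the point made above that committees assess plans rather than individual conduct.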

We found that apart from these general principles, there is a significant number of ethical principles and guidelines that do not apply broadly across fields, but only to one field or a few fields. We hypothesize that this is not the contingent result of different traditions of research ethics, but a consequence of the fact that different scientific fields encounter different ethical issues in research, resulting in different ethical concerns. These different concerns stem from the fact that the subject matter of these fields, and the relation of researchers to this subject matter, is substantially different for each of them. I will demonstrate this by considering seven broad areas of science and the ethical issues that they raise.

• Medical sciences: Medical ethics has traditionally centered around the doctor-patient relationship, which concerns standards of ethical behaviour of doctors towards their patients. In medical research ethics, this relationship has turned into the relationship between medical researcher and human subject. Ethical issues therefore concern the proper treatment of human subjects (especially in clinical trials), involving medical principles such as autonomy, informed consent, beneficence, human dignity, and justice.
• Life sciences: The life sciences centre around the relationship of researchers to living biological systems, ecosystems, and the environment. Ethical issues therefore concern the proper treatment of living beings, impacts on ecosystems, and environmental impacts, and ethical principles include animal welfare, ecosystem integrity, sustainability, health and environmental risks, naturalness, and playing God.
• Natural sciences: The natural sciences have, at their core, the relation to truth: accurate measurement and representation of natural phenomena, including criteria like exactness, objectivity, verifiability, and reproducibility. Ethical issues therefore concern those that threaten this relation to truth, such as data manipulation, falsification, fabrication, unintentional bias, and conflict of interest. Corresponding ethical principles include scientific integrity, data integrity, freedom from bias, and honesty. While these principles are important in other fields as well, they have received the most attention in the natural sciences and are at the core of research ethics considerations in them.
• Social sciences: At the core of the social sciences is the relation between the researcher and human beings. However, this relation differs from that in the medical sciences, since it does not involve medical interventions but instead involves behavioural experimentation with and observation of humans, collection of personal information, and the representation of and intervention into the lives of individuals, social groups, and society at large. This leads to ethical issues such as the proper treatment of human subjects, privacy of data, and bias and unequal treatment (in theory and intervention). It involves ethical principles such as informed consent, equality, anonymity, confidentiality, privacy, fairness, non-discrimination, human rights, avoidance of cultural and social bias, and respect. In addition to having a focus on human beings, the social sciences also have a strong concern for proper methodology so as to ensure the quality and objectivity of research. Therefore, there is a focus on ethical issues and principles concerning data integrity, research integrity, freedom from methodological bias, objectivity, and others.
• Engineering sciences: At the core of the engineering sciences is technological intervention into society: engineers develop technological concepts, artefacts, processes, and systems that directly or indirectly have an impact on people, the environment, and society at large. Ethical issues therefore concern impacts, especially those concerning health, well-being, and harms and benefits to society and the environment, as well as corresponding risks (that harmful impacts will occur), and responsibility for these impacts. Ethical principles include social responsibility (or responsibility to the public), well-being, impacts on rights, the precautionary principle,3 sustainability, and the good of society, amongst others.

3  This is the principle that uncertainty about the risks involved in developing a new technology should not be used to justify inaction in addressing them.


• Computer and information sciences: These are sciences that are concerned, in different ways, with the processing, storage, and dissemination of information. As a result, the focus is on the ways in which these activities are enabled, and the relevant issues and principles include informational privacy, surveillance, information security, intellectual property, censorship, and freedom of information.
• Humanities: The humanities, finally, have as their concern the study and expressive imagination of human culture and the human condition. This subject matter involves a special focus on interpretation, narrative, imagination, art, and the documentation, preservation, or augmentation of cultural heritage. Ethical issues therefore concern the proper conduct of interpretation and the construction of narratives, the proper role of works of imagination and art in society and our evaluation of them, and our responsibilities in the preservation of cultural heritage. In addition, because the humanities may include human subjects in their research, they share ethical issues and principles concerning human subjects research with the social sciences. In the arts, moreover, the relation with audiences can raise ethical issues of responsibility.

We conclude that because of these differences in ethical concerns across these seven areas of science, the ethical guidelines for them will also have to be substantially different. In our CEN Workshop Agreement (CWA) on ethics committees (CEN 2017), we have worked out specific guidelines for these seven fields. These guidelines were the result of consultation with a large number of stakeholders. In what follows, I will turn to the ethical guidelines that the CWA resulted in for the engineering sciences (Sect. 2.3) and the computer and information sciences (Sect. 2.4).

2.3  Guidelines for the Engineering Sciences

While professional engineering societies already started adopting professional ethics codes in the late nineteenth century, it is only very recently that ethics guidelines for engineering research have been developed. While professional ethics in engineering is well developed, research ethics is not. Many engineering programs at universities include courses or modules on engineering ethics, which pay attention to ethical issues in engineering, but the focus is on professional ethics rather than research ethics. Engineering ethics is concerned with the engineer's consideration for the public, clients, employers, and the profession, and focuses on responsibilities for the health, safety, and welfare of the public, on sustainability and care for the environment, and on standards of professional integrity. Some engineering ethics textbooks cover ethical issues in engineering research, but most codes of ethics do not make specific reference to research.4

4  See Brey and Jansen (2015) for more information on the current state of affairs of engineering ethics and engineering research ethics.


In understanding this state of affairs, it should be taken into account that only part of engineering work involves research. Much of it is concerned with innovation and technology development, which may include or be preceded by research activities, but which involves a creative process of making and designing that is different from scientific research. The engineering sciences highlight the fact that the concept of research ethics may be too limited for some scientific fields. Next to the engineering sciences, the computer and information sciences also have a focus on developing systems and products rather than scientific discovery. Because of the nature of these two fields, it might therefore be more appropriate to consider a realm of research and innovation ethics instead of just research ethics. Research ethics committees for these sciences are actually research and innovation committees, as they assess not just research but also innovation and technology development plans. The reason that professional ethics codes for engineers may not be enough, and that we need research ethics as well, is that engineering projects are typically carried out by teams of engineers and involve ethical issues that concern the overall research and innovation design. Addressing such issues is therefore a collective responsibility; they cannot be addressed through individual action alone, but rather require a comprehensive accounting for ethical issues throughout the research and development process.

In recent years, there has been an interest in research ethics for the engineering sciences, and one sees the emergence of research ethics committees for the engineering sciences at a growing number of institutions.5 While a full explanation for this development is beyond the scope of this chapter, we can point to two developments that may have been of influence here. First, ethical concern with the pervasive role of technology in society has resulted, since the 1990s, in a new field of technology ethics (Hansson 2017; Sandler 2013), which is not a form of professional ethics but rather a form of applied ethics that concerns itself with social-ethical problems surrounding technology. Ethics of technology is related to the field of technology assessment but has a specific focus on ethical issues. It focuses on the ethical issues that society in general has to deal with regarding the introduction and use of technology in society. Examples of such issues include whether the risks of new nanotechnologies are morally acceptable, whether cloning should be allowed, and to what extent Internet users are entitled to privacy. Technology ethics potentially provides a basis for the development of research ethics for the engineering sciences, since it identifies ethical issues with the development of technology and its impact on society. It has even identified how technological design can be morally biased and include value choices, and how engineering design can be carried out in such a way that the resulting systems promote desired values and reduce undesirable biases (Van den Hoven et al. 2015; Friedman and Hendry 2019).

5  See Koepsell et al. (2014), who, however, focus on human research ethics at technical universities.

A second development that has stimulated research ethics for the engineering sciences has been the policy of the European Commission to have ethics review for all research in the European Union funded within its research framework programmes. These are programmes with budgets of many billions of euros per year that mostly fund research in the engineering sciences and computer and information sciences. Including ethics review, and often requiring that research plans be assessed by local ethics committees as well, has stimulated the emergence of local ethics committees in the engineering sciences and computer and information sciences in the European Union.

In spite of these developments, there is still hardly a tradition of research ethics for the engineering sciences. In fact, the only example of research ethics guidelines for the engineering sciences known to us before we developed our own in the SATORI project was the Guidelines for Research Ethics in Science and Technology of The Norwegian National Committee for Research Ethics in Science and Technology (2016). However, these guidelines had the limitation of covering both science and technology, thereby hardly covering issues that are specific to the engineering sciences. Moreover, they blend engineering ethics guidelines with research ethics guidelines, while in our view, these are best kept separate.

In what follows, I present the additional ethical principles for the engineering sciences developed in the CWA that we see as supplementing the general ethical principles of respect for scientific integrity, social responsibility, respect for human subjects, respect for animals, protection of researchers and the research environment, and responsible data management that we earlier identified as applying to all fields. The principles are either additional provisions for these six general principles or substantially new principles. Starting with additional provisions, we identified a large number of specific social responsibilities involved in engineering that do not apply to (most) other fields, and also identified some additional principles regarding the protection of animals and of researchers and the research environment:

Social responsibility (additional provisions)

Respect for individual rights and liberties:
• Ensure the technology does not pose inherent risks to individual freedom, autonomy, authenticity, or identity; or to individual privacy, human dignity, or human bodily integrity.

Protection and promotion of well-being and the common good:
• Consider how the technology could potentially harm or benefit the well-being and interests of individuals and groups in society;
• Consider how the technology could help to protect and promote important social institutions and structures, democracy, and important aspects of culture and cultural diversity.


Protection and promotion of justice and equality:
• Consider how the technology could harbour biases or negative effects that disproportionately impact people in terms of age, gender, sexual orientation, social class, income, race, ethnicity, religion, culture, or disability;
• Consider how the technology could contribute to the reduction of unjust biases, stigmatization, or discrimination in society in terms of age, gender, sexual orientation, social class, race, ethnicity, religion, culture, or disability;
• Consider how the technology could widen or help narrow social inequalities in terms of the distribution of opportunities, powers, and capabilities, civil and political rights, economic resources, income, risks, or hazards;
• Consider how the technology could harm or benefit vulnerable, disadvantaged, or underrepresented individuals, groups, and communities in society, or individuals, groups, and communities in low-income and lower-middle-income countries;
• Consider how the technology could harm or benefit future generations.

Protection of animals (additional provisions for technology that is intended for use around animals)

• Ensure that the technology does not pose any unnecessary risks of harm to animals;
• Respect the characteristics, needs, and behaviours of the animal species involved.

Protection of researchers and the research environment (additional provisions)

• Take special precautions to ensure that researchers and staff involved in conducting the research are not exposed to serious physical harm or strain as a result of working with harmful biological, chemical, radiological, nuclear, or explosive materials.

These guidelines were included because of the specific nature of technology: technology results in products and systems that are used in society, and as such can affect, sometimes reliably, the realization of individual rights and liberties, justice and equality, well-being, and the common good. The ethical guidelines ask technology developers to anticipate and account for this in their work. Technological products can also cause harm to animals in the context of use, separately from potential harm to animals in research, and technological research also requires special precautions for researchers and the research environment that are not needed for most other types of research.


We also identified several principles that apply specifically to the engineering sciences: avoidance of risks of harm to the environment, dual use of engineering research and technology, and avoidance of misuse of research materials and results:

Avoidance of risks of harm to the environment

Protection of the environment:
• Anticipate and assess potential risks of harm to the (urbanised or natural) environment as a result of the applications or uses of the technology, and take appropriate measures to address them during the innovation process;
• Consider the possibility of unforeseen or long-term environmental effects of the technology;
• Take special precautions to prevent environmental harms caused by the use of biological, chemical, radiological, nuclear, or explosive materials;
• Promote a clear understanding of the actions required to restore the environment once it has been disturbed as a result of the technology.

Promotion of environmental sustainability:
• Optimize the technology for effective and cost-efficient resource recovery (recycling);
• Take responsibility to search for technological solutions that lower the potential consumption of raw materials and energy;
• Take responsibility to search for technological solutions that lower the production of environmentally harmful wastes and lessen environmental pollution;
• Be conscious of the interdependence between ecosystems and the importance of biodiversity.

Social environmental responsibility:
• Be conscious of, and engaged with, any (local) societal concerns and interests regarding the ways in which the technology could affect the environment.

Avoidance of public health and safety risks:
• Ensure that the technology that is developed, in terms both of the production and the societal use of any goods based on it, does not pose inherent direct or long-term risks of harm to public health and safety.

Dual use of engineering research and technology

• Consider whether the technology could have military applications;
• Consider whether the technology could contribute to the proliferation of weapons of mass destruction;


• Consult proper authorities before publishing and adhere to relevant national and supra-national regulations if the technology has significant military applications or if it contributes significantly to the proliferation of weapons of mass destruction.

Avoidance of misuse of research materials and results

• Take special precautions to prevent or counter the effects of potential misuse of security-sensitive chemical, radiological, or nuclear materials and knowledge (e.g., the appointment of a security advisor, limiting dissemination, classification, training for staff).

These guidelines were included for the following reasons: Avoidance of risks of harm to the environment was included because technology development results in technological solutions that can either harm or benefit the environment. The engineering sciences differ in this respect from most other types of research. Guidelines for dual use were included because these are also specific to technology and engineering. The same applies to avoidance of misuse: technological products can often be misused in harmful ways, and designers can often anticipate this and take special precautions to prevent misuse. The complete set of guidelines for the engineering sciences, combining both general guidelines and engineering-specific guidelines, can be found on the SATORI website at http://satoriproject.eu/deliverables/.

2.4  Guidelines for the Computer and Information Sciences

In the computer and information sciences, we see a development similar to that in the engineering sciences. Professional ethics for computer scientists has been around since the field was still young; the first code of ethics for computer scientists was developed in 1973 by the Association for Computing Machinery in the United States. Research ethics guidelines and committees, however, have been in existence only very recently (Søraker and Brey 2015). Recent efforts may have been stimulated by the emergence, since the 1980s, of the field of computer ethics (Johnson 1985; Tavani 2015), which, like the ethics of technology did for the engineering sciences, addresses ethical issues relating to the role of computer systems in society. They may also have been stimulated by the requirement of ethics review for EU-funded research in the European Union, as happened with the engineering sciences. Since the late 2010s, moreover, there has been a strong interest, in both the tech industry and policy circles, in the ethics of artificial intelligence, which is believed by many to raise important ethical issues for society. Many guidelines have been generated in recent years for AI, and this interest has also stimulated the formation of research ethics committees for AI specifically, or for computer science and information technology generally, at tech companies like Google, Apple, and Facebook, and at universities (Hagendorff 2020). In addition, there has been a significant interest since at least the 1990s in addressing privacy issues with information technology, and various guidelines have been developed to specifically address issues of privacy and data protection – though in most cases, these are aimed not at the development of information technology but at its use (e.g., European Commission 2018; Wright 2012).

While plenty of guidelines have been developed that specifically address AI and privacy, very few have been developed that address the computer and information sciences in general. When we developed our own in the SATORI project, as part of the CEN CWA, only one clear example was known to us: the ethics guidelines for information and communication technology research developed for the U.S. Department of Homeland Security, namely the Menlo Report and its companion (Dittrich and Kenneally 2012; Dittrich et al. 2013). These guidelines focus, however, on human subjects research only. For the reasons given in Sect. 2.2, we believe that the issues in computer science are broader in scope and cannot easily be captured by traditional human subjects research frameworks.

I now present the additional ethical principles for the computer and information sciences that were developed in the CEN CWA and that we see as supplementing the general ethical principles identified earlier. Starting with additional provisions to general principles, we identified a large number of specific social responsibilities involved in computer and information science and specific additions to responsible data management in relation to privacy and the protection of personal information:

Social responsibility (additional provisions)

Respect for freedom of expression:
• Ensure that new research concepts and innovations do not pose unjustified inherent risks to the freedom of individuals to express themselves through the publication and dissemination of information, or to their freedom of access to information;
• If research or innovation involves the use of censorship methods, strike an appropriate balance between the need for content control and the right of individuals to express themselves freely.

Respect for intellectual property:
• Ensure that new research concepts and innovations do not pose unjustified inherent risks to the intellectual property rights of individuals or organisations;
• Avoid research that could generate copyright issues, such as research involving peer-to-peer networking or file sharing and distribution.


Respect for other individual rights and liberties:
• Ensure that new research concepts and innovations do not pose inherent risks to autonomy, authenticity, or identity. In particular, ensure that information systems do not unnecessarily or unjustifiably take away control from users by limiting their choices or making choices for them that they would prefer to make themselves;
• Ensure that decisions made by information systems that have significant social impact take into account the rights, values, and interests of stakeholders, including users, and make efforts to ensure that the reasons for decisions made by information systems can be retrieved so as to make the systems accountable;
• Take into account the issue of how responsibilities and liabilities are assigned between humans and machines when information systems are involved in decision-making.

Avoidance of harms to justice and equality:
• Consider how new research concepts and innovations could widen or narrow social inequalities in terms of the distribution of opportunities, powers, and capabilities, civil and political rights, economic resources, income, risks, or hazards;
• Consider how new research concepts and innovations could harbour or counter unjust bias in terms of age, gender, sexual orientation, social class, race, ethnicity, religion, or disability;
• Consider how new research concepts and innovations could harm or promote the interests of vulnerable, disadvantaged, or underrepresented groups and communities in society, including those in low-income and lower-middle-income countries.

Promotion of well-being and the common good:
• Consider how the research or innovation activity could harm or promote the general well-being of individuals and groups in society (e.g., effects on the quality of work or quality of life);
• Consider how the research or innovation activity could harm or promote the social skills and behaviour of individuals, and how it could harm or promote the learning or exercising of important virtues, such as patience and empathy;
• Consider whether and how the research or innovation activity could harm or promote important social institutions and structures, democracy, and important aspects of culture and cultural diversity.


Promotion of environmental sustainability:
• Optimize technologies for effective and cost-efficient resource use (including raw materials and energy), for resource recovery (recycling), and for lowering the production of environmentally harmful wastes and environmental pollution.

Protection and management of data and dissemination of research results (additional provisions)

Protection of personal data:
• Ensure that new research concepts and innovations do not pose any unjustified inherent risks to the right of individuals to control the disclosure of their personal data;
• If research concepts and innovations involve the combination of multiple data sources, carefully consider the effects on (informational) privacy;
• If research concepts and innovations involve the development of capabilities for, or the use of, data surveillance or human subject monitoring or surveillance, then invoke the requirement for informed consent, if appropriate. Strike an appropriate balance between the need to monitor and control personal information and the right of individuals to (informational) privacy and other human rights.

These guidelines were included either because they apply to all technology, and as such, correspond with specific guidelines for the engineering sciences, or because of the specific nature of information technology. For this reason, we included guidelines that address how computer science research and development (R&D) results in products and systems that are used in society and affect the realization of individual rights and liberties, justice and equality, well-being, and the common good. We also included privacy and data protection guidelines that do not so much pertain to the use of information technology, as our general guidelines for responsible data management do, but to its development. We also identified two additional principles for the computer and information sciences: avoidance of security risks and dual use (which contains several guidelines that are identical to those for the engineering sciences).

Avoidance of security risks

• Ensure that new research concepts and innovations offer reasonable protection against any potential unauthorized disclosure, manipulation, or deletion of information and against potential denial-of-service attacks, e.g., protection against hacking, cracking, cyber vandalism, software piracy, computer fraud, ransom attacks, and disruption of service;


• Ensure that new research concepts and innovations, by themselves or through their use in a system, do not pose inherent direct or long-term risks of harm to public health and safety, e.g., information and communications technology (ICT) innovations used in healthcare, ICT innovations used in the monitoring and control of public infrastructure, ICT innovations that could lead to addiction;
• Do not engage in research that involves attempts to gain unauthorized access to telephone systems, computer networks, databases, or other forms of ICT; such research is illegal and unethical, regardless of motivation;
• Treat with extreme caution the dissemination of research involving the identification of undiscovered security weaknesses in existing systems;
• Avoid practical experiments with computer viruses, or perform them only in a controlled environment, and exercise extreme caution in the dissemination of the results of paper-based (theoretical) computer virus experiments;
• Carry out any experiments that breach security on designated, standalone (offline) computers or on designated isolated networks of computers.

Dual use of computer and information sciences research and innovations

• Consider whether new research concepts and innovations could have military applications;
• Consider whether new research concepts and innovations could contribute to the proliferation of weapons of mass destruction;
• Consult proper authorities before publishing and adhere to relevant national and supra-national regulations if a technology has significant military applications or if it contributes significantly to the proliferation of weapons of mass destruction. Even if publication is allowed, find a proper balance between security and freedom of publication.

These guidelines were included for the following reasons: Security risks are risks specific to computer systems that could cause significant harm as well as violate individual rights. It is therefore proper to include guidelines for addressing such risks in computer science R&D. As in the engineering sciences, there are sometimes dual use issues, where some civilian projects in information technology can be used for military purposes. We therefore include guidelines for dual use here as well. The complete set of guidelines for the computer and information sciences, combining both general guidelines and field-specific guidelines, can be found on the SATORI website at http://satoriproject.eu/deliverables/.


2.5  Conclusion

In this chapter, I discussed research ethics guidelines for the engineering sciences and the computer and information sciences. Only very recently has there been an effort to establish research ethics frameworks and ethics committees for these two fields. I presented arguments in support of these developments. It was argued that although there are shared issues and principles for research ethics across scientific fields, individual fields raise unique ethical issues that require special ethical principles and guidelines. Following this discussion, I discussed the historical development of professional ethics and research ethics in the engineering sciences and the computer and information sciences, and I presented and discussed the special guidelines for these fields that were developed as part of a CEN CWA standard for research ethics within the SATORI project. It is my hope that the developments that I sketched will continue and that distinct research ethics frameworks and committees for these two fields will be in place in many countries in the future.

References

Brey, Philip, and Philip Jansen. 2015. Ethics Assessment in Different Fields: Engineering Sciences. Annex 2.b.1 to SATORI Deliverable D1.1. EU FP7 Project, 27. http://satoriproject.eu/media/2.b-Engineering.pdf.
CEN. 2017. Ethics Assessment for Research and Innovation – Part 1: Ethics Committee. CEN Workshop Agreement, 17145-1:2017 E. http://satoriproject.eu/publications/cwa-part-1/.
Dittrich, David, and Erin Kenneally. 2012. The Menlo Report: Ethical Principles Guiding Information and Communication Technology Research. Tech. Rep. Washington, DC: U.S. Department of Homeland Security, August.
Dittrich, David, Erin Kenneally, and Michael Bailey. 2013. Applying Ethical Principles to Information and Communication Technology Research: A Companion to the Menlo Report. Tech. Rep. Washington, DC: U.S. Department of Homeland Security. https://doi.org/10.2139/ssrn.2342036.
European Commission. 2017. Guidelines to the Rules on Open Access to Scientific Publications and Open Access to Research Data in Horizon 2020. Version 3.2. https://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-pilot-guide_en.pdf.
———. 2018. Ethics and Data Protection. https://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/ethics/h2020_hi_ethics-data-protection_en.pdf.
Friedman, Batya, and David G. Hendry. 2019. Value Sensitive Design: Shaping Technology with Moral Imagination. Cambridge, MA: The MIT Press.
Hagendorff, Thilo. 2020. The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines. https://doi.org/10.1007/s11023-020-09517-8.
Hansson, Sven O., ed. 2017. The Ethics of Technology: Methods and Approaches. London: Rowman & Littlefield International.
Iphofen, Ron, ed. 2020. Handbook of Research Ethics and Scientific Integrity. Cham: Springer.
Johnson, Deborah G. 1985. Computer Ethics. Englewood Cliffs: Prentice-Hall.
Koepsell, David, Willem-Paul Brinkman, and Sylvia Pont. 2014. Human Research Ethics Committees in Technical Universities. Journal of Empirical Research on Human Research Ethics 9 (3): 67–73.


Martin, Clancy, Wayne Vaught, and Robert C. Solomon, eds. 2010. Ethics Across the Professions: A Reader for Professional Ethics. New York: Oxford University Press.
Russell, William, and Rex Burch. 1959. The Principles of Humane Experimental Technique. London: Methuen.
Sandler, Ronald, ed. 2013. Ethics and Emerging Technologies. London: Palgrave Macmillan.
Shelley-Egan, Clare, Philip Brey, Rowena Rodrigues, David Douglas, Agata Gurzawska, Lise Bitsch, David Wright, and Kush Wadhwa. 2015. Ethical Assessment of Research and Innovation: A Comparative Analysis of Practices and Institutions in the EU and Selected Other Countries. Deliverable D1.1 for SATORI EU FP7 project: 113. http://satoriproject.eu/media/D1.1_Ethical-assessment-of-RI_a-comparative-analysis-1.pdf.
Søraker, Johnny, and Philip Brey. 2015. Ethics Assessment in Different Fields: Information Technology. Annex 2.b.1 to SATORI Deliverable D1.1 EU FP7 Project: 19. http://satoriproject.eu/media/2.b.1-Information-technology.pdf.
Tavani, Herman. 2015. Ethics and Technology: Controversies, Questions, and Strategies for Ethical Computing. 5th ed. Hoboken: Wiley.
The Norwegian National Committee for Research Ethics in Science and Technology. 2016. Guidelines for Research Ethics in Science and Technology. 2nd ed. https://www.etikkom.no/globalassets/documents/english-publications/60126_fek_guidelines_nent_digital.pdf.
Van den Hoven, Jeroen, Pieter Vermaas, and Ibo Van de Poel, eds. 2015. Handbook of Ethics, Values, and Technological Design: Sources, Theory, Values and Application Domains. Dordrecht: Springer.
Wright, David. 2012. The State of the Art in Privacy Impact Assessment. Computer Law & Security Review 28 (1): 54–56.

Philip Brey  Professor in philosophy and ethics of technology, University of Twente, The Netherlands; [email protected]. His research interests include the ethics of technology, with particular attention to new and emerging technologies, including information and communication technologies, AI and robotics, biomedical and sustainable technologies, and implications for policy and design.

Chapter 3

Codes of Engineering Ethics: Recent Trends

Michael Davis

Abstract  The Ethics Codes Collection of the Center for the Study of Ethics in the Professions at the Illinois Institute of Technology (http://ethics.iit.edu/ecodes/) differs in at least three ways from other collections of codes now available online. First, it is by far the largest, with over 2500 codes (from more than 1500 organizations). Second, it is searchable by discipline and topic, as well as by keywords such as "public," "environment," and "sustain." Third, and most important, unlike most collections, it does not replace old codes with new. Instead, it is a historical database of codes, adding new versions of codes alongside the old, thereby creating a convenient resource for anyone who wants to study the history of codes—as I propose to do here. My subject is not all codes, though, just codes of engineering ethics. Codes of engineering ethics are numerous enough and varied enough for the purpose of this chapter: to demonstrate one use for the Ethics Codes Collection.

Keywords  Engineering ethics · Codes of ethics · Medical ethics

M. Davis (*)  Illinois Institute of Technology, Chicago, IL, USA
e-mail: [email protected]

3.1  Introduction

The Ethics Codes Collection of the Center for the Study of Ethics in the Professions at the Illinois Institute of Technology (http://ethics.iit.edu/ecodes/) differs in at least three ways from other collections of codes now available online. First, it is by far the largest, with over 2500 "codes" ("guidelines," "canons," "principles," "standards," "rules," "vows," or the like) from more than 1500 organizations. Second, it is searchable by discipline and topic, as well as by keywords such as "public," "environment," and "sustain." Third, and most important, unlike most collections, it does not replace old codes with new. Instead, it is a historical database of codes, adding new versions of codes alongside the old, thereby creating a convenient resource for anyone who wants to study the history of codes—as I propose to do here. My subject is not all codes, though, just codes of engineering ethics. Codes of engineering ethics are numerous enough and varied enough for the purpose of this chapter: to demonstrate one use for the Ethics Codes Collection.

As I look over the codes of engineering ethics in the Collection, I see at least four trends. Two are not surprising: One is the increasing number of codes coming from non-English-speaking countries. Adopting a code of ethics seems to have become a global practice for engineers. The second trend began in the 1990s. Engineering codes increasingly include a reference to "sustainability," "sustainable development," or at least explicit concern for the welfare of future generations.1 The reference to sustainability may or may not replace an older reference to "the environment." The third trend is surprising. There seems to be a movement toward shorter "codes of ethics" (so-called), usually combined with much longer "guidelines" or "rules of practice," producing a segmented document or pair of documents somewhat longer than the older "long code" but not much over 2000 words. The fourth trend, really a non-trend, is the continuing independence of developments in engineering ethics from medical ethics. This is surprising only because medical ethics is a field so much larger than engineering ethics, so much better funded, and with so many more journals and "ethicists." Its gravitational force should be enormous but is not—for codes of engineering ethics, at least. What I propose to do here is briefly provide evidence from the Ethics Codes Collection for the four trends that I have just identified and then seek to advance the debate concerning what sort of code of ethics is best for engineers (if only one is). But before I do that, I should clarify what I mean by a "code of engineering ethics."

1  See, for example, the Code of Ethics of the American Society of Civil Engineers (1996), which appears to be the first American code to include a provision something like this: "[Canon 1.] Engineers shall hold paramount the safety, health, and welfare of the public and shall strive to comply with the principles of sustainable development in the performance of their professional duties." (Italics mine.)
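As an aside, the kind of trend-spotting just described can in principle be reproduced computationally over any corpus of dated code texts. The sketch below is hypothetical: it assumes a local directory of plain-text files whose names end in the year of adoption (e.g., asce_1996.txt), an invented convention, whereas the Ethics Codes Collection itself is searched through its web interface.

# Hypothetical sketch: count, per decade, the dated code texts that
# mention a keyword stem such as "sustain" (cf. the second trend above).
import re
from collections import Counter
from pathlib import Path

def keyword_counts_by_decade(corpus_dir, keyword):
    """Count codes per decade whose text matches the keyword stem.

    Assumes file names like 'asce_1996.txt', with the adoption year
    as the final underscore-separated token (an invented convention).
    """
    hits = Counter()
    for path in Path(corpus_dir).glob("*.txt"):
        year = int(path.stem.rsplit("_", 1)[-1])
        text = path.read_text(encoding="utf-8", errors="ignore")
        if re.search(keyword, text, flags=re.IGNORECASE):
            hits[10 * (year // 10)] += 1
    return hits

# Example: keyword_counts_by_decade("codes/", r"sustain")
# might show counts rising sharply from the 1990s onward.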

3.2  Clarification

"Engineering" (like "engineer") is an honorific suggesting both rigor and success. Like other honorifics, "engineering" is sometimes used where it should not be, for example, in such expressions as "social engineering," "financial engineering," and "genetic engineering." The fields so named—even if technologies or applied sciences—are no more engineering than architecture or computer science is. They call themselves "engineering" to invite the trust that engineering has earned but they have not.

What then is "engineering"? Unfortunately, there is no wholly satisfactory verbal formula answering that question. Engineering is, of course, what engineers do, but engineers do many things. They design bridges, chemical plants, harbors, office towers, electrical grids, materials, microchips, and other complex systems. They manage the installation, construction, or manufacture of those complex systems, oversee their operation and maintenance, and even plan their disposal. They also do research, teach, and otherwise work as academics. Though members of other occupations may do such things too (for example, architects also "design buildings"), engineers, as such, differ from these other "technologists" in how they do what they do. Engineering is a certain way of doing certain things, a distinct discipline. That discipline has a history, a story connecting teachers to students in an unbroken chain beginning at a specific place and time.2 It is that history or, more exactly, the people living it, rather than any verbal formula, that defines engineering.

The English word "engineer" comes from the French génie. The first people to be called "engineers" were soldiers of the Renaissance associated with "engines of war" (cannons, siege towers, and the like). They were not engineers in the sense that concerns us. They were engineers only in the sense that they operated (or otherwise worked with) "engines" (that is, complex devices for some useful purpose). Their descendants are the train drivers, janitors, and the like whom we call "railroad engineers," "building engineers," "stationary boiler engineers," and so on.

In 1676, the French organized their (military) "engineers" into a special military unit, the corps du génie. The corps consisted of officers who would be sent out when needed to direct the sapping of walls, placement of artillery, construction of fortifications, and so on. These engineers relied on ordinary soldiers for most of the labor. Within two decades, this special unit was known all over Europe for unusual achievements in military construction and siegecraft. When another country borrowed the French word "engineering" (or, rather, some variant of the French génie) for use in its own army, it was for the sort of activity officieurs du génie engaged in. They borrowed the French word because their own language lacked a term for what officieurs du génie did.

At first, the corps du génie was more like an organization of masters and apprentices, its success dependent on learning from experience and informally passing on what was learned. Only during the 1700s did the French come to understand what officieurs du génie should be and how to get it by formal education. By the end of the 1700s, the French had a curriculum from which today's engineering curriculum differs only in detail. They had also invented engineering (in a new sense), a discipline distinguished from others by its knowledge of modern physics, chemistry, and mathematics, skill in organizing large numbers of unskilled workers, comfort working within a large organization, willingness to document failure as well as success, and concern with utility rather than beauty.

So far, this is the story of a military discipline. Only in the nineteenth century did engineering develop a civilian branch. That happened when civilian industry, beginning with the railroads, found a use for the special way engineers did things.

2  If the chain is broken, there is a new discipline, analogous to the old, however much it borrows. So, for example, the Renaissance did not revive ancient learning; it merely offered a modern version, much as a statue of a horse merely offers something like the horse that served as its model.


Engineering only became an occupation, that is, a way to earn a living, once "gentlemen" trained in engineering (the civilian equivalent of officiers du génie) could earn their own living by their own work. Only late in the nineteenth century or early in the twentieth did engineering become a profession, that is, a number of people in the same occupation organized to earn a living by openly serving a certain moral ideal in a morally permissible way beyond what law, market, morality, and public opinion would otherwise require.3 The history of a profession is the story of organizations, standards of competence, and standards of conduct, their change as well as their creation. For engineering, that story is in large part about engineering's curriculum and its code of ethics, as well as its technical standards. So, for example, there are at least two reasons why, say, "software engineering" (despite the name) is not engineering (in the sense civil or mechanical engineering undoubtedly is). First, the curriculum of software engineering belongs to computer science, not engineering. Software engineers learn how to work primarily from computer scientists, not engineers. They learn different technical standards and belong to a different chain of students and teachers than engineers—even computer engineers—do. Second, the code of ethics for software engineers is distinct from the codes of ethics for engineers (strictly so-called) both in content and origin.4

3 For a defense of this definition of profession, see Michael Davis, Profession, Code, and Ethics (Aldershot: Ashgate, 2002).
4 For more information, see Davis 1995, 2011, and compare the Software Engineering Code of Ethics and Professional Practice (1998) with, for example, the Code of Ethics for Engineers of the National Society of Professional Engineers (2007).

What then is a code of ethics? Strictly speaking, a code is a systematic statement of rules of some sort, whether of laws ("the legal code"), standards ("the boiler code"), ciphers (Morse code), or the like—and, by analogy, something that can be represented as a systematic statement, such as "the genetic code" or an "unwritten code." A code of ethics is a systematic statement of "ethics."

"Ethics" (so used) might be either: (a) a synonym for ordinary morality (the standards of conduct that should guide all reasonable persons); or (b) the name of certain morally permissible standards of conduct governing members of a group simply because they are members of that group (where a "group" is any collection of reasonable persons greater in number than one but less than everyone). There is, of course, a third common use of "ethics," one philosophers favor. It names the attempt to understand morality, including ethics in its second sense, as a reasonable undertaking. When philosophers teach "Ethics," it is a course in moral theory. However, when they teach "Business Ethics" or "Engineering Ethics," it is still (in part at least) an attempt to understand a part of morality, the special standards of business or engineering (actual or proposed), as a reasonable undertaking.

A "code of ethics" in the first sense of "ethics" would be a systematic statement of ordinary moral rules, principles, or ideals. Those who understand "ethics" in this way might as well, and often do, speak of "engineering morality" instead of "engineering ethics"—or "professional morality" instead of "professional ethics." They must then explain why most professionals, not only most engineers, want to talk about the "ethics" of their profession, not its "morality."

In the second (special standards) sense, engineering ethics resembles law. Just as a law (or a legal system) applies only to a certain group of moral agents (and certain objects and nonmoral agents), those within its jurisdiction, so engineering ethics (in this sense) applies only to a group of moral agents, engineers. And just as law includes the interpretation, application, justification, and criticism of particular laws, so engineering ethics includes the interpretation, application, justification, and criticism of engineering's special standards. The philosophical contribution to engineering ethics (in the special-standards sense of "ethics") resembles legal philosophy's contribution to law, more a sorting of facts, concepts, and arguments than an application of moral theory ("ethics" in our third sense). Philosophers as such can advise legislators, judges, or lawyers, but they cannot, as philosophers, settle what laws are. They can, nonetheless, settle what laws should be insofar as they can show a certain law, whether actual or proposed, is, or is not, unreasonable or immoral.

Nonetheless, a code of ethics differs from a code of law in at least one way. A code of ethics is (ideally at least) a standard every member of the group in question (at her most reasonable) wants every other member of the group to follow, even if others' following those standards would mean having to do the same. Law is a standard that applies to every member of the group in question whether everyone, indeed anyone, in the group wants it or not. A code of ethics is primarily enforced by conscience, good will, and reason; law as such, by force, the threat of force, or some other external incentive.

A code of ethics may appear in engineering under another title, for example, "principles of professional responsibility," "canons of ethics," "rules of conduct," or "ethical guidelines." However titled, a code of ethics will belong to one of four categories: (1) professional (for example, the Code of Ethics of the American Society of Civil Engineers), a code applying to all, and only, "engineers"; (2) subprofessional (for example, the Asian Engineers' Guidelines of Ethics), a code applying to some subset of engineers ("Asian engineers"), not engineers in general; (3) organizational (like the Code of Ethics of the American Institute of Aeronautics and Astronautics), a code applying only to "members" of a certain technical, scientific, or other society, whether engineers or not; or (4) institutional (such as the Computer Ethics Institute's Ten Commandments of Computer Ethics), a code applying to anyone involved in a certain activity (using a computer), whether an engineer or not, whether a member of the enacting organization or not (ASCE 2006, Fundamental Canon 1; Chinese Academy of Engineering et al. 2004; AIAA 2013; Computer Ethics Institute 1992).

The first formal code of engineering ethics (about 360 words) seems to have been adopted in 1887 by the Canadian Society of Civil Engineers (Engineering Institute of Canada 1887).5 Since this short code applied only to members of that association, not to all engineers, it is an organizational code. The first professional code of ethics for engineers seems to have been the code of ethics of the American Institute of Electrical Engineers (AIEE), adopted in 1912 (about 1100 words) (AIEE 1907, 1912).6 While many professions in the United States (law and medicine, most famously) make an express commitment to the profession's code of ethics a formal requirement for admission to practice, engineering does not (except in certain countries, such as Canada, or for specific purposes, such as the licensing of Professional Engineers in some American states). Instead, the expectation of commitment for most engineers typically reveals itself only when an engineer is found to have violated the code. The defense, "I'm an engineer, but I didn't promise to follow the code and therefore did nothing wrong," is never acceptable. The profession answers, "You committed yourself to the code when you claimed to be an engineer rather than a chemist, technician, technical expert, or the like."

5 Given the way it is printed (with the Society's older name), it probably dates from before 1887.
6 Interestingly, the first draft of this code, dated 1907, was a subprofessional code. It applied to "electrical engineers," not to engineers as such (as all its successors have).

3.3  Globalization

The history of codes of engineering ethics outside English-speaking countries has yet to be written, but we now know some things that we did not know even a few decades ago. Most important is that that history is much longer than we once supposed. The Norwegian Engineering Association adopted a code of engineering ethics in 1921 (about 320 words), the earliest non-English code in the Ethics Codes Collection (Norwegian Engineering Association 1921).7 We still know nothing about how that code came to be. We know more about how Chinese and Japanese engineers came to adopt their respective codes of ethics in the 1930s (Zhang and Davis 2018). Perhaps there are other early codes to be discovered.

7 I say "about" because I am giving the word count of the English translation. The word count in the original might differ by several percent from the word count in the translation because (a) one translation can differ that much from another (translation being an art) and (b) languages differ in what counts as a word (some languages have a single compound word where English has a phrase, or vice versa).

We may now divide the development of codes of engineering ethics outside English-speaking countries into three periods. The first is the period before World War II just sketched. The second period is from 1940 to 1995, and the third from then to the present.

Like the first period, the second period of codes of engineering ethics outside English-speaking countries has only a few widely scattered items, though more than the first. The first of these non-English-language codes seems to be the "Confession of the Engineers" adopted by the German Association of Engineers (VDI) in 1950 (VDI 1950). Though very short (about 200 words), it is a true professional code, applying to "engineers" as such. Just over 10 years later (1961), the Technical Chamber of Greece adopted the "Professional Code of Greek Licensed Engineers" (about 900 words). Since holding an engineering degree is all Greece requires for an engineer to be "licensed," and the code applies to "licensed engineers," this code is (more or less) a true professional code. Five years later, the Engineering Society of Finland adopted a "Code of Honor" somewhat longer than the German "Confession." Though similar in content to codes of engineering ethics, it has at least four oddities. First, in form, the Finnish "Code of Honor" is simply an individual's oath or promise ("I will, in all my acts and deeds, obey the rules of life contained in this code of honor"), not the promise of a "we." The oath or promise is not a condition for joining the Engineering Society. Second, the oath or promise is not a commitment simply to follow certain special standards of a profession, organization, or institution but "rules of life." Third, the code applies to architects as well as engineers, that is, to two professions rather than one. Fourth, the code seems to use the term "profession" in the sense of occupation rather than in the sense we have given "profession" here (Engineering Society of Finland 1966). It is unlike any other code of engineering ethics in the Ethics Codes Collection.

No non-English-language code in the Collection dates from the 1970s. Then there is a small burst in the 1980s, all in Spanish-speaking countries: the Code of Professional Ethics adopted by Venezuela's College of Engineers in 1980 (about 550 words); the Code of Ethics of the Mexican Union of Engineering Associations, adopted in 1983 (about 770 words); and Puerto Rico's Canons of Ethics of Engineers and Surveyors, adopted in 1985 (almost 1000 words). The Venezuelan code is divided into three parts. The first part, the first 15 articles, is dated "27-06-57"; the second part, article 16, is dated "04-10-76"; and the third part, article 17, is dated "27-06-80". The 1957 date for the first 15 articles seems to make the Venezuelan code the oldest code of engineering ethics in a Spanish-speaking country, much older than the Collection's official 1980 date suggests. The two amendments seem to show that Venezuelan engineers paid attention to what was in their code, at least up to 1980. The 1976 amendment is the earliest attempt to protect the environment in any of the engineering codes in the Collection. Engineers are not to: "Intervene directly or indirectly in the destruction of natural resources, or omit the corresponding action to avoid the production of facts that contribute to environmental deterioration."8 The Venezuelan code is also one of the rare codes to consist entirely of prohibitions.9

8 My translation (using Google).
9 Its Article 17 is also unusual. It seeks to protect employers.

This burst ends with the 1990 statement of "Obligations of Members" of the Indian National Academy of Engineering, one of the shortest codes of ethics ever (50 words):

As a Fellow of the Indian National Academy of Engineering, I shall follow the code of ethics, maintain integrity in research and publications, uphold the cause of Engineering and the dignity of the Academy, endeavour to be objective in judgement, and strive for the enrichment of human values and thoughts.10

10 The files of the Codes Collection include two other (much longer) codes of ethics for Indian engineers, but neither is dated: Code of Ethics for Members of Indian Society of Engineers (undated), about 280 words; and Code of Ethics for Members, Indian Institute of Chemical Engineers (undated), about 400 words. Both seem to belong to the 1980s or earlier.

The Ethics Codes Collection has no codes from outside English-speaking countries adopted between 1990 and 1999. Then there was another burst. There were two new codes in 1999: the Code of Ethics of Japan's Society of Civil Engineers (about 800 words) and the Code of Ethics of the College of Engineers of Peru (over 4500 words, including disciplinary procedures) (Japan Society of Civil Engineers 1999; College of Engineers of Perú 1999). Seven more codes followed during the next 8 years. Three were national codes, one each from France (about 660 words), Germany (almost 1200 words), and Puerto Rico (about 3000 words). However, four were international codes, one each from the European Council of Civil Engineers (about 550 words), the World Federation of Engineering Organizations (about 3000 words), the Pan-American Academy of Engineering (about 3000 words), and the European Federation of National Engineering Associations (about 500 words) (European Council of Civil Engineers 2000; Engineers and Scientists of France 2001; World Federation of Engineering Organizations n.d.; Association of German Engineers 2002; Pan American Academy of Engineering 2006; FEANI 2006; Engineers and Surveyors of Puerto Rico 2009). There was also a joint declaration of ethics by the national academies of engineering of China, Japan, and Korea in 2004, the "Asian Declaration" (just under 400 words) (Chinese Academy of Engineering et al. 2004).

The only codes in the Collection dated after 2004 are from Hong Kong (83 pages) and Chile (about 850 words) (Hong Kong Institution of Engineers 2011; Association of Engineers of Chile 2013). There are also two undated codes that may come from this period but are probably older: the Ethical Code of the Industrial Engineers from the Dominican Republic (about 3500 words) and the Code of Ethics of the Federal Association of Engineers and Architects of Costa Rica (about 550 words) (University of Santiago, Dominican Republic n.d.; Federal Association of Engineers and Architects of Costa Rica 1974). The Costa Rican code is another consisting entirely of prohibitions.

This history of codes of engineering ethics is, of course, limited by what the Center for the Study of Ethics in the Professions at Illinois Institute of Technology has been able to collect so far. There has been no systematic search for codes not in English. Instead, the collection has grown as individuals have contributed codes that they happened on—and the Ethics Center's librarian has confirmed and received permission to post. So, for example, I have omitted the following Chinese codes of engineering ethics because they are not yet in the Ethics Codes Collection: China National Association of Engineering Consultants (1999); China Engineering Cost Association (2002); Plant Consultant Engineers (2009); Survey and Design Engineering Professionals (2014); and Engineering Consultant Professionals (2015) (Zhang and Davis 2018, 121–122). Perhaps much of the history reported here will have to be revised as new codes come in—and are translated into English for the convenience of those English-speaking scholars who cannot read the language in question.

3.4  Sustainability

Since the early nineteenth century, engineers (or, rather, civilian engineers) have openly served the moral ideal of improving the material condition of humanity. Sustainability seems to be a modern aspect of that ideal. Future generations are an easily forgotten part of humanity. So, engineers are inclined to make an explicit connection between their longstanding concern for human welfare and their new concern for "sustainability." For the American Society of Civil Engineers (ASCE), for example, sustainability "is [in part at least] the challenge of meeting human needs for natural resources, industrial products, energy, food, transportation, shelter, and effective waste management while conserving and protecting environmental quality and the natural resource base essential for future development" (2006, Fundamental Canon 1 and endnote 3). For other engineers, sustainability may be an even broader concept. For example, the German Association of Engineers (VDI) declares: "The fundamental orientation in designing new technological solutions is to maintain today and for future generations, the options of acting in freedom and responsibility" (VDI 2002).11

11 The Association of German Engineers (VDI) does not use the word "sustainability," however.

Today's "sustainability" clauses seem to belong to a long development going back at least to the first American code of engineering ethics that sought to protect the public. That was the 1913 code of the American Institute of Chemical Engineers. It forbids members to "engage in any occupation which is obviously contrary to law or public welfare" (American Institute of Chemical Engineers 1913, 9th rule). In 1924, the American Association of Engineers (AAE) adopted a somewhat stronger provision. The first rule of its new code was, "The Engineer should regard his duty to the public welfare as paramount to all other obligations" (AAE 1924). Though the AAE soon disappeared, someone must have remembered its code, because almost identical language (including the word "paramount") reappeared in 1957 in the code of ethics developed by the Engineers' Council for Professional Development (ECPD), the association of engineering societies responsible for accrediting engineering programs in the United States: "He will regard his duty to the public welfare as paramount" (Rule 9). Some engineering societies adopted this code—with many of the rest, such as the National Society of Professional Engineers (NSPE), endorsing it while keeping their own (1957, Introduction).

Though the word "paramount" did not appear in any major engineering code until 1957, many engineers seem to have shared a general sense that the public welfare has a special place among the considerations that should guide them. For example, the first NSPE code of ethics in the Collection (1935), apparently only a proposal, included the following rule: "The engineer shall at all times and under all conditions seek to promote the public welfare by safeguarding life, health and property" (B. (1)). In subsequent versions, this language was weakened. In 1946, the corresponding provision read: "He will have due regard for the safety of life and health of public and employees who may be affected by the work for which he is responsible" (NSPE 1946, Section 4). A year later, the provision was weakened again (and buried near the end of the code): "He will make provisions for safety of life and health of employees and of the public who may be affected by the work for which he is responsible" (NSPE 1947, para. 30). By 1956, the public welfare clause had disappeared altogether. Then, in 1961, the NSPE returned to the 1935 language of paramountcy, though putting it under a weaker general provision ("proper regard") that was also an early whistleblowing requirement:

Section 2—The Engineer will have proper regard for the safety, health, and welfare of the public in the performance of his professional duties. If his engineering judgment is overruled by non-technical authority, he will clearly point out the consequences. He will notify the proper authority of any observed conditions which endanger public safety and health.
a. He will regard his duty to the public welfare as paramount (NSPE 1961, 2.a).

The NSPE kept the 1961 language until 1981. Meanwhile, the Engineers' Council for Professional Development (ECPD) adopted the language of paramountcy in the first Fundamental Canon of its 1974 code: "Engineers shall hold paramount the safety, health and welfare of the public in the performance of their professional duties." The NSPE adopted this language 7 years later as part of a total revision of its code (NSPE 1981). Several other large engineering societies did the same. Then American codes of engineering ethics seem to have remained stable for the next decade. The first major code of engineering ethics in an English-speaking country to include a provision mentioning the environment seems to be the 1990 code of the Institute of Electrical and Electronics Engineers (IEEE):

[We, the members of the IEEE,… agree:] 1. to accept responsibility in making engineering decisions consistent with the safety, health, and welfare of the public, and to disclose promptly factors that might endanger the public or the environment.

Other engineering societies followed this change of emphasis over the next decade. The American Society of Civil Engineers (ASCE) seems to have been the next. In 1993, it added the following to its "Guidelines": "Engineers should be committed to improving the environment to enhance the quality of life" (Canon 1, f). Three years later, New Zealand's Institute of Professional Engineers (NZIPE) adopted a much stronger provision: "Members shall be committed to the need for sustainable management of the planet's resources and seek to minimise adverse environmental impacts of their engineering works or applications of technology for both present and future generations" (1996). This NZIPE code also seems to be the first code of engineering ethics to mention sustainability and to refer to future generations. Its provision also remains one of the strongest. Engineers Australia (formerly the Institution of Engineers, Australia) adopted a more elaborate (but weaker) set of provisions (under "Community") in its 2000 Guidance to Members:

Members:
1. should work in conformity with accepted engineering and environmental standards and in a manner which does not jeopardise the public welfare, health or safety;
2. should endeavour at all times to maintain engineering services essential to public health and safety;
3. should have due regard to requirements for the health and safety of the workforce;
4. should give due weight to the need to achieve sustainable development and to conserve and restore the productive capacity of the earth…

Other engineering organizations soon followed, including Japan's Society of Civil Engineers (2000), the German Association of Engineers (2002), the academies of engineering for China, Japan, and Korea (2004), the World Convention of Engineers (2004), and the Pan American Academy of Engineering (2006).

3.5  Short Versus Long

The debate concerning how long a code of engineering ethics should be might be quite old. Certainly, the early codes vary considerably in length. The first code we know of, that of the Canadian Society of Civil Engineers (which later became the Engineering Institute of Canada), dates from 1887; it is about 350 words (Engineering Institute of Canada 1922). The third code we know of, that of the American Institute of Electrical Engineers (AIEE), is about 1100 words (1907). The other codes we have from that period seem to cluster around one of these two poles: the 1911 code of ethics of the American Institute of Consulting Engineers is about 370 words (1911); the Code of Ethics of the American Institute of Chemical Engineers (1913), about 950; the Code of Ethics of the American Society of Mechanical Engineers (ASME), also adopted in 1913, about 1800 words; and the American Society of Civil Engineers (ASCE)'s 1914 Code, about 190 words.12 For American codes of engineering ethics, the space between 400 and 900 words is empty until the 1970s—as is the space above 3000. Codes of ethics can be much longer: for example, the first Canons of Ethics of the American Bar Association (1908) ran to almost 5000 words. (The word counts given for these documents are rounded to make the comparison of sizes easier to keep track of.)

12 In the 1911 code, the title seems to be the date of publication in the Bulletin of the American Institute of Architects, not the date of adoption.

The number of words is not the only difference between these early short and long codes of engineering ethics. The short codes typically consist of a brief, uninformative preamble and a simple list of rules. The longest of these preambles reads, "It shall be considered unprofessional and inconsistent with honorable and dignified bearing for any member of the American Institute of Consulting Engineers". The inspiration for these short codes seems to be the Ten Commandments. Indeed, the code of the American Institute of Consulting Engineers (AICE) actually has exactly ten (numbered) rules, though the other two early short codes (the Canadian Society of Civil Engineers and the ASCE) have just six.

In contrast, the early long codes typically have a slightly longer and more informative preamble, for example, "While the following principles express, generally, the engineer's relations to client, employer, the public, and the engineering fraternity, it is not presumed that they define all of the engineer's duties and obligations" (AIEE 1907). The long codes also typically divide their list of rules in some way. The American Institute of Electrical Engineers (AIEE) code collects its 22 numbered rules into five sections: A. General Principles; B. The Engineer's Relations to Client or Employer; C. Ownership of Engineering Records and Data; D. The Engineer's Relations to the Public; and E. The Engineer's Relations to the Engineering Fraternity. While the much longer code of the American Society of Mechanical Engineers (1913) is similarly sectioned, the code of the American Institute of Chemical Engineers (also 1913) is not. Instead, it is sectioned into four "articles": I. Purpose of the Code; II. THE INSTITUTE EXPECTS OF ITS MEMBERS; III. (untitled but concerned with enforcement); and IV. Amendments. Article II has 14 numbered rules, with one, the tenth, having seven (lettered) sub-rules (a–g).

While we might suppose that the inspiration for engineering's early long codes would be the code of ethics that the American Medical Association (AMA) adopted in 1847—or its new code, adopted in 1912—there are at least three reasons to think otherwise. First, both AMA codes are much longer than any of the early codes of engineering ethics. Second, neither AMA code is organized like either engineering code. Third, the content of the AMA codes differs quite a lot from any engineering code. For example, there is no engineering code that has a section dealing with "Patience, Delicacy, and Secrecy" or "Prognosis" (AMA 1847).13 Engineering's long codes seem to have developed without much influence from the AMA.

After this first burst of code writing, there appear to have been two decades of reflection. The only activity that the Ethics Codes Collection documents between 1915 and 1935 is at the American Association of Engineers (AAE). The AAE's 1922 code is just another short code, about 350 words; its 1924 code, though innovative in content, is in form just another long code, about 1600 words. Then, as if regretting the long code, the AAE adopted one more code, a "Vow of Service" (1927), about 180 words. It ends with the commitment "[to place] the Public Welfare above all other consideration."14

13 So what was the inspiration for the long codes, if there was any? Another question for historians.
14 Obligations of Members, Indian National Academy of Engineering (1990), http://ethics.iit.edu/ecodes/node/6142. The files of the Codes Collection include two other (much longer) codes of ethics for Indian engineers, but neither is dated: Code of Ethics for Members of Indian Society of Engineers (http://ethics.iit.edu/ecodes/node/6144); and Code of Ethics for Members, Indian Institute of Chemical Engineers (http://ethics.iit.edu/ecodes/node/6140). Both seem to belong to the 1980s or earlier.

We can easily imagine the debate between proponents of short codes and long.15 On the side of short codes is the hope that engineers will carry the code with them in memory just as Americans carry the Pledge of Allegiance. They could then access the code even when working in the field. The code of the Indian National Academy of Engineering, only 50 words long, seems to be the logical consummation of this conception. In contrast, on the side of long codes is the hope that the code will embody all the rules that engineers can agree on concerning how they want members of their profession to conduct themselves (a hope that the first long code tried to temper). Like a dictionary or code of laws, the long code of ethics could be a handy reference, supplying what memory does not. The proponents of long codes may imagine the engineer back in the office with the code on a shelf nearby, ready to help resolve any ethical issue that may arise. The logical consummation of this conception might be a code like that of Hong Kong (2011), a document of 83 pages.

What compromise is possible between these two conceptions of a code? For two decades after the first long codes were adopted, the answer to that question seemed to be: none. Then began another period of innovation. According to the Ethics Codes Collection, the first of these innovations appeared in the first code of ethics developed by the National Society of Professional Engineers (NSPE) (1935). That code has three sections: Introduction, Ethics, and Practice. Both Ethics and Practice consist of general principles and more specific rules. The rules seem to be applications of the principles. Most of the principles have titles similar to the American Institute of Electrical Engineers (AIEE)'s section titles, for example, Relationships with the Public (B.1). But one (E.1) has none, perhaps because it is the sole principle under Practice. What the distinction between "Ethics" and "Practice" is, is not clear. Nor is it clear why the lettered principles are also numbered.

Though innovative in form (as well as content), the NSPE code of 1935 is just another long code. It may also be only a draft. The version in the Ethics Codes Collection is titled "Code for Society: Ethics and Practice Suggestions." Two short paragraphs follow that title. The first indicates that the code is a set of "proposals, pertinent to ethics and practice…submitted for an expression of opinion." The second paragraph asks the reader to "Kindly urge your Chapter and State Society to appoint committees to report" (NSPE n.d.).16

We can imagine the drafters of this document to have had at least two purposes in mind. The first was to satisfy the proponents of short codes with the five principles, which might be removed from the rest of the code and carried in memory, the rules then being (more or less) "derivable" from the principles. The second purpose the drafters might have had in mind was to satisfy proponents of long codes with a long code (about 1000 words).

15 Of course, it would be better if a historian could find the relevant documents and reconstruct the debate. Imagination is often a poor cousin to reality.
16 Compare NSPE, "History of the NSPE Code of Ethics for Engineers", https://www.nspe.org/resources/ethics/code-ethics/history-code-ethics-engineers.

The NSPE's codes of 1946 (another proposal) and 1947 (adopted) are actually more like the AIEE's 1912 code than their common predecessor. The principles are gone; only the rules remain, divided into sections by title (NSPE 1946, 1947).

The next innovation in the NSPE code did not come until 1957. These "Canons" include 62 Rules, for example, "Rule 1. He will be guided in all his relations by the highest standards." The Rules are collected under 28 Sections, each headed by a principle, such as "Section 1. The engineer will co-operate in extending the effectiveness of the engineering profession by interchanging information and experience with other engineers and students and by contributing to the work of engineer societies, schools, and the scientific and engineering press." The Sections are collected under the following titles: Professional Life; Relations with the Public; Relations with Clients and Employers; Relations with Engineers; and Miscellaneous. The Sections come from the 1947 code of the Engineers' Council for Professional Development (ECPD), which was supposed to represent the ethical consensus of all the engineering societies then belonging to the ECPD (1947). This consensus code differed little in length or format from the AIEE's code of 1912. The chief innovation of the NSPE's 1957 code, apart from its 3000-word length (a result of combining two long codes), is organizing the Rules into Sections with the explicit purpose that "The Rules should help everyone to understand the Canons better and they are presented immediately following the section of the Canons to which they refer." So far, both the arrangement of the code and its title ("Canons") suggest that the 1908 code of the American Bar Association (ABA), still in force though occasionally amended, might have had some influence. But preceding these 3000 words are two paragraphs, one titled "Forward" and the other "Canons of Ethics," a sort of double preamble. The 1957 code has another thousand words (all under another "Canons of Ethics" title) preceding this double preamble. Six of its paragraphs provide yet another preamble, explaining the NSPE's relationship to the code. Next, there is an "Introduction" of ten paragraphs explaining the booklet in which the new code apparently was to be printed. Finally, there is the "Engineer's Creed" (similar to the 1927 "Vow of Service" of the American Association of Engineers), about 87 words, beginning, "As a Professional Engineer, I dedicate my professional knowledge and skill to the advancement and betterment of human welfare" (NSPE 1957).

The 1957 code has the look of a draft rather than a final document. So, it is not surprising that the NSPE published yet another code 4 years later. There were at least five critical changes. First, most of the preambles were removed (saving about 1000 words). Second, the word count of the code itself dropped from about 3000 to about 2000. Third, the number of Sections dropped from 28 to 15. Fourth, the subsections were lettered instead of numbered (and no longer called "Rules"). Fifth, the new Preamble seemed designed to incorporate the content of the Engineer's Creed (as well as to help with interpreting the Code):

The Engineer, to uphold and advance the honor and dignity of the engineering profession and in keeping with high standards of ethical conduct:

- Will be honest and impartial, and will serve with devotion his employer, his clients, and the public;
- Will strive to increase the competence and prestige of the engineering profession;
- Will use his knowledge and skill for the advancement of human welfare (ECPD 1961).17

Any influence the ABA Canons of Ethics might have had on the NSPE code seemed to have disappeared. The next innovation came more than a decade later (1974), the work of the ECPD rather than the NSPE. This was a short code (192 words), divided into four Fundamental Principles (a Roman-numerated version of the NSPE's 1961 Preamble) and seven Fundamental Canons (resembling in content the NSPE's more numerous 1961 "Sections") (ECPD 1974a, b). Three years later, the ECPD added "Guidelines for Use with the Fundamental Canons of Ethics" (ECPD 1977). The Guidelines were, in fact, a long code (almost 2700 words) consisting of the Fundamental Canons (providing general principles much as the previous Sections did) and rules offering applications of the general principles. The Fundamental Canons kept their numbers; the subtending rules were lettered (a, b, c, and so on)—with sub-rules lettered and numbered (a.1, a.2, and so on). The title of this document, especially the reference to the "Fundamental Canons," suggested a connection with the unmentioned 1974 code and raised a question about the status of the missing four "Fundamental Principles" of that code. The ECPD has retained the 1977 code unchanged until today while changing the organization's name, first to "Accreditation Board for Engineering and Technology" and then to "ABET." ABET seems to have left the writing of codes of engineering ethics to the engineering societies (ABET 2015).18

17 See n32 for ECPD (1961).
18 Recently, ABET did adopt a short code of ethics for its volunteers and staff consisting of ten rules (about 300 words, including the preamble). ABET Rules of Procedure (2015), http://admin.ethicscodescollection.org/detail/cc26de46-7831-4c8b-8558-65369dcaa21d.

The societies have responded. The Institute of Electrical and Electronics Engineers (IEEE), the successor of the AIEE, was the first. It adopted a new code of ethics in 1979. An unusually long short code (about 550 words), it applied only to "members" (rather than to "engineers," as the AIEE code had). It was divided into a Preamble and four untitled Articles (IEEE 1979). It did not last. The IEEE replaced it only 11 years later with a code of less than half its length (about 250 words). That code has only a brief preamble and ten numbered rules (IEEE 1990). It has not only survived more or less unchanged until today but seems to have influenced other codes, most notably that of the American Institute of Chemical Engineers (AIChE). The AIChE's 1995 code consists of a Preamble and nine numbered rules, about 235 words. Though the format is similar to the IEEE's, both the AIChE's ordering of rules and its wording are different. So, for example, the AIChE's first rule is the familiar "[members shall] 1. Hold paramount the safety, health, and welfare of the public in performance of their professional duties" (AIChE 1995). However, the corresponding rule in the IEEE's code is "[members are] to accept responsibility in making engineering decisions consistent with the safety, health, and welfare of the public, and to disclose promptly factors that might endanger the public or the environment" (IEEE 1990). The AIChE last amended its code in 2003, adding two new rules (and deleting the numbering). One rule is unusual: "Never tolerate harassment" (AIChE 2003).

Meanwhile, the NSPE continued experimenting with its long code. Its 1981 version consists of a "Preamble" (similar in content to the Fundamental Principles), five "Fundamental Canons," five "Rules of Practice" (with many sub-rules), 11 "Professional Obligations" (with many sub-obligations), a discussion of the relation of anti-trust law to the code, a "Statement by NSPE Executive Committee" concerning a certain anti-trust decision of the US Supreme Court, and a final note emphasizing that the code applies to "real persons" (engineers), not to corporate persons as such (such as engineering firms) (NSPE 1981). The discussion, Statement, and note add about 550 words to the core of the code, already about 2000 words. After 1981, changes in the NSPE code have been only in detail. The largest of these are, first, the 1996 addition of a sixth Fundamental Canon ("Engineers shall:…Conduct themselves honorably, responsibly, ethically and lawfully so as to enhance the honor, reputation, and usefulness of the profession" (NSPE 1996)) and, second, the 2007 addition of a new sub-rule III.2.d concerned with sustainable development (NSPE 2006). The NSPE seems to be as happy with its long code as the IEEE is with its short code.

3.6  Conclusion

This history of codes of engineering ethics seems to have two themes, one concerned with content and one concerned with form. The theme of content is one of improvement, a slow growth in engineers' understanding of what they are committed to, or at least a slow increase in the written commitments they are willing to make. We focused only on the movement from concern for "the public welfare" to concern as well for the "environment," "future generations," and "sustainability." This movement was not limited to the United States. Others joined—and sometimes led the way. We might have focused on other movements, for example, the explicit regulation of conflict of interest or the avoidance of bribery.

The second theme (form) is not so much one of progress as of "change" (with the old proverb in the background: "The more things change, the more they remain the same"). The AIEE was the first engineering society to adopt a long code (long by engineering standards). Its successor, the IEEE, seems to have moved in the opposite direction, first adopting a long short code (550 words) and then quickly switching to a much shorter short code—one it has amended only in small ways since its adoption in 1990. Meanwhile, the ASCE has gone from its first short code (1914) to its present long code (2006). The only advance in form over the last century seems to be the organization of long codes into segments (for example, four "Fundamental Principles," seven "Fundamental Canons," and much longer "Guidelines") (ASCE 2006). In the US at least, codes much over 2000 words seem not to last, nor do codes that enter the range of 400–900 words.

A code doubtless expresses the values of the members of the organization adopting it. However, that expression is not a precise mirroring. The organization that adopts a code is itself a complex of ordinary members, committees that must do the drafting, and officers who carry more weight than ordinary members. A code, especially an enduring code, is generally the work of many hands. It necessarily embodies many compromises, perhaps enough of them to make it everyone’s second, third, or even fourth best, good enough to pass but no one’s darling.

References

Accreditation Board for Engineering and Technology. 2015. ABET Rules of Procedure. http://admin.ethicscodescollection.org/detail/cc26de46-7831-4c8b-8558-65369dcaa21d. Accessed 16 Dec 2019.
American Association of Engineers. 1922. Code of Ethics. http://ethicscodescollection.org/detail/4a1abd41-5749-4ed4-a3b9-d6e110062d99. Accessed 9 Dec 2019.
———. 1924. Specific Principles of Good Professional Conduct. http://ethicscodescollection.org/detail/4c3a93e8-4e52-44e0-8f57-e35e2854e72b. Accessed 9 Dec 2019.
———. 1927. Engineer's Vow of Service. http://ethicscodescollection.org/detail/9eecb3e9-e909-4210-8dd6-4c61ccb3ddb6. Accessed 9 Dec 2019.
American Bar Association. 1980. Canons of Ethics for Lawyers. http://ethics.iit.edu/codes/ABA%201980.pdf. Accessed 17 Nov 2019.
American Institute of Aeronautics and Astronautics (AIAA). 2013. Code of Ethics. http://ethicscodescollection.org/detail/012ecd5d-8bdc-445a-91c6-2953be52f1f0. Accessed 16 Dec 2019.
American Institute of Chemical Engineers. 1913. Code of Ethics. http://ethicscodescollection.org/detail/28d67236-9671-45d6-86c4-0246ac735b19. 9th Rule. Accessed 6 Dec 2019.
———. 1995. Code of Ethics. http://ethicscodescollection.org/detail/ceb4a5b4-f00c-4263-b89a-c835dff7c7e0. Accessed 6 Dec 2019.
———. 2003. Code of Ethics. http://ethicscodescollection.org/detail/4f07b734-236d-415b-8c1c-412e36438023. Accessed 6 Dec 2019.
American Institute of Consulting Engineers. 1911. Code of Ethics Adopted by the American Institute of Consulting Engineers. http://ethicscodescollection.org/detail/678cae51-4d8a-4309-aa91-848042050b29. Accessed 6 Dec 2019.
American Institute of Electrical Engineers. 1912. Code of Principles of Professional Conduct. http://ethicscodescollection.org/detail/760abacd-9d6f-42df-97d4-aa76b30d99d1. Accessed 6 Dec 2019.
American Institute of Electrical Engineers Committee on Code of Ethics. 1907. Proposed Code of Ethics. http://ethicscodescollection.org/detail/1e8b2bf6-6ce4-4b69-89eb-d1603d2cce45. Accessed 6 Dec 2019.
American Medical Association. 1847. Code of Medical Ethics. http://ethicscodescollection.org/detail/f557cc35-dac7-4fa3-9086-7dcc19e81fb0. Accessed 10 Dec 2019.
———. 1912. Principles of Medical Ethics. http://ethicscodescollection.org/detail/4cfdb199-8201-47e9-97d0-048db448ed39. Accessed 13 Dec 2019.
American Society of Civil Engineers (ASCE). 1914. Code of Ethics. http://ethicscodescollection.org/detail/ea157bae-b8f8-4acd-a8a3-d814d4441ad3. Accessed 10 Dec 2019.
———. 1993. Code of Ethics. http://ethicscodescollection.org/detail/86d0aa38-2356-4dcc-a7d5-556064826a5a. Canon 1, f. Accessed 10 Dec 2019.
———. 1996. Code of Ethics of the American Society of Civil Engineers. http://ethicscodescollection.org/detail/a001ddc8-3788-4d11-af23-42db1b9bb0aa. Accessed 10 Dec 2019.

———. 2006. Code of Ethics. http://ethicscodescollection.org/detail/a001ddc8-3788-4d11-af23-42db1b9bb0aa. Accessed 13 Dec 2019.
American Society of Mechanical Engineers (ASME). 1913. A Proposed Code of Ethics for Engineers. http://ethicscodescollection.org/detail/fdae9903-5bff-4706-82ce-88c79245de7c. Accessed 6 Dec 2019.
Association for Computing Machinery (ACM). 1998. Software Engineering Code of Ethics and Professional Practice. http://ethicscodescollection.org/detail/12390670-d0cc-4323-8628-365cf1a4cc83. Accessed 6 Dec 2019.
Association of Engineers of Chile. 2013. Code of Ethics. http://ethicscodescollection.org/detail/ea964fb0-608f-49ef-a2de-961c2631fa53. Accessed 13 Dec 2019.
Association of German Engineers (VDI). 1950. Engineer's Confession. http://ethicscodescollection.org/detail/51576daf-4c13-44ee-a973-4658e28a0af7. Accessed 6 Dec 2019.
———. 2002. Ethical Principles of the Engineering Profession. http://ethicscodescollection.org/detail/20695b8b-8c06-4ea1-b89e-f0c159bc4b03. Accessed 6 Dec 2019.
Canadian Society of Civil Engineers. 1922. The Code of Ethics of the Engineering Institute of Canada, Incorporated 1887 as the Canadian Society of Civil Engineers. The Annals of the American Academy of Political and Social Science 101: 274. http://jstor.org/stable/1014627. Accessed 6 Dec 2019.
Chinese Academy of Engineering, Engineering Academy of Japan, and National Academy of Engineering of Korea. 2004. [Asian] Declaration on Engineering Ethics. http://ethicscodescollection.org/detail/f3f331ed-be2e-44fb-9297-a3135774d77b. Accessed 3 Dec 2019.
College of Engineers and Surveyors of Puerto Rico. 1985. Code of Ethics of Engineers and Surveyors. http://ethicscodescollection.org/detail/9316dab8-6504-4663-beec-5e0d49a8098a. Accessed 16 Dec 2019.
———. 2009. Code of Ethics of Engineers and Surveyors. http://ethicscodescollection.org/detail/a7e9032c-417e-4477-a965-d5032a5adf30. Accessed 16 Dec 2019.
College of Engineers of Perú. 1999. Code of Ethics. http://ethicscodescollection.org/detail/fa87435d-93fe-4410-9333-c6fbfe937d4c. Accessed 3 Dec 2019.
College of Engineers of Venezuela. 1980. Code of Ethics. http://ethicscodescollection.org/detail/ffa05a4e-ed03-4871-bd57-2e786befa556. Accessed 16 Dec 2019.
Computer Ethics Institute. 1992. Ten Commandments of Computer Ethics. http://ethicscodescollection.org/detail/411d6362-5ab5-438b-82de-7a3575412f40. Accessed 12 Dec 2019.
Davis, Michael. 1995. Are 'Software Engineers' Engineers? Philosophy and the History of Science 4 (October): 1–24.
———. 2002. Profession, Code, and Ethics. Aldershot: Ashgate.
———. 2011. Will Software Engineering Ever Be Engineering? Communications of the ACM 54 (November): 32–34.
Engineering Institute of Canada. 1887. The Code of Ethics. http://ethicscodescollection.org/detail/18269415-afe6-4dfe-8b3e-81d5d215f61e. Accessed 4 Dec 2019.
Engineering Society of Finland. 1966. Code of Ethics. http://ethics.iit.edu/ecodes/node/6126. Accessed 16 Dec 2019.
Engineers and Scientists of France (IESF). 2001. Engineering Ethics Charter. http://ethicscodescollection.org/detail/05f23294-5b62-45a3-a61a-783d55216880. Accessed 12 Dec 2019.
Engineers Australia. 2000. Code of Ethics. http://ethicscodescollection.org/detail/7db72f3b-1f87-4e30-8460-f1accafca8fa. Accessed 16 Dec 2019.
Engineers Council for Professional Development. 1947. Canons of Ethics for Engineers. http://ethicscodescollection.org/detail/40290245-ba12-4ae9-ac8a-8ef30b0c68d9. Accessed 4 Dec 2019.
———. 1957. Code of Ethics for Engineers. http://ethicscodescollection.org/detail/96a18f7e-705c-4b31-9e57-f574f421e1ce. Accessed 4 Dec 2019.
———. 1974a. Code of Ethics of Engineers. http://ethicscodescollection.org/detail/9f2f6549-087f-415e-937d-72605a1b886e. Accessed 4 Dec 2019.
———. 1974b. Code of Ethics of Engineers. http://ethicscodescollection.org/detail/9f2f6549-087f-415e-937d-72605a1b886e/node/6401. Accessed 4 Dec 2019.

———. 1977. Suggested Guidelines for Use with the Fundamental Canons of Ethics. http://ethicscodescollection.org/detail/84b93d24-260d-4446-a9a3-2d274417f964. Accessed 4 Dec 2019.
European Council of Civil Engineers. 2000. Code of Professional Conduct of the European Council of Civil Engineers. http://ethicscodescollection.org/detail/8e757be4-1798-42b7-933f-1ab2091774fe. Accessed 16 Dec 2019.
Federal Association of Engineers and Architects of Costa Rica. 1974. Code of Ethics. http://ethicscodescollection.org/detail/56428bd6-0421-4f60-8537-94ec70e50aa9. Accessed 4 Dec 2019.
Hong Kong Institution of Engineers. 2011. Ethics in Practice: A Practical Guide for Professional Engineers. http://ethicscodescollection.org/detail/905134a3-88d0-40cc-873f-1108082e061b. Accessed 16 Dec 2019.
Indian Institute of Chemical Engineers. n.d. Code of Ethics for Members, Indian Institute of Chemical Engineers. http://ethicscodescollection.org/detail/93284c38-b652-4079-9389-8e887a8446dc. Accessed 3 Dec 2019.
Indian National Academy of Engineering. 1990. Obligations of Members. http://ethicscodescollection.org/detail/f46990a6-8fee-4994-8f8e-c58c2203cf39. Accessed 16 Dec 2019.
Indian Society of Engineers. n.d. Code of Ethics for Members of the Indian Society of Engineers. http://ethicscodescollection.org/detail/61673f07-9d00-456e-aa0f-0ffbbb29a95b. Accessed 3 Dec 2019.
Institute of Electrical and Electronics Engineers. 1979. Code of Ethics. http://ethicscodescollection.org/detail/8f3b6e4a-8ac8-49a6-a2f6-739f72eb9ea3. Accessed 3 Dec 2019.
———. 1990. Code of Ethics. http://ethicscodescollection.org/detail/5030eb1a-ff9d-4124-813f-73f56558052b. Accessed 3 Dec 2019.
Institute of Professional Engineers – New Zealand. 1996. Code of Ethics. http://ethicscodescollection.org/detail/4f35a926-93b0-4c7d-b97b-d517c6134185. Accessed 4 Dec 2019.
Japan Society of Civil Engineers. 1999. Code of Ethics. http://ethicscodescollection.org/detail/6142cef3-7591-4372-ba85-be0a1e5e0160. Accessed 13 Dec 2019.
———. 2000. Sendai Declaration 2000 on Infrastructure Development and Civil Engineering Technology. http://ethicscodescollection.org/detail/a12a7fd9-efc4-486c-800b-d92105a47d55. Accessed 11 Dec 2019.
Mexican Union of Associations of Engineers. 1983. UMAI Code of Ethics. http://ethicscodescollection.org/detail/3bffa783-90e3-444b-885e-0ed205994bbd. Accessed 16 Dec 2019.
European Federation of National Engineering Associations (FEANI). 2006. Position Paper on Code of Conduct: Ethics and Conduct of Professional Engineers. http://ethicscodescollection.org/detail/b82e2a84-a4a7-4be5-bd62-f4a0e834c865. Accessed 13 Dec 2019.
National Society of Professional Engineers (NSPE). 1935. Code of Ethics for Engineers. http://ethicscodescollection.org/detail/de391e7f-3806-4e6f-a311-60d32e676267. Accessed 13 Dec 2019.
———. 1946. Canons of Ethics for Engineers. http://ethicscodescollection.org/detail/4218bdd3-2407-44c0-b697-f9d121136416. Accessed 13 Dec 2019.
———. 1947. Canons of Ethics for Engineers. http://ethicscodescollection.org/detail/40290245-ba12-4ae9-ac8a-8ef30b0c68d9. Accessed 13 Dec 2019.
———. 1957. Canons of Ethics for Engineers. http://ethicscodescollection.org/detail/96a18f7e-705c-4b31-9e57-f574f421e1ce. Accessed 13 Dec 2019.
———. 1961. Code of Ethics for Engineers. http://ethicscodescollection.org/detail/8c58b707-fede-40fb-99ae-ea24dc901dde. Accessed 13 Dec 2019.
———. 1981. Code of Ethics for Engineers. http://ethicscodescollection.org/detail/a242c1b2-8b78-45ed-87d5-680b5bf9ceaa. Accessed 13 Dec 2019.
———. 1996. Code of Ethics for Engineers. http://ethicscodescollection.org/detail/1ab3cf15-f577-4668-8828-f8d826ad6888. Accessed 13 Dec 2019.
———. 2006. Code of Ethics for Engineers. http://ethicscodescollection.org/detail/21fb6192-6ea4-4b5e-ba6e-d0db4c7a2ba5. Accessed 13 Dec 2019.

———. 2007. Code of Ethics for Engineers of the National Society of Professional Engineers. http://ethicscodescollection.org/detail/21fb6192-6ea4-4b5e-ba6e-d0db4c7a2ba5. Accessed 13 Dec 2019.
Norwegian Engineering Association. 1921. Code of Ethics. http://ethicscodescollection.org/detail/ac0d7c31-eea1-4e5e-9883-8f443e9feed1. Accessed 13 Dec 2019.
Pan American Academy of Engineering. 2006. Code of Ethics. http://ethicscodescollection.org/detail/3b342e80-8afe-4d77-9fb6-add79b4e2454. Accessed 13 Dec 2019.
Technical Chamber of Greece. 1961. Professional Code of Greek Licensed Engineers. http://ethicscodescollection.org/detail/9e9e9771-e392-4094-ba4f-4eaf1034a9c2. Accessed 16 Dec 2019.
United Nations Educational, Scientific and Cultural Organization (UNESCO). 2004. Shanghai Declaration on Engineering and the Sustainable Future. World Engineers Convention, Shanghai. http://ethicscodescollection.org/detail/9edf734d-686c-4c2f-a314-f678f259a40e. Accessed 12 Dec 2019.
University of Santiago, Dominican Republic. n.d. Código Ético del Ingeniero Industrial. http://ethicscodescollection.org/detail/e6fabdcf-5ff1-41d8-8c1b-f16ad25b827a. Accessed 6 Dec 2019.
World Federation of Engineering Organizations. n.d. (potentially 2001). Code of Ethics. http://ethicscodescollection.org/detail/0f845d9c-ed61-4fb0-8894-69c190837002. Accessed 16 Dec 2019.
Zhang, Hengli, and Michael Davis. 2018. Engineering Ethics in China: A Century of Discussion, Organization, and Codes. Business and Professional Ethics Journal 37: 105–135. https://doi.org/10.5840/bpej201821967. Accessed 16 Dec 2019.

Michael Davis  Senior Fellow, Center for the Study of Ethics in the Professions, and Emeritus Professor of Philosophy, Illinois Institute of Technology, USA. He has published 16 books, including Engineering as a Global Profession (Rowman and Littlefield 2021). He has also published nearly 250 articles and chapters, including the recent "The Legality of the Nuremberg Trials: A Brief Lockean Memoir", International Journal of Applied Philosophy (2018); "Temporal Limits on What Engineers Can Plan", Science and Engineering Ethics (2019); and "Professionalism among Chinese Engineers: An Empirical Study" [with Lina Wei and Hangqing Cong], Science and Engineering Ethics (2019).

Chapter 4

Informed Consent in Digital Data Management

Elisabeth Hildt and Kelly Laas

Abstract  This article discusses the role of informed consent, a well-known concept and standard established in the field of medicine, in ethics codes relating to digital data management. It analyzes the significance allotted to informed consent and informed consent-related principles in ethics codes, policies, and guidelines by presenting the results of a study of 31 ethics codes, policies, and guidelines held as part of the Ethics Codes Collection. The analysis reveals that, up to now, there have been only a limited number of codes of ethics, policies, and guidelines on digital data management. Informed consent is often a central component in these codes and guidelines. While there undoubtedly are significant similarities between informed consent in medicine and in digital data management, informed consent-related standards in the ethics codes and guidelines of some fields, such as marketing, are weaker and less strict. The article concludes that informed consent is an essential standard in digital data management that can help effectively shape future practices in the field. However, a more detailed reflection on the specific content and role of informed consent and informed consent-related standards in the various areas of digital data management is needed to avoid the weakening and dilution of standards in contexts where there are no clear legal regulations.

Keywords  Informed consent · Code of ethics · Guidelines · Personally identifiable information · Big data · Surveillance · Privacy

4.1  Introduction Digital data, discrete information signals produced by machine language systems that represent other kinds of data, can be copied indefinitely and spread easily. Digital technologies allow many ways to create, store, and replicate data, extract E. Hildt (*) · K. Laas Illinois Institute of Technology, Chicago, IL, USA e-mail: [email protected]; [email protected] © Springer Nature Switzerland AG 2022 K. Laas et al. (eds.), Codes of Ethics and Ethical Guidelines, The International Library of Ethics, Law and Technology 23, https://doi.org/10.1007/978-3-030-86201-5_4

55

56

E. Hildt and K. Laas

information from data sets, and transform it for future use. Digital data allows new ways for individuals and organizations to interact with one another (National Academy of Sciences 2009; Clark et al. 2015). Digital data arises from a variety of contexts and is becoming an increasingly valuable commodity to be collected, stored, shared, and sold. In many instances, users of apps and social media provide personal information to companies and receive services in return (van Dijck 2014). In research, researchers are encouraged to collect and deposit data in digital archives for secondary use, or are obligated to do so by funding agencies to allow for greater transparency in research (for example, National Institutes of Health 2004; National Science Foundation 2017). In business, the collection of data through sensor technologies and the widespread use of the internet in daily life have increased the amount of information available to companies and the different ways this information can be collected and used (Institute for Business Ethics 2016).

In this contribution, we are primarily interested in digital data that provide information about individuals. Examples include medical and health-related data, data resulting from online research, data generated through social media, smartphones, or fitness devices, geospatial data, and data created from online purchases. Of particular concern in this context is personally identifiable information, i.e., information that either alone or in conjunction with other information can be used to identify, trace, or contact an individual person. Examples include name; personal identification numbers such as passport number, social security number, financial account number, or credit card number; address information (street address, email address); asset information such as Internet Protocol (IP) or Media Access Control (MAC) addresses or other host-specific identifiers; personal characteristics, including biometric data; and information about an individual that is linked or can be linked to one of the above, such as date of birth, activities, geographical indicators, employment information, medical information, education information, and financial information (National Institute of Standards and Technology 2010).

Whenever personally identifiable information is collected, stored, used, or deleted, issues concerning privacy, confidentiality, ownership, informed consent, and data security may arise (Moor 1997; Clark et al. 2015). Though ethical issues related to the use of personally identifiable data are nothing new, the digital nature of this data raises these questions in new ways. A recent prominent case is that of the political data firm Cambridge Analytica, which improperly collected the private information of more than 87 million Facebook users without their knowledge and sold psychological profiles of American voters to a political consulting firm connected to Donald Trump during the 2016 election (Cellan-Jones 2018; Rosenberg and Frenkel 2018; Kang and Frenkel 2018). Other recent cases include WhatsApp sharing user account information with Facebook (Denham 2016) and Google scanning the content of Gmail users' email messages for marketing purposes (Statt 2017).

In the research community, a much-discussed case arose in 2008 when a group of researchers officially released the de-identified profile data collected from the Facebook accounts of a cohort of 1700 college students from a U.S. university
(Lewis et al. 2008). However, it proved easy to identify the university, and the inclusion of data elements such as students' majors, nationalities, and extracurricular activities made it likely that individual students could be re-identified (Zimmer 2010). There are other, more recent examples. A study published in June 2014 manipulated the News Feeds of almost 700,000 Facebook users without informing them that they were involved in a research study (Kramer et al. 2014; Kleinsman and Buckley 2015). Moreover, a controversial face recognition study using facial images uploaded to a dating site spurred discussion as to whether the researchers were entitled to use the images without the consent of the dating site users (Leetaru 2017).

Cases like these have helped raise awareness of ethical issues in digital data management in many different fields, including research involving online data collection and Big Data. Institutions engaged in digital data management have become aware of the need to address these issues and to set priorities and specify rules in this area of practice. In response, some have developed codes of ethics, policies, or guidelines shaping data management practices. While this may in part be a reaction to a specific problem that occurred in the past, many of these standards may also serve as proactive goals and help to shape the future of digital data management (Metcalf 2014).

The development of policies, guidelines, and ethics codes relating to digital data management can be seen in various contexts, including the ongoing revisions of the collection of major research ethics regulations known as the Common Rule (Metcalf and Crawford 2016; Vitak et al. 2016). Such regulations can draw from well-established standards in related fields, as concern over the proper handling of data is widespread throughout the ethical guidelines of medicine, the life sciences, and the social sciences, among others.

4.2  Role of Ethics Codes and Guidelines in Process

Codes of ethics and ethical guidelines reflect morally permissible standards of conduct that members of a group make binding upon themselves, and ideally they should change as the group faces new ethical issues or questions. Codes of ethics also call upon members of that group to go beyond the standard dictates of the law and ordinary morality (Davis 1991, 2015). At their best, codes of ethics help lay the foundation for how members of a profession should act in a given situation, and help build trust between members of that profession and the public (Davis 1991).¹

1. See the Ethics Codes Collection of Illinois Institute of Technology's Center for the Study of Ethics in the Professions (http://ethicscodescollection.org), a digital repository of around 3000 professional codes that seeks to trace the development and use of ethics codes across many professions.

Since their inception, professional codes of ethics have often sought to direct how practitioners gather, use, store, share, and ultimately dispose of their data. For instance, the American Anthropological Association, in its 1971 "Principles of Professional Responsibility,"² discusses the paramount responsibility anthropologists have to the individuals being studied and goes on in section 1(c) to state:

"Informants have a right to remain anonymous. This right should be respected both where it has been promised explicitly and where no clear understanding to the contrary has been reached. These strictures apply to the collection of data by means of cameras, tape recorders, and other data-gathering devices, as well as to data collected in face-to-face interviews or in participant observation. Those being studied should understand the capacities of such devices; they should be free to reject them if they wish, and if they accept them, the results obtained should be consonant with the informant's right to welfare, dignity, and privacy."

2. http://ethicscodescollection.org/detail/cf351392-1354-4c18-92cf-0bcdd1422fd1

The updated 2012 Statement on Professional Responsibility greatly expands on this, including a new focus on digital data:

"The use of digitalization and of digital media for data storage and preservation is of particular concern given the relative ease of duplication and circulation. Ethical decisions regarding the preservation of research materials must balance obligations to maintain data integrity with responsibilities to protect research participants and their communities against future harmful impacts."³

3. Section 6. http://ethicscodescollection.org/detail/6d92a99d-a30a-4379-bf00-24d297dc8cc0

Besides illustrating the growing relevance of digital data management, this comparison between the American Anthropological Association's 1971 and 2012 statements on professional responsibility exemplifies how codes of ethics are works in progress and how these documents develop and expand over time. They respond to social and technological developments and are initiated or modified following disruptions of everyday professional practice (Metcalf 2014).

In 2016, the European Union passed the General Data Protection Regulation (GDPR), a legal regulation that seeks to protect individuals in contexts involving personal data collection and analysis. This legal framework went into effect in May 2018 and has had a profound impact on the use of digital data in sectors worldwide and on ethics codes relating to digital data management: businesses and industry associations suddenly had to meet the expanded requirements for informed consent in handling the personal data of their users. The GDPR "applies to the processing of personal data in the context of the activities of an establishment of a controller or a processor in the European Union, regardless of whether the processing takes place in the European Union or not" and to the processing of personal data of data subjects who are in the European Union, where the processing activities are related to the offering of goods and services or the monitoring of behavior (Article 3, GDPR). The primary goal of the GDPR is to protect the rights of data subjects by giving them insight into and control over the collection and processing of their personal data (Abiteboul and Stoyanovich 2019). In Chapter II, the regulation lays out fundamental principles relating to the processing of personal data: lawfulness, fairness, and transparency; purpose limitation; data minimization; accuracy; storage limitation; integrity and confidentiality; and accountability. In Chapter III, it lays out the rights of the data subject: the right to be informed, the right to access, the right to rectification, the right to erasure, the right to restrict processing, the right to data portability, the right to object, and rights in relation to automated decision-making and profiling (EU 2016/679).

Unlike the GDPR, ethics codes and ethical guidelines are not legally binding. Whereas the GDPR lays out a general legal framework, ethics codes and ethical guidelines are much more context-specific. Often, they reflect in more detail on the meaning or significance of a particular principle or concept in their respective context. Thus, even though legal regulations such as the GDPR take precedence over whatever may be written in a code of ethics, the guidance found in ethics codes reflects the ethical considerations at stake in the respective field.

4.3  Ethics Codes and Guidelines in Digital Data Management

In guidelines, policies, and ethics codes developed in the many fields of digital data management, a wide variety of ethical principles and concepts are addressed (Supplementary Table 1): dignity, respect for persons and communities, informed decision-making and informed consent, transparency, beneficence, justice, risk minimization and fair distribution of benefits and risks, accountability, procedural fairness, non-discrimination, accessibility, dissemination, reciprocity, engagement, recognition and attribution, respect for law and public interest, authorship, ownership, and custodianship (see, for example, Averweg and O'Donnell 2007; Centre for Social Justice and Community Action, Durham University 2012; Dittrich and Kenneally 2012; Global Alliance for Genomics and Health 2014; Clark et al. 2015; Oxfam 2015). Among these, questions related to privacy and informed consent are frequently considered of vital importance.

The various guidelines, ethics codes, and policies stress different concepts and principles, use different definitions for the various concepts and standards, and frame the concepts they use in different ways. They also discuss these elements in a variety of contexts. Because of this, a more comprehensive discussion requires analyzing the various documents in more detail.

In what follows, we shall focus our analysis on informed consent and informed consent-related standards, for two reasons. First, informed consent is one of the most prominent standards in digital data management; it is also of central relevance in the GDPR. Second, for decades, informed consent has played a crucial role in a broad spectrum of online and offline management of personal data. Especially in the context of clinical practice and research involving humans, there has been a particularly high awareness of data management-related ethical issues in both non-digital and digital contexts.


Supplementary Table 1  The 18 central concepts and principles used when coding the 31 ethical documents in this study

Confidentiality: The treatment of information that an individual has disclosed to an organization/researcher in a relationship of trust, and the expectation (and duty of the organization to ensure) that the information will not be shared with others without permission in ways that are inconsistent with the understanding of the original disclosure. Confidentiality specifically pertains to data (Institute of Medicine 2009).

Consumer control: Overall control of data by the consumer/user.

Data sharing: Data shared with organizations/individuals outside of the original collector of the data.

Gatekeeper function: Responsibilities of an individual/organization to apply the criteria of informed consent in a way that protects the interests of the individuals involved.

Privacy: The right of an individual to control the extent, timing, and circumstances of sharing one's data, and to keep this data private (GDPR, art. 20).

Right to access: From GDPR, art. 15: The right to obtain from the controller confirmation as to whether or not personal data concerning him or her are being processed, and access to the personal data.

Right to be informed: The right to be informed before any kind of activity or data collection commences that involves his or her personal data.

Right to data portability: From GDPR, article 20: The right to receive personal data from a vendor and transfer it to another vendor; this helps keep the data subject informed about what data a vendor has and prevents vendor lock-in, enabling a data subject to move to a new vendor without having to reconstruct her entire history.

Right to be forgotten: From GDPR, article 17: The right to obtain from the controller the erasure of personal data concerning him or her without undue delay.

Right to object: From GDPR, article 21: The right to object, on grounds relating to his or her particular situation, at any time to the processing of personal data in certain situations.

Right to rectification: From GDPR, article 16: The right to obtain from the controller without undue delay the rectification of inaccurate personal data concerning him or her.

Right to restriction of processing: From GDPR, article 18: The right to limit the ways in which an organization uses their data / right to withdraw.

Rights related to automated decision-making: From GDPR, article 22: The right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.

Transparency: Information on the relevant aspects of data collection and data management has to be accessible and provided in a clear, comprehensible, and accessible way (Turilli and Floridi 2009).

Type/amount of data collected: The various kinds and scope of data being collected about a user. This may include health information, economic information, and various kinds of personal information over a period of time.

Unanticipated use of data: The use of data in a way that is not outlined by the ethics document or policy, for instance by an outside organization or for a different research study or use not explicitly stated.

Voluntariness: The user has provided access to his or her data freely and without coercion or undue influence.

Vulnerable populations: Groups or communities at a higher risk of harm as a result of limitations due to age, mental ability, or social, economic, or political status. This often includes children, individuals with mental disabilities, prisoners, and other groups facing socio-economic disadvantages (Ruof 2004).

In medicine, a long tradition of policies and guidelines relating to data management exists, including, most prominently, the Declaration of Helsinki and the Belmont Report (World Medical Association 1964/2013; National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research 1978; UNESCO 2003; Global Alliance for Genomics and Health 2014). Documents devised in this field may prove helpful for developing policies and guidelines relating to other fields of digital data management. A significant example of this strategy is the Menlo Report – Ethical Principles Guiding Information and Communication Technology Research (Dittrich and Kenneally 2012). Developed for the Department of Homeland Security to provide a framework for ethical guidelines for computer and information security research, it relies on the Belmont Report issued in 1979, which identifies three basic ethical principles underlying research with human subjects: respect for persons, beneficence, and justice (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research 1979). In the following section, we explore informed consent in the medical context to better understand the longstanding tradition of this concept and how informed consent-related standards can be and have been applied to digital data management.

4.4  Models of Practice: Informed Consent

Informed consent in the medical context is the requirement of a formal agreement by a patient to permit a healthcare intervention after having been provided adequate information on the context, risks, and benefits. The concept of informed consent has a long tradition in medicine and research involving human subjects (Nuremberg Code 1949; World Medical Association 1964/2013; National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research 1979; Faden and Beauchamp 1986; Mason and O'Neill 2017).

In medicine and research involving human subjects, several aspects of the concept of informed consent are of central relevance: transparent information has to be provided on the relevant aspects, benefits, and risks; informed consent has a gatekeeper function, i.e., it is to be given before anything else happens; a waiver is possible; participants can quit at any time without negative implications; on request of the participant, the data collected has to be destroyed; the data collected is to be used only for the purpose or purposes specified; if the data is to be used in additional contexts (data sharing), informed consent is needed; the data collected is stored only for a limited, clearly specified duration; and special protection for non-competent individuals (children, etc.) has to be in place.

Informed consent has also been considered of central relevance in the context of information and communication technology. Notably, "The Menlo Report – Ethical Principles Guiding Information and Communication Technology Research" (Dittrich and Kenneally 2012), using the Belmont Report as a basis, discusses respect for persons and informed consent as one of the central standards governing information and communication technology research. The Menlo Report proposes a framework for ethical guidelines to be used in research about or involving information and communication technology (ICT) and reflects on the role of four core ethical principles in the context of ICT: Respect for Persons; Beneficence; Justice; and Respect for Law and Public Interest. It restates the principle of Respect for Persons in the context of Information and Communication Technology Research (ICTR) as follows:

"Respect for persons: Participation as a research subject is voluntary, and follows from informed consent; Treat individuals as autonomous agents and respect their right to determine their own best interests; Respect individuals who are not targets of research yet are impacted; Individuals with diminished autonomy, who are incapable of deciding for themselves, are entitled to protection." (p. 5)

Thus, the Menlo Report considers informed consent and informed consent-related aspects as central standards governing information and communication technology research, based on the principle of respect for persons. It stresses the overall relevance of informed consent in research involving digital data by drawing direct connections to the medical context. Furthermore, the Menlo Report states (Dittrich and Kenneally 2012, p. 7): "In the ICTR context, the principle of Respect for Persons includes consideration of the computer systems and data that directly interface, integrate with, or otherwise impact persons who are typically not research subjects themselves." In the study described below, we analyze and discuss the relevance and role of informed consent and informed consent-related standards in ethics codes and guidelines referring to ICT and digital data management.

4.5  Study Methodology

This study examines 31 different codes of ethics and guidelines (see Supplementary Table 2) from the Ethics Codes Collection held by the Center for the Study of Ethics in the Professions at the Illinois Institute of Technology (http://ethicscodescollection.org). This publicly available collection includes around 3000 normative documents from approximately 1750 different institutions. While not fully representative of all the ethics codes and guidelines from the various fields of digital data management, this cross-section of ethics codes from government (European, U.S., and international), business and industry associations (marketing, management, and social media), and non-governmental and professional associations (social aid societies, as well as professional associations from the areas of computer science, health, and information sciences) provides a representative set of ethics codes that offer guidance on how digital data is to be handled and shows what ethical principles they draw upon in providing this guidance.

The database was searched using a keyword search for guidelines that included at least one of the terms "informed consent," "data," "digital data," "privacy," or "confidentiality." Documents needed to explicitly mention the handling of data from human subjects and at least in part discuss principles, mechanisms, and strategies for handling potentially confidential data from users, patients, or data subjects. In cases where there were multiple versions of a document in the collection, we opted to use the most recent version. From our initial set of 43 documents, we narrowed the set to 31 individual documents based on the above criteria. Our final set of documents was developed from 2002 to 2020 and represents a broad swath of institutions, sectors, and fields (Fig. 4.1).

Fig. 4.1  Number of Documents in Sample by Sector
Business and Industry Associations: 9
Government and Intergovernmental Organizations: 9
Non-Governmental Organizations: 6
Professional Associations: 7

The 31 documents were divided into four different categories: business and industry associations, which include organizations such as Accenture, Facebook, and the Mobile Marketing Association; government organizations, like the U.S. Federal Trade Commission and UNESCO; non-governmental/educational organizations such as Oxfam and the University of Melbourne's Carlton Connect Initiative (which does health research); and professional associations such as the Association for Computing Machinery.

In the analysis of documents, we used the methodology of qualitative content analysis. Both authors began in March of 2020 by reading four example documents and, based on the various aspects of the standard of informed consent in medical contexts, principles governing the use of data in human subjects research, and data ethics, in particular the principles and rights from the GDPR, developed an initial list of preliminary informed consent-related ethics topics (see Supplementary Table 1). We then engaged in reliability testing, with both coders individually coding the four documents and comparing results. Through discussion, we refined our final set of codes to 18, representing the various aspects of informed consent.

The coding process began in April of 2020. All codes entered by the authors were collected in a spreadsheet individually. Each document received a score between 0 and 2 for each of the 18 codes, with 0 referring to an absent code, 1 referring to a
minor reference to the code, and 2 referring to a substantial or developed reference to the code. A substantial reference includes a paragraph or heading relating to the code, while a minor reference might consist of only a mention of the term, with little or no development of the topic. Our coding strategy took into account the length of the document: in a one-page set of guidelines, a single bullet point or sentence could be counted as a 2, whereas a 2 would warrant a paragraph or more in a ten-page document (Supplementary Table 2). To ensure the reliability of the coding, each document was coded by the two authors separately, followed by a reconciliation process during which the coders discussed and attempted to reconcile differences.

Limitations of this study need to be kept in mind when interpreting the results. Our data set included ethics codes, policies, and guidelines available in the Ethics Codes Collection (ECC). While the ECC is the most extensive collection of professional, business, and governmental codes of ethics and guidelines in the world, it by no means represents the entirety of the digital ethics landscape. Codes are only added to the ECC when copyright permission can be obtained; otherwise, a link is added to the collection leading the user to a version posted on the authoring institution's website. This approach limited our analysis to published and publicly available documents. Internal documents developed by businesses and other institutions were not included in the study. Our collection also included only documents available in the English language, which means that our dataset does not adequately represent ethics codes from non-Western countries.
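The mean scores reported in the next section are simple averages of these 0-2 scores within each sector. As an illustration only, the following minimal Python sketch shows one way such a coding spreadsheet can be aggregated; the document names, sector labels, and scores in it are hypothetical and are not taken from the study's data.

    from collections import defaultdict

    # Hypothetical coding results: one row per (document, concept) pair, with a
    # score of 0 (absent), 1 (minor reference), or 2 (substantial reference).
    rows = [
        {"document": "Doc A", "sector": "Business", "concept": "Data Sharing", "score": 2},
        {"document": "Doc A", "sector": "Business", "concept": "Transparency", "score": 1},
        {"document": "Doc B", "sector": "Government", "concept": "Data Sharing", "score": 2},
        {"document": "Doc B", "sector": "Government", "concept": "Transparency", "score": 2},
        {"document": "Doc C", "sector": "Business", "concept": "Data Sharing", "score": 1},
        {"document": "Doc C", "sector": "Business", "concept": "Transparency", "score": 0},
    ]

    # Accumulate score totals and row counts per (sector, concept) cell.
    totals = defaultdict(int)
    counts = defaultdict(int)
    for row in rows:
        key = (row["sector"], row["concept"])
        totals[key] += row["score"]
        counts[key] += 1

    # Mean score per sector and concept, analogous to Figs. 4.2 and 4.3.
    for sector, concept in sorted(totals):
        mean = totals[(sector, concept)] / counts[(sector, concept)]
        print(f"{sector:<12} {concept:<14} mean = {mean:.1f}")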

4.6  Overview – Appearance of Informed Consent-Related Standards in Ethics Codes and Guidelines

The following divides our sample of documents that discuss informed consent in digital data management by sector; the figures show the mean score of all the documents in a given sector for each principle/concept we coded for (Figs. 4.2 and 4.3).⁴

4. See Supplementary Table 1 for a list and definitions of the codes used here.

All four sectors scored the highest in the areas of data sharing and the type/amount of data collected, and lowest in the areas of automated decision-making, the right to data portability, and the right to object. Traditionally, issues of data sharing and the type/amount of data being collected have been a critical component of traditional concepts of informed consent in the biomedical fields, regardless of how the data is collected; institutional review board protocols include questions about these research issues, which fall under the Common Rule. In areas of business, the Federal Trade Commission relies on several regulations (such as the Children's Online Privacy Protection Act, the Fair Credit Reporting Act, and the CAN-SPAM Act, to name a few) (FTC 2019).


Supplementary Table 2  Informed consent guidelines and policies examined in study

No. | Institution | Document title | Year | Sector
1 | Accenture | Universal Principles of Data Ethics | Undated | Business Management
2 | American Anthropological Association | Principles of professional responsibility | 2012 | Professional association – Social sciences
3 | Association for Computing Machinery | ACM code of ethics and professional conduct: Draft 3 | 2018 | Professional association – Computer science
4 | Association of Internet Researchers | Ethical decision-making and internet research: Recommendations from the AoIR ethics working committee | 2019 | Professional association – Information sciences
5 | Association of National Advertisers | Guidelines for ethical business practice | 2019 | Industry association – Marketing
6 | Canada, Office of the Privacy Commissioner of Canada | Guidelines for obtaining meaningful consent | 2019 | Government agency
7 | Canadian Marketing Association | Code of Ethics (2004) | 2002 | Industry association – Marketing
8 | Carlton Connect Initiative, University of Melbourne | Guidelines for the ethical use of digital data in human research | 2015 | NGO/higher education – Research
9 | Digital Advertising Alliance | Self-regulatory principles for online behavioral advertising | 2009 | Industry association – Marketing
10 | Digital Analytics Association | Web Analyst's code of ethics | Undated | Industry association – Marketing
11 | European Commission, European Data Protection Supervisor | European Data Protection Supervisor opinion 4/2015: Towards a new digital ethics | 2015 | Government agency
12 | European Commission, industry partners | Draft code of conduct on privacy for mobile health applications | 2016 | Government/business partnership
13 | Facebook | Data policy | 2018 | Business – Social media
14 | Global Alliance for Genomics and Health | Framework for responsible sharing of genomic and health-related data | 2014 | Standards setting agency – Health
15 | Google | Google privacy policy | 2020 | Business – Social media
16 | Global Privacy Assembly | International standards on the protection of personal data and privacy | 2009 | Professional association – Information sciences
17 | Human Genome Organisation | Statement on human genomic databases | 2002 | Professional association – Health
18 | Interactive Advertising Bureau | IAB code of conduct | 2018 | Industry association – Marketing
19 | International Committee of the Red Cross | Rules on personal data collection | 2020 | NGO – Health
20 | Mobile Marketing Association | Mobile Marketing Association global code of conduct | 2008 | Industry association – Marketing
21 | National Research Council Canada | Draft code of ethics for community informatics researchers | 2007 | Government agency
22 | National Information Standards Organization | NISO privacy principles | 2015 | Standards setting agency – Information sciences
23 | Organization for Economic Cooperation and Development | OECD privacy framework | 2013 | Government – International
24 | Organization for Economic Cooperation and Development | The protection of children online | 2012 | Government
25 | Organization for Economic Cooperation and Development | Guidelines for human biobanks and genetic research databases | 2009 | Government – Intergovernmental agency
26 | Oxfam | Oxfam responsible program data policy | 2015 | NGO – Social advocacy
27 | PrivacySIG | Code of conduct | Undated | Industry association – Marketing
28 | Twitter | Privacy policy | 2020 | Business – Social media
29 | United Nations Educational, Scientific and Cultural Organization | International declaration on human genetic data | 2003 | Government – Intergovernmental agency
30 | United States, Department of Homeland Security | Menlo report | 2012 | Government agency
31 | United States, Federal Trade Commission | Final FTC privacy framework and implementation recommendations | 2012 | Government agency

These documents represent materials contained in the Ethics Codes Collection of the Center for the Study of Ethics in the Professions at the Illinois Institute of Technology (http://ethicscodescollection.org) that directly deal with digital data management. They do not represent the entirety of guidelines and policies that deal with informed consent and digital data, but they do provide an illuminating overview of the kinds of documents that exist.

Fig. 4.2  Mean score by sector. (Codes in yellow represent different aspects of consumer control, as outlined by the GDPR)

Fig. 4.3  Categorization of informed consent-related codes by sector, shown as mean scores across the informed consent principles and factors for the business sector (9 documents), government sector (9 documents), NGO sector (6 documents), and professional sector (7 documents). (Codes in lighter colors represent different aspects of consumer control, as outlined by the GDPR)

Issues of data portability and the right to opt out of automated decision-making appear at a much lower rate, as attention was drawn to these issues relatively recently, through the GDPR implemented in 2018 and the relatively recent growth of digital data in business, governmental, and health decision-making. As professional associations and governments continue to update these guidelines and professional codes, further attention will hopefully be paid to these critical issues. Data portability is also likely a less relevant topic in some professional fields where the data being gathered belongs to research subjects rather than to consumers, who have legitimate reasons for wanting to move their data from one platform to another.

Business and professional association guidelines and policies score high in many codes drawn from the GDPR and on several elements of consumer control, but score lower in the areas of confidentiality, gatekeeper functions, and transparency. This may reflect the driving need to meet the demands of
national and international regulations; the other sectors often come from a life or social sciences research background and thereby include more traditional biomedical and research-based principles of informed consent such as confidentiality, the duties of the researcher as a gatekeeper, and issues of transparency.

Government guidelines for handling digital data scored high in data sharing, privacy/surveillance, and transparency but scored in the middle on consumer control and many of its related aspects. These guidelines were published in the years 2002-2012 and, therefore, do not reflect more expanded notions of consumer control in the ethical handling of digital data and informed consent. However, issues of privacy and transparency have been on the radar of U.S. governmental institutions for close to 50 years, since the passage of the Privacy Act of 1974 (5 U.S.C. § 552a), which established a code of fair information practices governing the collection, maintenance, use, and dissemination of information about individuals that federal agencies maintain.

NGO documents, many of which deal with either health, human research, or social advocacy, score highest in the areas of confidentiality, data sharing, gatekeeper function, privacy/surveillance, and the type/amount of data collected. Again, this likely stems from field-specific principles arising from biomedical research and other traditions of human subject research.

In general, the professional codes of ethics scored relatively high on principles that appear in traditional models of practice but lower on codes drawn from newer regulations and guidelines, such as issues surrounding consumer control.

4.7  A Closer View on Informed Consent-Related Standards in Ethics Codes and Guidelines

In what follows, we discuss in more detail the role granted to some of the informed consent-related standards in information and communication technology and digital data management. Given the broad spectrum of informed consent-related standards, it is not possible to elaborate in this chapter on all the aspects represented in the coding approach described above. Instead, we focus on some prominent examples: informed decision-making and informed consent's gatekeeper function, transparency, consumer control, the type and amount of data collected, and data sharing.

4.7.1  Gatekeeper Function

As in medicine, informed consent in digital data management has a gatekeeper function, i.e., informed consent must be given before any kind of activity or data collection commences. However, there are several differences between these fields:

In medicine, informed consent presupposes a doctor-patient relationship and typically requires a medical doctor to convey relevant information to a patient in conversation. This interchange allows the patient to ask questions and the doctor to verify whether the patient has understood the information. None of this is possible in digital data management. Here, individuals give their consent by clicking a button, often with the details of how their personal data will be gathered, utilized, and possibly sold buried in a "terms and conditions" agreement. There is usually no face-to-face interaction in online environments, and it is not even clear whether the users have read and understood the information provided (Clark et al. 2015).

In this context, the European Data Protection Supervisor Opinion 4/2015: Towards a new digital ethics stresses that, as human beings are not entirely rational, the fact that individuals have given informed consent for the processing of their personal information does not entitle others to unlimited use:

"Under EU law, consent is not the only legitimate basis for most processing. Even where consent plays an important role, it does not absolve controllers from their accountability for what they do with the data, especially where a generalized consent to processing for a broad range of purposes has been obtained."

In Western medicine, there is general agreement that informed consent, usually written consent, is required, except in emergency situations. In emergencies in which the individuals receiving medical treatment are not able to give informed consent, proxy consent is considered an alternative. Furthermore, in special situations, a waiver is possible.

In digital data management, by contrast, there is a broader spectrum of positions. The Code of Ethics for Community Informatics Researchers (Averweg and O'Donnell 2007, p. 2-3) expresses a traditional view, similar to medical conventions: it requires that research commence only after free and informed consent has been given, ordinarily in writing, by prospective participants. However, there are also more liberal views concerning the need to obtain informed consent. For example, the OECD Privacy Framework (2013) states in its Collection Limitation Principle that "there should be limits to the collection of personal data and any such data should be obtained by lawful and fair means and, where appropriate, with the knowledge or consent of the data subject." The question, of course, is when it is, and when it is not, appropriate or necessary to obtain consent. Of relevance here is the context of data collection, which may make it difficult to obtain informed consent, as with passive methods of collecting data where no interaction takes place during which a user could give or deny consent.

The Menlo Report discusses the possibility of researchers seeking waivers of informed consent in those cases in which obtaining informed consent would make it impossible to achieve research objectives. Accordingly, this requires that (Dittrich and Kenneally 2012, p. 8): "(1) The research involves no more than minimal risk to the subjects; (2) The waiver or alteration will not adversely affect the rights and welfare of the subjects; (3) The research could not practicably be carried out without the waiver or alteration; and (4) Whenever
appropriate, the subjects will be provided with additional pertinent information after participation.”

The Menlo Report mentions situations in which it would be too difficult to identify all individuals from whom consent should be sought, or to practicably obtain consent, as situations in which a waiver of informed consent or a waiver of documentation of informed consent may be the only option. For example, in a communication traffic modeling study, it may not be feasible to obtain consent from millions of users.

However, there are also more ambiguous situations or contexts in which data may be collected or analyzed in ways individuals are not aware of. This may be the case when digital data is used in unanticipated ways without asking those who contributed for their consent (Clark et al. 2015; Zimmer 2010), for example, when research is based on material posted on social media. There is no consensus yet on how to deal with situations like this. On the one hand, some have claimed that there is no need for informed consent because the material posted on social media can be accessed freely online (Zimmer 2010). This position can be seen as being backed up by the American Anthropological Association's Ethics Statement (2012), which states that "…the observation of activities and events in fully public spaces is not subject to prior consent"; by analogy, it might be concluded that prior consent is also not needed for the observation of public internet spaces such as openly accessible forums or social media. It is questionable whether this analogy is valid, however, especially as social media research involves systematic data collection that would be much more difficult to carry out in physical public spaces. Furthermore, re-identification issues may arise, as in the 2008 study involving Facebook accounts mentioned above (Zimmer 2010). On the other hand, the fact that data pertaining to individuals, possibly including personally identifiable information, is collected without their knowledge is clearly a problem in itself. In view of this, some have argued that various possibilities for obtaining consent for research involving social media posts may be available, such as contacting those who wrote the posts and asking for permission, or gaining consent from the respective groups beforehand (Clark et al. 2015).

4.7.2  Transparency

Transparency refers to the requirement that information on the relevant aspects of data collection and data management, including potential risks, must be provided in a clear, comprehensible, and accessible way. Another transparency requirement is that users are aware of what actions are performed, for example, that users know about data collection taking place.

Transparency is considered of central relevance by various ethics codes and guidelines, especially in digital data analytics and health informatics (see, for example,
Global Alliance for Genomics and Health 2014; Digital Analytics Association 2011; European Commission 2016). The Web Analyst's Code of Ethics by the Digital Analytics Association (2011) strongly advocates transparency for practitioners who adhere to the code, stating, "I agree to educate my clients/employer about the types of data collected, and the potential risks to consumers associated with those data." The code requires practitioners to encourage their clients and employers to fully disclose consumer data practices in clear language and to educate these parties on how technologies could be perceived as invasive.

The Global Alliance for Genomics and Health stresses in its Framework for Responsible Sharing of Genomic and Health-related Data (2014) that information has to be developed and provided on the purposes, processes, procedures, and governance frameworks involved, and that the information provided "should be presented in a way that is understandable and accessible in both digital and non-digital formats" (p. 4). Concerning the requirement to present transparent information, the European Commission's Draft Code of Conduct on Privacy for Mobile Health Applications (2016, p. 7) says: "Note that consent requires that users have been provided with clear and comprehensible information first. Key information shall not be embedded in lengthy legal text." These passages relate to the well-known challenges to transparency posed by presenting information in ways that are difficult to understand, especially long and complex "terms and conditions" texts packed with details. This may lead to the majority of users failing to read the information and simply clicking "accept" to get rid of it.

In the context of online behavioral advertising, the Interactive Advertising Bureau's IAB Code of Conduct takes a narrow and rather indirect approach to transparency. Whereas the code of conduct says in its section on transparency that "Third Party and Service Providers should give a clear, meaningful, and prominent notice on their own websites that describe in detail their data collection and use practices," on the page that contains the advertisement and where the data is collected, only a clear, meaningful, and prominent link to the above disclosure has to be provided. This indirect procedure requires the consumer to find the link and follow it to the third party's homepage, where disclosure on data collection is offered.

Transparency and explicit information transfer require that individuals, first and foremost, know that their data is being collected. Without this knowledge, informed consent simply is not possible.

4.7.3  Consumer Control

Important informed consent-related standards relate to the requirement that consumers/users be able to control whether or not to participate in research activities or to allow data collection, and that they be able to exert control over the ways their data is used. The GDPR specifies the right to access, the right to rectification, the right to erasure (or the right to be forgotten), the right to restrict processing, the right to data portability (the right to shift one's data from one service provider to another by moving personal data), the right to object, and rights in relation to automated decision-making and profiling.

Particular challenges arise when consumers are not aware of their choices or when parties assume tacit consent. Opt-in and opt-out mechanisms are attempts to deal with this problem. In order to avoid a situation in which users are not aware of their data being collected, the National Information Standards Organization (NISO) writes in its NISO Privacy Principles (2015), in the section on informed consent: "The default approach/setting should be that users are opted out of library services until they explicitly choose to opt-in."

Whereas the NISO Privacy Principles assume that the standard approach is that users are opted out and must take steps to actively opt in if they so wish, the Web Analyst's Code of Ethics by the Digital Analytics Association focuses on the user's ability to actively opt out of data collection practices, implying that by default users are included. It says in a paragraph on consumer control: "I agree to inform and empower consumers to opt-out of my clients/employer data collection practices and to document ways to do this. To this end, I will work to ensure that consumers have a means to opt-out and to ensure that they are removed from tracking when requested."

While, in general, the availability of an opt-out option is considered central for consumer control, consumer control may be more or less challenging to achieve depending on the context. In particular, issues may arise when the default is that users are opted in, or when opt-out options are offered that users may not be aware of. In online behavioral advertising, the default is that a consumer's data are collected. The Interactive Advertising Bureau's IAB Code of Conduct states that "A Third Party should provide consumers with the ability to exercise choice with respect to the collection and use of data for Online Behavioral Advertising purposes or the transfer of such data to a non-Affiliate for such purpose." In reality, however, consumers may not be aware of this opt-out option, which may be challenging to find.

The same is true for consumer device-based data collection in and around retail shops. Consumers may be unaware of data collection even if, as suggested by the PrivacySIG Code of Conduct, the retail shops use stickers "to signal to the shopper that Retail Intelligence is being practiced around this location." PrivacySIG is a special-interest group consisting of companies active in retail intelligence. For organizations subscribing to this opt-out approach, store customers must find the
notices about the tracking system being used, navigate to the opt-out page offered by PrivacySIG, and then enter their unique MAC address to opt out of any future tracking by organizations in the PrivacySIG. Similar suggestions are made by the Future of Privacy Forum (2013). Clearly, this assumes a high level of diligence, action, and comprehension on the part of the individual customer.
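At bottom, the difference between these regimes is a question of defaults. The following minimal Python sketch, built around a hypothetical ConsentRecord type of our own devising (not drawn from any of the codes discussed here), shows what an opt-in default of the kind NISO recommends looks like in practice: no collection is permitted until the user makes an explicit choice.

    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class ConsentRecord:
        """Tracks one user's data-collection consent for one stated purpose."""
        user_id: str
        purpose: str
        granted: bool = False            # opt-in default: opted out until the user acts
        timestamp: Optional[datetime] = None

        def opt_in(self) -> None:
            self.granted = True
            self.timestamp = datetime.now(timezone.utc)

        def opt_out(self) -> None:
            # Withdrawing consent should be as easy as granting it.
            self.granted = False
            self.timestamp = datetime.now(timezone.utc)

    def may_collect(record: ConsentRecord) -> bool:
        # Gatekeeper check: data collection is allowed only after explicit opt-in.
        return record.granted

    record = ConsentRecord(user_id="user-123", purpose="behavioral advertising")
    print(may_collect(record))  # False: the default is opted out
    record.opt_in()
    print(may_collect(record))  # True: only after an explicit choice

An opt-out regime corresponds to flipping the default of the granted field to True, which is precisely what shifts the burden of diligence onto the consumer.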

4.7.4  Type and Amount of Data Collected

In general, most guidelines we surveyed specify that only data relevant to a particular purpose should be collected. For example, the OECD Privacy Framework (2013) says in its Data Quality Principle that "Personal data should be relevant to the purposes for which they are to be used, and, to the extent necessary for those purposes, should be accurate, complete and kept up-to-date." The NISO Privacy Principles (2015) distinguish between different types of personal data, saying that certain types (for example, data on gender, race, socioeconomic status, or ability) are considered more sensitive, so the decision to collect and use them should require higher levels of scrutiny and justification, and, once collected, such data should receive extra protection. Concerning the amount of data to be collected, the authors of the Accenture Universal Principles of Data Ethics (p. 8) write that collecting data just for the sake of having more data may complicate analysis and goes along with risks of unpredictable harmful future consequences. Several ethics codes and guidelines stress that personal data is to be used only for the purpose for which the individuals gave their consent (OECD Privacy Framework 2013; NISO Privacy Principles 2015; Accenture Universal Principles of Data Ethics, n.d.).

4.7.5  Data Sharing

While digital data can be easily shared, there is general agreement that (with the exception of cases of law enforcement) personal data should not be shared without the informed consent of those to whom the data pertain. Whenever personal data is used in additional contexts, informed consent is needed (American Anthropological Association 2012; Dittrich and Kenneally 2012; OECD Privacy Framework 2013). For example, the Menlo Report (p. 7) states: "[…] informed consent for one research purpose or use should not be considered valid for different research purposes." The authors of the Accenture Universal Principles of Data Ethics also direct attention to the reuse of data sets. In principle 2, they say: "Correlative use of repurposed data in research and industry represents both the greatest promise and greatest risk posed by data analytics."


However, the European Commission’s Draft Code of Conduct on Privacy for Mobile Health Applications (2016) outlines that secondary processing of data for historical, statistical, or scientific purposes, even when these purposes were not originally communicated, may still be possible with anonymized or pseudonymized data: "Any processing of personal data must be compatible with the purposes for which you originally collected the personal data, as communicated to the users of your app. Secondary processing of the data for historical, statistical or scientific purposes (assuming that these purposes were not originally communicated) is, however still considered as compatible with original purposes if it is done in accordance with any national rules adopted for such secondary processing. This means that, in order to process data for such secondary purposes, you will need to determine which national laws apply, and respect any restrictions."

4.8  Discussion

This chapter reflects on the role of informed consent and informed consent-related standards in codes of ethics and guidelines pertaining to digital data management. The ethics codes and guidelines reveal the informed consent-related aspects considered relevant in their respective contexts and provide details on the roles informed consent-related standards play in those areas. They are not legally binding, however, and are always subordinate to the respective legal framework. Furthermore, for some of the ethics codes and guidelines, modifications that adjust to recent legal changes may be expected in the not-too-distant future.

Overall, in our analysis, we found that in most of the ethics codes, policies, and guidelines examined, informed consent and informed consent-related standards are considered relevant, and a transfer of informed consent-related standards from medicine to digital data management is taking place. These standards are granted central relevance especially in the context of digital data management in (health-related) research. In other contexts, however, such as marketing or mobile applications, we found the standards modified, weakened, or broadly reshaped. Examples include parties assuming tacit consent or offering only opt-out options of which users may not be aware. There is also a limited understanding of what should be considered personally identifiable information, which seems either to exclude MAC or IP addresses, to pseudonymize such "unique data" (PrivacySIG n.d.), or to de-identify or de-personalize personal information or unique device information as soon as technically possible (Future of Privacy Forum 2013).

This observation is in line with the results of a 2005 study that examined the privacy policies of 22 online retailers and online travel agencies. The author, Irene Pollach (2005), found a high level of complexity in the language used and states that companies "benefit from obfuscating, mitigating, and enhancing data handling practices in that this helps them to obtain data they would not have access to if users were fully informed about data handling practices" (Pollach 2005, p. 232).


Even when informed consent documents exist, misconceptions can still arise, as shown in a 2012 article by Erika Check Hayden in the journal Nature entitled "Informed Consent: A broken contract" (Hayden 2012). The article discusses the case of the gene-testing company 23andMe, which asked participants to sign an informed consent document allowing their data to be used in research and noting that this research might lead to the company patenting and commercializing products or services. Despite this, confusion occurred, illustrating the divide between researchers and companies on one side and the public on the other in how they understand the data is likely to be used. The article goes on to outline some options to improve transparency, including researchers sending participants regular emails documenting how their data is being used, relying on individuals uploading their own data, and future technologies that might allow participants to track the use of their data over time (Hayden 2012).

Overall, a central issue in digital data management is how difficult it is for users to make free and well-informed decisions concerning their personal data and to exert effective control. Lack of transparency and conditions impeding effective user control contribute considerably to this problem. However, behavioral and cognitive factors also play a role. Alessandro Acquisti and Jens Grossklags (2005) found that many parameters affect an individual's privacy decision-making, including inconsistencies in discounting (preferring a reward received sooner over avoiding negative consequences later) that lead to under-protection and over-release of personal information. The authors stress that individuals may lack information to make privacy-related decisions, and even when they have sufficient information, they are likely to trade long-term privacy for short-term benefits (Acquisti and Grossklags 2005). Daniel J. Solove discusses several cognitive problems that impede privacy self-management (Solove 2013, p. 1888): "(1) people do not read privacy policies; (2) if people read them, they do not understand them; (3) if people read and understand them, they often lack enough background knowledge to make an informed choice; and (4) if people read them, understand them, and can make an informed choice, their choice might be skewed by various decision-making difficulties."

As the above reflections show, it is necessary to raise users' awareness of the relevance of privacy and of the possibilities of data protection and user control. At the same time, there is a clear need for institutions involved in digital data management to increase transparency and to develop ethics codes, policies, and guidelines that include effective informed consent-related standards.

A transfer of the model of informed consent to digital data management comes with both opportunities and limitations. The transfer is most evident in those digital data management fields involving human subjects research; it meets with some strain in other fields such as online marketing or commercial data use. Whereas individual autonomy has been considered central in medicine and in research involving humans for decades, in commercial contexts the focus is less on individual autonomy and more on a company's financial interests.

Overall, however, it is a more than plausible assumption that similar activities around personal data management involve similar ethical issues and require similar
strategies, independently of whether they rely on digital or non-digital data management. While it may not be possible to transfer the informed consent model rooted in medicine to digital data management without adjustments, the model certainly serves as an essential reference point. It delineates a high standard that helps to protect users' privacy and autonomy. The model of informed consent can provide guidance for digital data management, such as:

– attempt to obtain consent in as many contexts as possible, even if this may not always be feasible;
– prefer opt-in options over opt-out options;
– provide comprehensive and transparent information so that users are enabled to make an informed decision;
– seek to collect as little personally identifiable information as possible;
– only keep data as long as necessary;
– de-identify data if this does not render the data unusable for its intended purpose (see the sketch at the end of this section).

One of the issues to be discussed further is which kinds of data collection require informed consent. Whereas it is generally agreed that personally identifiable information that allows one to identify, trace, or contact a person requires the person's informed consent, the same may hold for non-identifiable data that tracks individuals' behavior, for example data collected and analyzed for purposes like economic benefit or political influence. To the maximum extent possible, individuals should know what the information pertaining to them is planned to be used for, and they should be allowed to agree or disagree and to opt out of these potential uses.
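As a simple illustration of the last point on the list above, the following Python sketch pseudonymizes the direct identifiers in a record by replacing them with keyed hashes; the record fields, key, and identifier list are hypothetical. Note that under the GDPR, pseudonymized data of this kind still counts as personal data; only full anonymization takes data out of the regulation's scope.

    import hashlib
    import hmac

    # Hypothetical record combining direct identifiers with research data.
    record = {
        "name": "Jane Doe",
        "email": "jane@example.com",
        "age_bracket": "25-34",
        "steps_per_day": 8400,
    }

    DIRECT_IDENTIFIERS = {"name", "email"}
    SECRET_KEY = b"replace-with-a-securely-stored-key"  # kept separate from the data

    def pseudonymize(value: str) -> str:
        # Keyed hash (HMAC-SHA256): identifiers cannot be re-derived without
        # the key, while equal inputs still map to equal tokens, preserving
        # linkability within the data set.
        return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

    deidentified = {
        key: (pseudonymize(str(value)) if key in DIRECT_IDENTIFIERS else value)
        for key, value in record.items()
    }
    print(deidentified)  # identifiers replaced, research fields intact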

4.9  Conclusion

Even though our analysis is not exhaustive, it can be said that up to now only a limited number of guidelines and ethics codes on digital data management are available. There is certainly room for more documents covering ethical issues in digital data management. As technological change continues and new ways to collect, use, share, and dispose of digital data evolve, there is a need to reflect on past and current practices, rethink existing priorities, and consider future ethical guidance. Existing ethics codes and guidelines in digital data management help to raise awareness of ethical issues in the field and may serve as a starting point for further developments. Overall, ethics codes, policies, and guidelines may help to develop a framework that organizations and bodies can use to guide their collection, use, sharing, and disposal of digital data.

Acknowledgments  This research was funded through a generous grant from the John D. and Catherine T. MacArthur Foundation.


References

Abiteboul, Serge, and Julia Stoyanovich. 2019. Transparency, Fairness, Data Protection, Neutrality: Data Management Challenges in the Face of New Regulation. Journal of Data and Information Quality 11 (3): 15. https://doi.org/10.1145/3310231.
Accenture. n.d. Universal Principles of Data Ethics. https://www.accenture.com/t20160629T012639Z__w__/us-en/_acnmedia/PDF-24/Accenture-Universal-Principles-Data-Ethics.pdf. Accessed 29 Mar 2019.
Acquisti, Alessandro, and Jens Grossklags. 2005. Privacy and Rationality in Individual Decision Making. IEEE Security & Privacy 3 (1): 26–33.
American Anthropological Association. 1971. Principles of Professional Responsibility. http://www.americananthro.org/ParticipateAndAdvocate/Content.aspx?ItemNumber=1656. Accessed 1 Apr 2019.
———. 2012. Principles of Professional Responsibility. http://ethics.americananthro.org/category/statement/. Accessed 8 Mar 2019.
Averweg, Udo, and Susan O'Donnell. 2007. Code of Ethics for Community Informatics Researchers. The Journal of Community Informatics 3 (1). http://ci-journal.net/index.php/ciej/article/view/441/307. Accessed 26 Mar 2018.
Cellan-Jones, R. 2018. Facebook data – as scandalous as MPs' expenses? BBC News. 19 March. http://www.bbc.com/news/technology-43458110. Accessed 4 Apr 2018.
Center for the Study of Ethics in the Professions, Illinois Institute of Technology. 2018. Ethics Codes Collection. http://ethicscodescollection.org. Accessed 25 Feb 2020.
Centre for Social Justice and Community Action, Durham University. 2012. Community-based participatory research: A guide to ethical principles and practice. http://www.livingknowledge.org/fileadmin/Dateien-Living-Knowledge/Dokumente_Dateien/Toolbox/LK_A_CBPR_Guide_ethical_principles.pdf. Accessed 17 Dec 2017.
Clark, Karin, Matt Duckham, Marilys Guillemin, Assunta Hunter, Jodie McVernon, Christine O'Keefe, Cathy Pitkin, Steven Prawer, Richard Sinnott, Deborah Warr, and Jenny Waycott. 2015. Guidelines for the Ethical Use of Digital Data in Human Research. Melbourne: The University of Melbourne, Melbourne School of Population and Global Health. https://www.carltonconnect.com.au/wp-content/uploads/2015/06/Ethical-Use-of-Digital-Data.pdf. Accessed 12 Feb 2018.
Davis, Michael. 1991. Thinking Like an Engineer: The place of a code of ethics in the practice of a profession. Philosophy and Public Affairs 20 (2): 150–167.
———. 2015. Codes of Ethics. In Ethics, Science, Technology and Engineering, ed. J.B. Holbrook and C. Mitcham, 2nd ed., 380–383. Farmington Hills: Gale, Cengage Learning.
Denham, Elizabeth. 2016. Information Commissioner updates on WhatsApp / Facebook investigation. ICO Information Commissioner's Office Blog. 7 November. https://ico.org.uk/about-the-ico/news-and-events/blog-information-commissioner-updates-on-whatsapp-facebook-investigation/. Accessed 17 Dec 2019.
Digital Analytics Association. 2011. The Web Analyst's Code of Ethics. https://www.digitalanalyticsassociation.org/codeofethics. Accessed 18 Nov 2017.
Dittrich, David, and Erin Kenneally. 2012. The Menlo Report – Ethical Principles Guiding Information and Communication Technology Research. United States, Department of Homeland Security. http://www.caida.org/publications/papers/2012/menlo_report_actual_formatted/menlo_report_actual_formatted.pdf. Accessed 12 Oct 2019.
European Commission. 2016. Draft Code of Conduct on Privacy for Mobile Health Applications. https://ec.europa.eu/digital-single-market/en/news/code-conduct-privacy-mhealth-apps-has-been-finalised. Accessed 21 Nov 2019.
European Data Protection Supervisor. 2015. Opinion 4/2015: Towards a new digital ethics. https://edps.europa.eu/sites/edp/files/publication/15-09-11_data_ethics_en.pdf. Accessed 7 Jan 2018.


Faden, Ruth R., and Tom L. Beauchamp. 1986. A History and Theory of Informed Consent. Oxford: Oxford University Press.
Future of Privacy Forum. 2013. Mobile Location Analytics Code of Conduct. https://fpf.org/wp-content/uploads/10.22.13-FINAL-MLA-Code.pdf. Accessed 7 Jan 2018.
Global Alliance for Genomics and Health. 2014. Framework for Responsible Sharing of Genomic and Health-related Data. https://www.ga4gh.org/ga4ghtoolkit/regulatoryandethics/framework-for-responsible-sharing-genomic-and-health-related-data/. Accessed 18 Nov 2019.
Hayden, Erika C. 2012. Informed Consent: A broken contract. Nature 486: 312–314. https://doi.org/10.1038/486312a.
Information Commissioner's Office, United Kingdom. 2017. Guide to the General Data Protection Regulation (GDPR). https://ico.org.uk/media/for-organisations/guide-to-the-general-data-protection-regulation-gdpr-1-0.pdf. Accessed 23 Mar 2018.
Institute for Business Ethics. 2016. Business Ethics and Big Data. Business Ethics Briefing 52. https://www.ibe.org.uk/userassets/briefings/b52_bigdata.pdf. Accessed 19 Mar 2018.
Institute of Medicine (US) Committee on Health Research and the Privacy of Health Information. 2009. The HIPAA Privacy Rule. In Beyond the HIPAA Privacy Rule: Enhancing Privacy, Improving Health Through Research, ed. S.J. Nass, L.A. Levit, and L.O. Gostin. Washington, DC: National Academies Press. https://www.ncbi.nlm.nih.gov/books/NBK9579/.
Interactive Advertising Bureau. n.d. IAB Code of Conduct. https://www.iab.com/wp-content/uploads/2015/06/IAB_Code_of_Conduct_10282-2.pdf. Accessed 5 Dec 2019.
Kang, Cecilia, and Sheera Frenkel. 2018. Facebook Says Cambridge Analytica Harvested Data of Up to 87 Million Users. New York Times. 4 April. https://www.nytimes.com/2018/04/04/technology/mark-zuckerberg-testify-congress.html. Accessed 8 Apr 2018.
Kleinsman, John, and Sue Buckley. 2015. Facebook Study: A Little Bit Unethical But Worth It? Bioethical Inquiry 12: 179–182. https://doi.org/10.1007/s11673-015-9621-0.
Kramer, Adam D.I., Jamie E. Guillory, and Jeffrey T. Hancock. 2014. Experimental Evidence of Massive-scale Emotional Contagion Through Social Networks. Proceedings of the National Academy of Sciences 111 (24): 8788–8790. https://doi.org/10.1073/pnas.1320040111.
Leetaru, Kalev. 2017. AI 'Gaydar' And How The Future Of AI Will Be Exempt From Ethical Review. Forbes. 16 September. https://www.forbes.com/sites/kalevleetaru/2017/09/16/ai-gaydar-and-how-the-future-of-ai-will-be-exempt-from-ethical-review/#704e7602c09a. Accessed 17 Jan 2018.
Lewis, Kevin, Jason Kaufman, Marco Gonzalez, Andreas Wimmer, and Nicholas Christakis. 2008. Tastes, Ties and Time: A new social network dataset using Facebook.com. Social Networks 30 (4): 330–342. https://doi.org/10.1016/j.socnet.2008.07.002.
Manson, Neil C., and Onora O'Neill. 2017. Rethinking Informed Consent in Bioethics. New York: Cambridge University Press.
Metcalf, Jacob. 2014. Ethics Codes: History, Context, and Challenges. Council for Big Data, Ethics, and Society. http://bdes.datasociety.net/council-output/ethics-codes-history-context-and-challenges/. Accessed 13 Nov 2018.
Metcalf, Jacob, and Kate Crawford. 2016. Where are human subjects in big data research? The emerging ethics divide. Big Data & Society 3 (1): 1–14. https://doi.org/10.1177/2053951716650211.
Moor, James H. 1997. Towards a theory of privacy in the information age. ACM SIGCAS Computers and Society 27 (3): 27–32. https://doi.org/10.1145/270858.270866.
National Academy of Sciences, National Academy of Engineering, and Institute of Medicine. 2009. Ensuring the Integrity, Accessibility, and Stewardship of Research Data in the Digital Age. Washington, DC: National Academy Press. http://www.onlineethics.org/?id=34249&preview=true. Accessed 7 Feb 2018.
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, United States, Department of Health, Education and Welfare. 1979. Belmont Report. https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/. Accessed 12 Jan 2018.


National Information Standards Organization. 2015. NISO Consensus Principles on Users' Digital Privacy in Library, Publisher, and Software Provider Systems (NISO Privacy Principles). https://groups.niso.org/apps/group_public/download.php/16064/NISO%20Privacy%20Principles.pdf. Accessed 9 Dec 2019.
National Institute of Standards and Technology. 2010. Guide to Protecting the Confidentiality of Personally Identifiable Information (PII), ES-1/ES-2. https://www.nist.gov/publications/guide-protecting-confidentiality-personally-identifiable-information-pii. Accessed 12 Dec 2019.
Nuremberg Code. 1949. Trials of War Criminals before the Nuremberg Military Tribunals under Control Council Law No. 10, Vol. 2, 181–182. Washington, DC: U.S. Government Printing Office. https://history.nih.gov/research/downloads/nuremberg.pdf. Accessed 7 Jan 2018.
Organization for Economic Co-Operation and Development. 2012. The Protection of Children Online. https://www.oecd.org/sti/ieconomy/childrenonline_with_cover.pdf. Accessed 12 Mar 2018.
———. 2013. The OECD Privacy Framework. http://www.oecd.org/sti/ieconomy/oecd_privacy_framework.pdf. Accessed 13 Jan 2018.
Oxfam. 2015. Oxfam Responsible Program Data Policy. https://policy-practice.oxfam.org.uk/publications/oxfam-responsible-program-data-policy-575950. Accessed 20 Jan 2018.
Pollach, Irene. 2005. A Typology of Communicative Strategies in Online Privacy Policies: Ethics, Power and Informed Consent. Journal of Business Ethics 62: 221–235. https://doi.org/10.1007/s10551-005-7898-3.
PrivacySIG. n.d. Code of Conduct. http://www.privacysig.org/code-of-conduct.html. Accessed 16 Feb 2018.
Rosenberg, Matthew, and Sheera Frenkel. 2018. Facebook's Role in Data Misuse Sets Off Storms on Two Continents. The New York Times. 18 March. https://www.nytimes.com/2018/03/18/us/cambridge-analytica-facebook-privacy-data.html?smid=tw-share. Accessed 10 Apr 2018.
Ruof, Mary C. 2004. Vulnerability, Vulnerable Populations, and Policy. Kennedy Institute of Ethics Journal 14 (4): 411–425. https://doi.org/10.1353/ken.2004.0044.
Solove, Daniel J. 2013. Introduction: Privacy self-management and the consent dilemma. Harvard Law Review 126: 1880–1903.
Statt, Nick. 2017. Google Will Stop Scanning Your Gmail Messages to Sell Targeted Ads. The Verge. https://www.theverge.com/2017/6/23/15862492/google-gmail-advertising-targeting-privacy-cloud-business. Accessed 8 Jan 2018.
Turilli, Matteo, and Luciano Floridi. 2009. The Ethics of Information Transparency. Ethics and Information Technology 11 (2): 105–112. https://doi.org/10.1007/s10676-009-9187-9.
United Nations Educational, Scientific and Cultural Organization. 2003. International Declaration on Human Genetic Data. http://ethics.iit.edu/ecodes/node/5863. Accessed 10 Jan 2018.
United States, Federal Trade Commission. 2019. Privacy and Data Security Update: 2019. https://www.ftc.gov/system/files/documents/reports/privacy-data-security-update-2019/2019-privacy-data-security-report-508.pdf.
United States, National Institutes of Health (NIH). 2003. NOT-OD-03-032: Final NIH Statement on Sharing Research Data. https://grants.nih.gov/grants/guide/notice-files/NOT-OD-03-032.html. Accessed 12 Jan 2018.
United States, National Science Foundation (NSF). 2017. NSF 2018 Grant Proposal Guide, Chapter II.C.2.j. https://www.nsf.gov/pubs/policydocs/pappg18_1/index.jsp. Accessed 6 Mar 2018.
Van Dijck, Jose. 2014. Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology. Surveillance & Society 12 (2): 197.
Vitak, Jessica, Katie Shilton, and Zahra Ashktorab. 2016. Beyond the Belmont principles: Ethical challenges, practices, and beliefs in the online data research community. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing. San Francisco, CA: Association of Computing Machinery. https://doi.org/10.1145/2818048.2820078.


World Medical Association. 2013. WMA Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Subjects. https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/. Accessed 12 Jan 2018.
Zimmer, Michael. 2010. But the Data is Already Public: on the ethics of research in Facebook. Ethics and Information Technology 12: 313–325.

Elisabeth Hildt  Professor of Philosophy and Director, Center for the Study of Ethics in the Professions, Illinois Institute of Technology, USA; [email protected]. Her research focus is on bioethics, ethics of technology, research ethics, and Science and Technology Studies. Research interests include research ethics, philosophical and ethical aspects of neuroscience, and artificial intelligence.

Kelly Laas Librarian and Ethics Instructor, Center for the Study of Ethics in the Professions, Illinois Institute of Technology, USA; [email protected]. Her research interests include the history and use of codes of ethics in professional fields, ethics education in STEM, research ethics, and integrating ethics into technical curricula.  

Chapter 5

Codes of Ethics and Research Integrity

Stjepan Ljudevit Marušić and Ana Marušić

Abstract  Research integrity, research ethics, and research misconduct are increasingly the focus of discussions in academic, professional, and research communities. As formal documents sending a message to the professional community about the standards guiding professional behavior, ethics codes seem to be a good place for addressing responsible conduct of research. However, research integrity concepts are not often addressed in ethics codes published in English. It is unclear whether ethics codes will or should become the primary safeguard against research misconduct, and what form this would take in the developing landscape of research integrity. One way forward is for ethics/integrity codes to become binding, strictly defined rules for members of the associations or institutions authoring the codes. Alternatively, ethics codes may serve as an aspirational promotion of responsible conduct of research, with the assumption that the research community has other self-correcting mechanisms to preserve the integrity of the research process. Whatever the future may hold, codes of ethics will remain an essential vehicle of the nascent attempts to safeguard against research misconduct and a good indicator of how the research community will address and resolve emerging issues in responsible conduct of research.

Keywords  Ethics codes · Professional organizations · Research integrity · Research misconduct

S. L. Marušić
Rogor, Zagreb, Croatia
A. Marušić (*)
University of Split, School of Medicine, Split, Croatia
e-mail: [email protected]
© Springer Nature Switzerland AG 2022
K. Laas et al. (eds.), Codes of Ethics and Ethical Guidelines, The International Library of Ethics, Law and Technology 23, https://doi.org/10.1007/978-3-030-86201-5_5


5.1  Introduction

Research is a part of any profession and is thus often addressed in professional ethics codes (Bateman 2012). This essay will explore how professional ethics codes define and communicate concepts of research integrity to their membership. We will first look into the definition of research integrity and its relation to research ethics. We will then present our research findings on how concepts of research integrity are addressed in professional codes of ethics. Finally, we will present our thoughts on current and future research integrity issues in professional ethics codes.

5.2  Research Integrity vs Research Ethics

Research integrity is a term that came into existence only in the 1990s, after major research scandals in the USA (Steneck 2006), although regulations, procedures, and policies around research misconduct were already present in the 1980s in the Nordic countries (Bosch 2010). Today, research integrity has become an important aspect of the research endeavor. Policymakers continue to create policies, structures, and procedures for responsible conduct in research in many countries, for example in Europe (ENRIO 2019), the USA (National Academies of Sciences, Engineering, and Medicine 2017), Australia (National Health and Medical Research Council 2018), and Japan (Ministry of Education, Culture, Sports, Science, and Technology 2014). The research community is also active in addressing responsible conduct of research, and not only as a research topic. This activity ranges from informal national networks of researchers and academics, such as the Netherlands Research Integrity Network (NRIN 2019), to national associations of stakeholders in research and higher education, such as Ireland's National Research Integrity Forum (IUA 2019), and international associations of universities, such as the League of European Research Universities (LERU 2020).

Research integrity had been considered related or even identical to research ethics, but over time, with the formalization of research integrity and scientific misconduct policies, the two definitions diverged significantly. In 2006, Nicholas Steneck provided separate definitions of research integrity and research ethics as used by the US Office of Research Integrity (ORI) (Steneck 2006). His definitions start from the premise that research ethics looks at research behavior from the standpoint of moral principles, whereas research integrity views research behavior from the standpoint of professional standards. Research integrity is thus defined as "the quality of possessing and steadfastly adhering to high moral principles and professional standards, as outlined by professional organizations, research institutions and, when relevant, the government and public," and research ethics as "the critical study of the moral problems associated with or that arise in the course of pursuing research." In Europe, the most recent definitions were provided by the project ENERI – European Network of Research Ethics and Research Integrity (ENERI 2019a):


Research ethics addresses the application of ethical principles or values to the various issues and fields of research. This includes ethical aspects of the design and conduct of research, the way human participants or animals within research projects are treated, whether research results may be misused for criminal purposes, and it refers also to aspects of scientific misconduct.

Research ethics is considered to be a more generic concept than research integrity: Research integrity is recognized as the attitude and habit of the researchers to conduct research according to appropriate ethical, legal, and professional frameworks, obligations and standards.

The two fields "combine general ethical reflections, ethics, and law as academic disciplines addressing research activities, moral attitudes of researchers, normative policies of stakeholders like sponsors or funding organizations, and various ethical expectations of the civil society" (ENERI 2019b).

Research integrity is a new but very significant concept for the research enterprise. Over the years, it has also become a research topic and developed into a small but growing research field, and, since 2007, research on research integrity has been regularly presented and discussed at the World Conferences on Research Integrity (Steneck et al. 2018). The growing importance of research integrity in the scientific community is illustrated by the number of publications addressing this topic (Fig. 5.1). In PubMed, the largest biomedical bibliographical database (over 30 million records in November 2020), the term "research integrity" first appears in 1986, in a paper discussing the importance of research integrity in community-based drug prevention programming (Pentz et al. 1986). The next indexed item is a 1989 news piece on the political ramifications of research misconduct cases in the USA (Marwick 1989). From 1991, the annual number of publications on research integrity ranged from 35 to 55, with a noticeable increase from 2015 (Fig. 5.1). The medical field seems to have embraced the topic of research integrity before other fields, as illustrated by the data from the Web of Science Core Collection (WoS) database, which indexes journals from all disciplines. In WoS, the increase in the number of publications was slower, reaching over 40 publications annually only after 2010.

In parallel with the increase in research publications on this topic, research integrity has been addressed at the policy level, with many countries introducing specific regulations, structures, and processes related to research misconduct and research integrity (Godecharle et al. 2014). Godecharle et al. (2014) investigated value-based vs. norm-based approaches in guidance documents concerning research integrity in the European Economic Area. They assessed 49 guidelines published by different bodies, from national research organizations to ministries, as well as national laws, and showed that policy documents from countries with a more legalistic approach tended to take a norm-based approach to research integrity, with clear, applied rules that should not be broken. Other countries took a more value-based approach, which addressed principles of behavior and role models.


Fig. 5.1  Number of publications related to research integrity in bibliographical databases until 2019. (A) PubMed; total number of retrieved citations = 1742, starting from 1986 (search strategy: “research integrity” [All Fields]). (B) Web of Science Core Collection; number of retrieved citations = 1133, starting from 1976 (search strategy: “research integrity” [Topic]). Date of search: 29 November 2020
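Counts like those in Fig. 5.1 can be approximately reproduced with NCBI's public E-utilities API, keeping in mind that the database is continuously updated, so numbers will drift from the November 2020 snapshot. The sketch below, using the Python requests library, illustrates the chapter's PubMed search strategy; it is not necessarily the procedure the authors used.

```python
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(query: str) -> int:
    """Return the number of PubMed records matching a query via NCBI E-utilities."""
    resp = requests.get(
        ESEARCH,
        params={"db": "pubmed", "term": query, "retmode": "json", "retmax": 0},
        timeout=30,
    )
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

# Yearly counts for the search strategy "research integrity" [All Fields];
# [dp] restricts matches to the given date of publication.
for year in range(1986, 2020):
    print(year, pubmed_count(f'"research integrity"[All Fields] AND {year}[dp]'))
```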

Recent evidence also shows that researchers and policymakers have diverged in their perceptions of research integrity (Kaiser 2014; Shaw 2019; Shaw and Satalkar 2018). It seems that researchers see research integrity more as a virtue, whereas policymakers take a more normative view in policy documents. Horbach and Halffman (2017) recently studied the textual context of research integrity in scientific publications vs. policy documents vs. newspaper articles. Using a co-word analysis, they looked at which terms were used together with "research integrity." In policy documents, research integrity was most commonly associated with terms related to "repression" and "misconduct," whereas in scientific texts, "research integrity" was related to terms designating values and promotion. The authors also analyzed the evolution of the use of the term "research integrity" from the "pre-research integrity" period (1987–1990) until 2015, showing that older policy documents and scientific literature were closer to each other in how they used the term than newer documents are. In older documents, the words used together with "research integrity" were "trust," "dignity," "responsibility," and "respect"; over time these changed to words related to sanctioning misconduct ("sanction," "correction," "failure," and "concealment").

This discourse divergence between the research community and policymakers (Horbach and Halffman 2017) may harm researchers' acceptance of policy documents if they cannot internalize the norms being established and make them a part of their work and community. Researchers are then tempted to regard policies as a ritual that needs to be completed without real compliance (Davies 2019). Furthermore, research integrity is often perceived as the absence of research misconduct instead of the presence of values and principles guiding responsible conduct of research (ALLEA 2017). A good illustration that research integrity is currently viewed as the absence of research misconduct is the fact that "research integrity" does not exist in PubMed's Medical Subject Headings (MeSH, the hierarchical dictionary for indexing in PubMed, last updated in December 2018) but maps to the MeSH term "scientific misconduct" ("Intentional falsification of scientific data by presentation of fraudulent or incomplete or uncorroborated findings as scientific fact."), introduced in 1990 (NLM 2019).
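Co-word analysis of this kind can be illustrated with a short sketch: count the words that co-occur with "research integrity" in the same sentence. This is a simplified stand-in for the method of Horbach and Halffman (2017), with invented example documents and an illustrative stopword list.

```python
import re
from collections import Counter

STOPWORDS = {"the", "and", "of", "in", "to", "a", "is", "for", "are", "with"}

def coword_counts(documents, anchor="research integrity"):
    """Count words appearing in the same sentence as the anchor phrase."""
    counts = Counter()
    anchor_words = set(anchor.split())
    for doc in documents:
        for sentence in re.split(r"[.!?]", doc.lower()):
            if anchor in sentence:
                words = re.findall(r"[a-z]+", sentence)
                counts.update(w for w in words
                              if w not in STOPWORDS and w not in anchor_words)
    return counts

docs = [
    "Research integrity requires trust and responsibility.",  # invented "scientific" text
    "Sanctions for misconduct protect research integrity.",   # invented "policy" text
]
print(coword_counts(docs).most_common())
```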

5.3  Research Integrity and Codes of Ethics

Although the body of knowledge on codes of ethics is substantial, especially in business (Bateman 2012), there is little evidence on how research integrity is addressed in these codes. Here we present the results of our study (Komić et al. 2015), in which we assessed the prevalence of research integrity terms in professional ethics codes in English, collected in the Ethics Codes Collection database from the Center for the Study of Ethics in the Professions at the Illinois Institute of Technology (2019). We first developed a list of terms related to research integrity in consultation with experts in research integrity and research ethics, including participants of the third World Conference on Research Integrity in Montreal in 2013. The final list contained 27 terms grouped around three themes related to research integrity: responsible conduct of research, questionable research practices, and research misconduct (Fig. 5.2).

Fig. 5.2  Research integrity terms used to search the Ethics Codes Collection in 2015. (From Komić et al. 2015)
– Responsible conduct of research: responsible conduct of research, authorship, contributorship, credit, ethics, honesty, integrity, secondary publication
– Questionable research practices: questionable research practices, bias, conflict of interest, competing interest, dual interest/relationship, duplicate publication, inaccuracy, misrepresentation, redundant publication, repetitive publication, salami publication
– Research misconduct: falsification, fabrication, plagiarism, misconduct, malpractice, fraud, manipulation, dishonesty

The grouping of the terms was based on the definitions used to describe research integrity. "Responsible conduct of research" is defined as the positive end of the spectrum of research behaviors, the opposite of research misconduct, which usually includes fabrication, falsification, and plagiarism (FFP, often termed "research fraud") (Steneck 2006). The category of "questionable research practices" includes "…actions that violate traditional values of the research enterprise and that may be detrimental to the research process" (Committee on Science, Engineering and Public Policy 1992). These behaviors are considered less serious than research misconduct (falsification, fabrication, and plagiarism), or there is not sufficient consensus in the research community about their seriousness.

The terms from Fig. 5.2 were then used to create search strategies, which included possible word variations, overlapping terms, and synonyms to increase the sensitivity of the search for statements related to research ethics and integrity. We analyzed all identified codes of ethics without time limitations. In the case of code revisions, we used the latest version of the code. We also separately analyzed codes from research-related professions, which were identified by research- or academia-related terms in the organization's name.

We analyzed codes from 795 professional organizations, with a total of 652 unique statements identified by our search strategy. Only 182 organizations (23%) had codes that addressed at least one research integrity term. Most of these organizations were national societies or associations. The number of terms (i.e., research integrity concepts) addressed by an organization ranged from 1 to 20, with a median of 2 terms. Most of the statements addressed ethics in general (113 statements from 63 organizations). Whereas plagiarism was mentioned in 72 statements from 59 organizations, the other forms of serious misconduct – falsification and fabrication – were addressed by 30 and 37 statements from 29 and 26 organizations, respectively. Furthermore, questionable research practices, especially questionable publication practices, were addressed by only a few organizations. For example, the term "salami publication," denoting the practice of publishing the smallest piece of research in order to increase the number of publications (Wawer 2019), was not found in any code in our study. The subsample of organizations that we considered to be directly involved in research had a significantly higher prevalence of statements addressing research integrity terms than the whole sample.
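For illustration, a sensitivity-increasing term search with word variations of the kind described above can be sketched as follows; the patterns and example codes are invented, and the study's actual search strategies were more extensive.

```python
import re

# Illustrative variant patterns for a few of the 27 terms. The alternations
# capture plurals and related word forms; \b anchors keep matches on word boundaries.
TERM_PATTERNS = {
    "plagiarism": r"\bplagiari(?:sm|se|ze|sed|zed|sing|zing)\b",
    "fabrication": r"\bfabricat(?:e|ed|es|ing|ion)\b",
    "falsification": r"\bfalsif(?:y|ies|ied|ying|ication)\b",
    "conflict of interest": r"\b(?:conflicts?\s+of\s+interests?|competing\s+interests?)\b",
}

def terms_addressed(code_text: str) -> set:
    """Return the research integrity terms that a single code of ethics mentions."""
    text = code_text.lower()
    return {term for term, pattern in TERM_PATTERNS.items() if re.search(pattern, text)}

# Invented example codes.
codes = {
    "Org A": "Members shall not commit plagiarism or falsify data.",
    "Org B": "We value collegiality and transparency in teaching.",
}
hits = {org: terms_addressed(text) for org, text in codes.items()}
print(sum(bool(t) for t in hits.values()), "of", len(codes), "codes address at least one term")
print(hits)
```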


We also analyzed the language of the statements regarding research integrity concepts. We were particularly interested in whether the language of a statement was predominantly normative or aspirational. According to the methodology described by Rose (1998), aspirational language was indicated by the use of phrases such as "strive to," "attempt to," or "seek," whereas normative language contained prescribed minimal standards (such as "… members shall not commit scientific misconduct …"). Such prescriptive language was used in 62% of all the statements, with little difference between statements addressing research misconduct and those addressing responsible conduct of research. There were also no significant differences among research disciplines, analyzed according to the 28 professional categories used by the Ethics Codes Collection.

None of the professional categories addressed all research integrity concepts. The most commonly addressed research integrity concepts were "inaccuracy," addressed by 23 professional disciplines, "credit" and "integrity," by 21 each, and "plagiarism," "author," and "contributor," by 19 disciplines. Overall, the least addressed research integrity topics were "repetitive publication," "secondary publication," and "questionable research practices." The two professional categories with the largest prevalence of statements related to research integrity were "Health Care" and "Science." In the "Health Care" category, 47 out of 100 organizations had 280 statements related to research integrity, which addressed 89% of the research integrity concepts. This was followed by the "Science" category, where 46 out of 75 organizations had 478 statements that addressed 85% of the research integrity concepts.

The language used to describe research integrity concepts was predominantly normative, prescribing the minimum standard expected of professional behavior. These findings are similar to those of other studies involving different sets of codes and research integrity concepts. A 1998 study of 90 codes from scientific professional organizations funded by the National Science Foundation in the USA showed that these codes used normative language to define authorship (Rose 1998). We also found that professional organizations use normative language to define authorship more often than scientific journals do (75% vs. 18%, respectively) (Bošnjak and Marušić 2012).

What did we learn from this study? The most important finding was that professional organizations had not made research integrity a focus. Those that did address it covered only a small number of research integrity issues, i.e., on average only 2 to 3 of the 27 topics included in our search. It is possible that not all professional organizations aim to address research in their ethics codes. Professional organizations that we judged to be directly involved in research had three times more statements addressing research integrity concepts in their ethics codes, but this was still below the level of awareness and management of responsible research expected from research organizations.
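A rough approximation of this normative-vs-aspirational coding can be expressed as a cue-phrase check, sketched below. The actual study relied on human reading of the statements, and the cue lists here are illustrative rather than the study's full coding scheme.

```python
ASPIRATIONAL_CUES = ("strive to", "attempt to", "seek")
NORMATIVE_CUES = ("shall", "must", "required to")

def classify_statement(statement: str) -> str:
    """Label a code statement as normative or aspirational from cue phrases."""
    s = statement.lower()
    if any(cue in s for cue in NORMATIVE_CUES):
        return "normative"      # prescribed minimal standard
    if any(cue in s for cue in ASPIRATIONAL_CUES):
        return "aspirational"   # an ideal to work toward
    return "unclassified"

print(classify_statement("Members shall not commit scientific misconduct."))  # normative
print(classify_statement("Members strive to assign credit accurately."))      # aspirational
```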


5.4  Changes in Research Integrity Concepts

Our study of research integrity terminology in professional codes of ethics was performed in 2015 and may not reflect the current situation, especially taking into account the increased awareness of research integrity and the intensive growth of research on it (Fig. 5.1). In the meantime, new concepts in research integrity have been developed. For example, the revised policy document of the National Academies of Sciences, Engineering, and Medicine (2017) changed the term "questionable research practices" to "detrimental research practices" and clarified that there is now more evidence about the damage these practices cause to the research process. The practices specifically named as detrimental to research are those related to the misuse of authorship, improper management of research results, problems in research supervision, misleading statistical analysis, inadequate institutional attention to research integrity, and misuse of the position of journal editors and peer reviewers.

Open Science has also brought transparency of the research process into focus, calling for knowledge to be accessible at all levels of society (Vicente-Saez and Martinez-Fuentes 2018). It includes open data (data sharing), open sources, open methodology, open peer review, open access, and open educational resources. Furthermore, increased legislative focus on the protection of personal data, particularly sensitive personal data such as health data, has introduced new terminology, such as data controlling, data ownership, and data processing. Examples of these data protection and privacy laws include the Health Insurance Portability and Accountability Act (HIPAA) in the USA (DHHS 1996), the General Data Protection Regulation in the EU (European Union 2016), and the Personal Information Protection and Electronic Documents Act in Canada (Office of the Privacy Commissioner of Canada 2019).

In the new edition of the Ethics Codes Collection from the Center for the Study of Ethics in the Professions at Illinois Institute of Technology, available at http://ethicscodescollection.org/, 28 (35%) of 79 available search topics are related to research integrity (as well as to professionalism in general) (Fig. 5.3). However, the term "research integrity" itself is not included. There is "Handling of misconduct allegations" but not the positive aspect of responsible conduct of research. The terms "responsible conduct of research" and "questionable/detrimental research practices" are also missing (although "responsible innovation" is a topic in the Collection). There is "Plagiarism" but not the other two forms of scientific fraud, fabrication and falsification.

When we searched the new Ethics Codes Collection in December 2019 for the specific term "research integrity", we retrieved 531 documents mentioning that term out of more than 2500 codes in the Collection. Of those, 390 were unique records (duplicates and older versions of documents excluded). These documents dated from 1922 to 2019, with 87 (22%) of them published since 2015, when our first study of the Collection was conducted. This means that the concept of research integrity (or "scientific integrity," as used in some documents) is making its way into professional codes of ethics.


Fig. 5.3  Search terms related to research integrity in the Ethics Codes Collection: Accountability; Authorship; Bias; Big data; Collaboration; Collegiality; Community and participatory research; Confidentiality; Conflict of interest; Data management; Honesty; Handling of misconduct allegations; Informed consent; Intellectual property and patents; Mentor and trainee; Multiple relationships; Negligence; Open access; Peer review; Plagiarism; Privacy and surveillance; Public trust; Public and community engagement; Publication ethics; Reproducibility; Social responsibility; Transparency; Whistle-blowing. (Source: Ethics Codes Collection from the Center for the Study of Ethics in the Professions at Illinois Institute of Technology, available at http://ethicscodescollection.org/)

5.5  Recommendations to Professional Organizations

As moral agents in a self-organized community (Sama and Shoaf 2008), professional organizations should follow the developments and the accumulating body of evidence on research integrity and include them in their ethics codes. We encourage the use of the checklist proposed by Anderson and Shaw (2011) as a framework for assessing existing codes on research integrity when planning revisions or constructing a new code. The proposed framework consists of 10 dimensions (Fig. 5.4).

Fig. 5.4  Dimensions for characterizing codes on research integrity. (Code dimensions according to the results presented in Anderson and Shaw (2011))
– Nature: legal documents or guidelines expressing understanding and consensus on good practices
– Purpose: regulatory codes, aspirational codes, educational codes, normative codes
– Impetus or reason for development: what leads to the writing of the code
– Subjects: the people who must observe the code
– Authors: who wrote the code
– Grounding in ethical principles and theory: pragmatic or theoretical basis for the code
– Scope and content: what is addressed in the code (e.g., covering essential points or explaining principles and their application in detail)
– Format: succinct, bullet-point prescriptions vs. detailed explanation of the code, including definitions, principles, examples of good or best practice, prescriptions, and guidelines
– Language: prescriptive vs. aspirational
– Quality: clear and well written, easy to use

However, it would be desirable to achieve more extensive consultation between different stakeholders about how to approach research integrity: as a virtue of responsible conduct of research to be promoted, or as a deterrent against research misconduct. Currently, organizations and policymakers approach the issues surrounding research integrity from perspectives that diverge from those of the researchers themselves. Organizations and policymakers take a more legal and prescriptive approach, while researchers' discourse is more about virtues and responsibility. This may lead to tensions and to non-acceptance of codes of ethics, or to merely formal subscription to them. It is probably unreasonable to expect full harmonization of research integrity principles across different countries, given the historical, cultural, legal, and socio-economic differences in establishing research integrity structures and processes (Bonn et al. 2017; Godecharle et al. 2014). However, it is important to establish a dialogue about research integrity to share experiences and be prepared for new challenges. A good example of such a collaborative initiative at the European level is the European Network of Research Integrity Offices (ENRIO 2019).

Regardless of the approach to research integrity, the language used to describe the expectations and aspirations related to research integrity is essential.

There is evidence that specific language in business codes of ethics can communicate an authoritarian message and a sense of over-obligation (Farrell and Farrell 1998). Examples of such language include the use of the passive voice and the derivation of nouns from other types of words, such as verbs or adjectives (nominalization or "nouning," e.g., adding the suffix "-ity," as in "reproducibility"). Grammatical metaphor (clauses in which one type of process is represented in the grammar of another) is another way to make language more complicated, as in changing the logical and experiential sentence "Because technology is getting better, people can write business programs faster" into "Advances in technology are speeding up the writing of business programs." (Devrim 2015). Grammatical modality is also common in ethics codes, especially the deontic modality that indicates the necessity to act (out of ability, permission, or duty). We see this in the use of words (modals) indicating what can be done (ability), what may be done (permission), what should be done (request), and what must be done (duty). Such language in ethics codes may reduce open decision-making and deter professionals from using the code.

Evidence from different professions supports this: medical professionals rarely cite traditional professional ethics codes (such as the Hippocratic Oath) as guidance in their work (Antiel et al. 2011), and university researchers report problems in using ethical guidelines and codes of conduct (Giorgini et al. 2015). A study of corporate codes of ethics showed that the quality of a code depends on several factors: how all stakeholders are involved, presentation, language style, aids for better comprehension, and the discussion of risks (Erwin 2011). Thus, only codes of conduct that are written in clear language and are easily accessible will be used by the range of potential users, from students to professional researchers to administrators. Digital tools, such as deep language analysis (Karačić et al. 2019), could be used in future research to better understand the effect of language on how codes of ethics are perceived and used in practice. Further research into the language of research integrity policies and ethics codes in general would be especially important for addressing the current differences in the language and understanding of research integrity between researchers and policymakers (Horbach and Halffman 2017).

Public availability of ethics codes is another important factor, addressed in the study of corporate ethics codes (Erwin 2011) and suggested by us in our analysis of research integrity concepts in ethics codes (Komić et al. 2015), as well as in a qualitative study of researchers' perceptions of codes of conduct (Giorgini et al. 2015). It is important that policies are publicly available so that all stakeholders have clear expectations of the research process. The transparency of research integrity codes is increasing over time, as shown by Bonn et al. (2017), who analyzed guidance on research ethics and research integrity from 18 universities in 10 European countries that were members of the League of European Research Universities (LERU). Comparing their findings from 2014 and 2016, the researchers determined that the availability of guidance documents on the institutions' websites increased and that referrals to international standards and research integrity guidance also increased.

Professional organizations should also address research ethics and research integrity issues in multidisciplinary research collaborations. Different disciplines may have different expectations regarding research practices, such as authorship. For example, while authorship requirements in biomedicine include an intellectual contribution to the concrete research presented in the study and the writing of the manuscript, researchers in high energy particle physics count any contribution as qualifying for authorship, resulting in articles with more than three thousand authors (Marušić et al. 2011). There are already some efforts toward creating ethics codes for collaboration between different disciplines, like the unified code of ethics for health professions published by the US Institute of Medicine and created by representatives from 18 different health research disciplines, such as nursing, medicine, and psychology (Wynia et al. 2014). There are also recent efforts to develop tools for promoting research integrity in organizations that perform or fund research so that they can create an environment for responsible research (Mejlgaard et al. 2020).
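Returning to the deontic modals discussed earlier in this section, a crude profile of a code's obligational tone can be computed by counting modal verbs. The sketch below is illustrative and far simpler than the deep language analysis cited above.

```python
import re

# Deontic modals ordered from weakest to strongest obligation, as in the text above.
MODAL_FORCE = {"can": "ability", "may": "permission", "should": "request", "must": "duty"}

def modality_profile(code_text: str) -> dict:
    """Count deontic modal verbs as a crude proxy for a code's obligational tone."""
    words = re.findall(r"[a-z]+", code_text.lower())
    return {modal: words.count(modal) for modal in MODAL_FORCE}

sample = ("Members must report misconduct. Researchers should share data "
          "and may consult the ethics board.")
print(modality_profile(sample))  # {'can': 0, 'may': 1, 'should': 1, 'must': 1}
```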

5.6  Conclusion

Ethics codes for researchers and professionals will continue to set standards and provide guidance to the professional community only if they reflect the changing landscape of science. Among other things, this means that ethics codes should address research integrity concepts. Professional organizations are moral participants in a self-regulated community, and responsible research may be an essential part of the profession. It is also important that research integrity is defined and viewed as a virtue and a positive standard, not only as the absence of misconduct. The language of codes in relation to research integrity is significant as well. Institutions and organizations creating ethics codes should think carefully about the language that will be used. We have sufficient evidence that clear and emotionally engaging language is important to help all stakeholders embrace and internalize research integrity in everyday practice. Finally, codes need to be revised and updated based on evidence acquired in forthcoming studies and on new demands from developing research areas.

References

All European Academies. 2017. The European Code of Conduct for Research Integrity. Revised edition. Berlin: ALLEA.
Anderson, Melissa S., and Marta A. Shaw. 2011. A Framework for Examining Codes of Conduct on Research Integrity. In Promoting Research Integrity in a Global Environment, ed. T. Mayer and N. Steneck, 133–147. Singapore: World Scientific Publishing Co.
Antiel, Ryan M., Farr A. Curlin, C. Christopher Hook, and Jon C. Tilburt. 2011. The Impact of Medical School Oaths and Other Professional Codes of Ethics: Results of a national physician survey. Archives of Internal Medicine 171: 469–470. https://doi.org/10.1001/archinternmed.2011.47.
Bateman, Connie R. 2012. Professional Ethical Standards: The journey toward effective codes of ethics. In Work and Quality of Life: Ethical practices in organizations, ed. Nora P. Reilly, M. Joseph Sirgy, and C. Allen Gorman, 21–34. Amsterdam: Springer.
Bonn, Noémie Aubert, Simon Godecharle, and Kris Dierickx. 2017. European Universities' Guidance on Research Integrity and Misconduct: accessibility, approaches, and content. Journal of Empirical Research on Human Research Ethics 12: 33–44. https://doi.org/10.1177/1556264616688980.
Bosch, Xavier. 2010. Safeguarding Good Scientific Practice in Europe. EMBO Reports 11: 252–257. https://doi.org/10.1038/embor.2010.32.
Bošnjak, Lana, and Ana Marušić. 2012. Prescribed Practices of Authorship: review of codes of ethics from professional bodies and journal guidelines across disciplines. Scientometrics 93: 751–763. https://doi.org/10.1007/s11192-012-0773-y.
Committee on Science, Engineering and Public Policy (US), Panel on Scientific Responsibility and the Conduct of Research. 1992. Responsible Science: Ensuring the integrity of the research process. Washington, DC: The National Academy Press.
Davies, Sarah R. 2019. An Ethics of the System: talking to scientists about research integrity. Science and Engineering Ethics 25: 1235–1253. https://doi.org/10.1007/s11948-018-0064-y.
Devrim, Devo Y. 2015. Grammatical Metaphor: What do we mean? What exactly are we researching? Functional Linguistics 2: 3. https://doi.org/10.1186/s40554-015-0016-7.
ENERI. 2019a. What is Research Ethics? http://eneri.eu/what-is-research-ethics/. Accessed 3 Feb 2020.
———. 2019b. ENERI Manual: Research integrity and ethics. http://eneri.eu/e-manual/. Accessed 3 Feb 2020.
ENRIO – European Network of Research Integrity Offices. 2019. http://www.enrio.eu/about-enrio/. Accessed 10 Feb 2020.
Erwin, Patrick M. 2011. Corporate Codes of Conduct: The effects of code content and quality on ethical performance. Journal of Business Ethics 99: 535–548. https://doi.org/10.1007/s10551-010-0667-y.


Ethics Codes Collection. Center for the Study of Ethics in the Professions, Illinois Institute of Technology. http://ethicscodescollection.org. Accessed 6 Jan 2020.
EU – European Union. 2016. General Data Protection Regulation. https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:02016R0679-20160504&from=EN. Accessed 8 Dec 2019.
Farrell, Helen, and Brian J. Farrell. 1998. The Language of Business Codes of Ethics: implications of knowledge and power. Journal of Business Ethics 17: 587–601. https://doi.org/10.1023/A:1005749026983.
Giorgini, Vincent, Jensen T. Mecca, Carter Gibson, Kelsey Medeiros, Michael D. Mumford, Shane Connelly, and Lynn D. Devenport. 2015. Researcher Perceptions of Ethical Guidelines and Codes of Conduct. Accountability in Research 22: 123–138. https://doi.org/10.1080/08989621.2014.955607.
Godecharle, Simon, Benoit Nemery, and Kris Dierickx. 2014. Heterogeneity in European Research Integrity Guidance: relying on values or norms? Journal of Empirical Research on Human Research Ethics 9: 79–90. https://doi.org/10.1177/1556264614540594.
Horbach, Serge P.J.M., and W. Halffman. 2017. Promoting Virtue or Punishing Fraud: mapping contrasts in the language of 'scientific integrity'. Science and Engineering Ethics 23: 1461–1485. https://doi.org/10.1007/s11948-016-9858-y.
IUA – Irish Universities Association. 2019. National Research Integrity Forum. https://www.iua.ie/for-researchers/research-integrity/#. Accessed 8 Dec 2019.
Kaiser, Matthias. 2014. The Integrity of Science – lost in translation? Best Practice & Research Clinical Gastroenterology 28: 339–347. https://doi.org/10.1016/j.bpg.2014.03.003.
Karačić, Jasna, Pierpaolo Dondio, Ivan Buljan, Darko Hren, and Ana Marušić. 2019. Languages for Different Health Information Readers: multitrait-multimethod content analysis of Cochrane systematic reviews textual summary formats. BMC Medical Research Methodology 19: 75. https://doi.org/10.1186/s12874-019-0716-x.
Komić, Dubravka, Stjepan L. Marušić, and Ana Marušić. 2015. Research Integrity and Research Ethics in Professional Codes of Ethics: survey of terminology used by professional organizations across research disciplines. PLoS One 10: e0133662. https://doi.org/10.1371/journal.pone.0133662.
LERU – League of European Research Universities. 2020. Towards a Research Integrity Culture at Universities: From Recommendations to Implementation. Advice Paper No. 26, January 2020. https://www.leru.org/files/Towards-a-Research-Integrity-Culture-at-Universities-full-paper.pdf. Accessed 29 Nov 2020.
Marušić, Ana, Lana Bošnjak, and Ana Jerončić. 2011. A Systematic Review of Research on the Meaning, Ethics and Practices of Authorship Across Scholarly Disciplines. PLoS One 6: e23477. https://doi.org/10.1371/journal.pone.0023477.
Marwick, Charles. 1989. Congress Puts Pressure on Scientists to Deal with Difficult Questions of Research Integrity. JAMA 262: 734–735.
Mejlgaard, Niels, Lex M. Bouter, George Gaskell, Panagiotis Kavouras, Nick Allum, Anna-Kathrine Bendtsen, Costas A. Charitidis, Nik Claesen, Kris Dierickx, Anna Domaradzka, Andrea Reyes Elizondo, Nicole Foeger, Maura Hiney, Wolfgang Kaltenbrunner, Krishma Labib, Ana Marušić, Mads P. Sørensen, Tine Ravn, Rea Ščepanović, Joeri K. Tijdink, and Giuseppe A. Veltri. 2020. Research Integrity: nine ways to move from talk to walk. Nature 586: 358–360. https://doi.org/10.1038/d41586-020-02847-8.
Ministry of Education, Culture, Sports, Science and Technology. 2014. New Guidelines for Responding to Misconduct in Research. http://www.mext.go.jp/en/news/topics/detail/__icsFiles/afieldfile/2015/07/14/1360017_2.pdf. Accessed 6 Dec 2019.
National Academies of Sciences, Engineering, and Medicine. 2017. Fostering Integrity in Research. Washington, DC: The National Academies Press.
National Health and Medical Research Council, Australian Research Council and Universities Australia. 2018. Australian Code for the Responsible Conduct of Research. Canberra: Commonwealth of Australia.
NLM – National Library of Medicine. 2019. Medical Subject Headings. https://www.ncbi.nlm.nih.gov/mesh/?term=research+integrity. Accessed 6 Dec 2019.
NRIN – Netherlands Research Integrity Network. 2019. https://www.nrin.nl/. Accessed 6 Dec 2019.


Office of the Privacy Commissioner of Canada. 2019. The Personal Information Protection and Electronic Documents Act (PIPEDA). https://www.priv.gc.ca/en/privacy-topics/privacy-laws-in-canada/the-personal-information-protection-and-electronic-documents-act-pipeda. Accessed 8 Dec 2019.
Pentz, Mary Ann, Calvin Cormack, Brian Flay, William B. Hansen, and C. Anderson Johnson. 1986. Balancing Program and Research Integrity in Community Drug Abuse Prevention: project STAR approach. Journal of School Health 56: 389–393. https://doi.org/10.1111/j.1746-1561.1986.tb05779.x.
Rose, M. 1998. What Professionals Expect: Scientific professional organizations' statements regarding authorship. In Science Editing & Information Management: Proceedings of the Second International AESE/CBE/EASE Joint Meeting, Sixth International Conference on Geoscience Information and Thirty-second Annual Meeting, Association of Earth Science Editors, ed. C.J. Manson and Geoscience Information Society, 15–22. Alexandria: Geoscience Information Society.
Sama, Linda M., and Victoria Shoaf. 2008. Ethical Leadership for the Professions: Fostering a moral community. Journal of Business Ethics 78: 39–46. https://doi.org/10.1007/s10551-006-9309-9.
Shaw, David. 2019. The Quest for Clarity in Research Integrity: A conceptual schema. Science and Engineering Ethics 25: 1085–1093. https://doi.org/10.1007/s11948-018-0052-2.
Shaw, David, and Priya Satalkar. 2018. Researchers' Interpretations of Research Integrity: A qualitative study. Accountability in Research 25: 79–93. https://doi.org/10.1080/08989621.2017.1413940.
Steneck, Nicholas H. 2006. Fostering Integrity in Research: definitions, current knowledge, and future directions. Science and Engineering Ethics 12: 53–74. https://doi.org/10.1007/pl00022268.
Steneck, Nicholas H., Tony Mayer, Melissa S. Anderson, and Sabine Kleinert. 2018. The Origin, Objectives, and Evolution of the World Conferences on Research Integrity. In Scientific Integrity and Ethics in the Geosciences, Special Publications 73, ed. Linda C. Gundersen, 1st ed., 3–14. New York: Wiley.
US DHHS – Department of Health & Human Services. 1996. Summary of the HIPAA Privacy Rule. https://www.hhs.gov/hipaa/for-professionals/privacy/laws-regulations/index.html. Accessed 8 Dec 2019.
Vicente-Saez, Ruben, and Clara Martinez-Fuentes. 2018. Open Science Now: A systematic literature review for an integrated definition. Journal of Business Research 88: 428–436. https://doi.org/10.1016/j.jbusres.2017.12.043.
Wawer, Jaroslaw. 2019. How to Stop Salami Science: promotion of healthy trends in publishing behaviour. Accountability in Research 5: 33–48. https://doi.org/10.1080/08989621.2018.1556099.
Wynia, Matthew K., Sandeep P. Kishore, and Cynthia D. Belar. 2014. A Unified Code of Ethics for Health Professionals: Insights from an IOM workshop. JAMA 311: 799–800. https://doi.org/10.1001/jama.2014.504.

Stjepan Ljudevit Marušić  MA in Philosophy and English from the University of Zagreb; postgraduate student at the University of Split, Croatia; [email protected]. Freelance translation and editing consultant with a focus on medical journals (Rogor Editing, Zagreb, Croatia). Interests include philosophy of science, open society, and freedom of speech, as well as research ethics, which is the topic of his doctoral thesis.

Ana Marušić  MD, PhD, Professor and Chair, Department of Research in Biomedicine and Health, University of Split School of Medicine, Split, Croatia; [email protected]. Teaches anatomy of the human body and the anatomy of the scientific article. Research focus is on research integrity and publication ethics, including the transparency of clinical trials.

Part II

Ethics Codes, Emerging Technologies, and International Viewpoints

What makes an effective code of ethics? How can ethics codes and guidelines better shape the daily professional decision-making of practitioners rather than be dismissed as window-dressing or ethics-washing? Can we peer behind a code to discern the motivations of its authors? How can governments work with scientific and professional organizations to ensure that guidelines are effective? These are all questions that the following chapters address by scrutinizing the codes and guidelines of different professions. Some chapters delve into the biomedical professions to explore how guidelines can succeed and fall short in reaching their intended audiences, and suggest key strategies for improving their impact. Other chapters explore how normative documents address ethical challenges raised by emerging technologies such as artificial intelligence (AI), nanotechnologies, and advances in biomedical engineering, and provide insight into the development of specific codes and guidelines to address a changing global technology landscape.

In clinical bioethics, ethical guidelines and professional codes have long been vital tools for orienting and supporting the decisions doctors make on their patients' behalf. Wetterauer, Schürmann, and Reiter-Theil (Chap. 6) explore how Swiss regulation and ethical guidelines—along with international codes and guidelines—help shape the use of cardio-pulmonary resuscitation (CPR) in clinical care and the development of a resuscitation guideline for a Swiss University Hospital. This case study highlights several reasons why ethical guidelines in medicine often do not reach their target audiences and offers key guidance for crafting more effective guidelines that are useful in clinical and other practical settings.

The story behind the development or revision of an ethics code is often as illuminating about the core workings of a profession as the wording of the code itself. One of the most enjoyable duties of developing and maintaining the Ethics Codes Collection at the Illinois Institute of Technology is helping members of ethics committees from professional associations and companies improve their codes of ethics. In some cases, this is a question of finding existing, related codes and using these as a starting point. In other cases, teams of scholars, practitioners, and ethicists come together to craft a series of principles and guidelines that can handle new developments in their field. This was the case with IEEE's effort to modify its code of ethics in 2017. Greg Adamson and Joe Herkert's chapter (Chap. 8) gives a detailed history of this effort to strengthen IEEE members' responsibility for the "safety, health and welfare of the public" that has long been part of engineering ethics codes, while also paying closer attention to issues of ethical design, sustainable development, the growing use of intelligent systems, and relationships between technology and society.

Many scholars have held up IEEE's Code of Ethics and its 2016 publication "Ethically Aligned Design" as excellent examples of how professional codes and guidelines can offer useful guidance to technologists working in the field of artificial intelligence and machine learning (IEEE 2016; Floridi et al. 2018; Chatila and Havens 2019). The latter document has gone through several revisions and rounds of public comment, and a second version was published in 2019. However, "Ethically Aligned Design" is only one of well over 100 normative documents seeking to shape the ethical use of AI. What can an analysis of these other documents tell us about the global consensus on the principles and issues that technologists, business leaders, policy-makers, and the public should be paying attention to, and what can we learn about the motivations behind the development of these documents? In Schiff, Laas, Biddle, and Borenstein's Chap. 7, "Global AI Ethics Documents: What They Reveal About Motivations, Practices, and Policies," the authors review the current literature on AI ethics guidelines, delve into issues of representation and power in the production of these documents, and explore why specific organizations go to great lengths to produce them. The authors then discuss why a more participatory process in developing these guidelines is likely to yield more benefits for society at large.

It is not only the motivations of a code's authors that shape ethical guidelines. Social forces, too, may play a massive role in how guidelines are drafted, accepted, and utilized by a particular profession or group of practitioners. Qin Zhu's Chap. 9, "Technocracy, Public Optimism, and National Progress: Constructing Ethical Guidelines for Responsible Nano Research and Development in China," is an excellent example of how the development process of guidelines influences which voices are heard in the discussion, which ethical issues are highlighted (and which are left out), and how scientists utilize the resulting guidelines. In the chapter, Zhu traces how the ideology of technological optimism has encouraged members of the public to rely on experts and the state to ensure the ethical governance of emerging technologies like nanotechnology.

The concluding chapter of this part traces changes in biomedical ethics in China from a focus on the writings of famous physicians to a more unified understanding of professional medical ethics at the professional and institutional levels. Zhang, Sha, and Gao's chapter follows the development of two sets of ethical guidelines overseeing the use of human subjects in research and the use of human embryonic stem cells. The authors provide a detailed analysis of the strengths and weaknesses of these guidelines, as well as measures taken by the Chinese government to meet these challenges and strengthen the oversight of research involving human subjects and embryonic stem cells.


References

Chatila, Raja, and John C. Havens. 2019. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. In Robotics and Well-Being, ed. Maria Isabel Aldinhas Ferreira, João Silva Sequeira, Gurvinder Singh Virk, Mohammad Osman Tokhi, and Endre E. Kadar, 11–16. Cham: Springer.
Floridi, Luciano, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke, and Effy Vayena. 2018. AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds & Machines 28: 689–707. https://doi.org/10.1007/s11023-018-9482-5.
IEEE. 2016, 2019. Ethically Aligned Design. https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_v2.pdf.

Chapter 6

The Significance of Professional Codes and Ethical Guidelines in Difficult Clinical Situations

Charlotte Wetterauer, Jan Schürmann, and Stella Reiter-Theil

Abstract  When clinicians face ethical challenges, they may refer to professional codes and ethical guidelines for support. Whether and how professional codes and ethical guidelines have the function or potential to give ethical orientation and support to clinical care is the core question of this chapter. The analysis is carried out on the basis of an overview of the framework of civil law – as one form of codification – with a focus on recent legislation in Switzerland, also referring to international codes and guidelines. Cardio-pulmonary resuscitation (CPR) is one of the most dramatic interventions in clinical care. Its success depends on numerous preconditions, both technical and logistical. It can, however, trigger severe ethical and legal complications, as decisions must be made very fast and have long-lasting effects (Mentzelopoulos et al. 2018). Clinical staff therefore need to rely on guidance for certainty and the prevention of errors. Several (inter)national guidelines of mostly medical content exist, offering clinical information and algorithms for CPR but little ethical support. This chapter describes the development of a resuscitation guideline for a Swiss University Hospital. Challenges and obstacles encountered in providing ethical support during the development are reported and discussed. The chapter concludes by considering what significance existing normative documents, especially ethical guidelines, have in general and with regard to the chosen clinical focus of CPR. Concerning the newly introduced ethics policy on CPR, "points to consider" are offered to help institutions create ethics support that assists physicians and patients in tackling ethical challenges in everyday clinical life.

Keywords  Ethical code · Ethical guideline · Ethics policy · Clinical ethics support · Cardio-pulmonary resuscitation

C. Wetterauer (*) · J. Schürmann · S. Reiter-Theil Department of Clinical Ethics, University Hospital of Basel/University Psychiatric Clinics Basel/Geriatric University Medicine FELIX PLATTER Basel, Basel, Switzerland e-mail: [email protected]; [email protected]; [email protected] © Springer Nature Switzerland AG 2022 K. Laas et al. (eds.), Codes of Ethics and Ethical Guidelines, The International Library of Ethics, Law and Technology 23, https://doi.org/10.1007/978-3-030-86201-5_6


6.1  Background and Introduction

During the last decades, agreement has grown that ethical issues in clinical patient care are both frequent and diverse. Related questions concern ethical uncertainty, conflict between competing ethical obligations (dilemmas) experienced by health care professionals (HCP), disagreement between those involved in ethical decisions, and ethical challenges in daily care arising from patients who are unable or unwilling to contribute to therapeutic success. In situations such as these, support and guidance may be sought in professional codes of ethics and ethical guidelines; medical guidelines or directives may also offer comments on ethical issues. In the realm of research with humans, an increase in "normative documents" – as we may call the variety of codes and guidelines – has been observed since the beginning of the twentieth century, with a steep rise occurring after the Nuremberg Medical Trial and the resulting Nuremberg Code in 1947 (Tröhler et al. 1998).

A pioneering paper comparing British, German, and Swiss end-of-life guidelines (Bartels et al. 2005) suggested insufficient reception by clinical staff. A Swiss survey confirmed this conclusion: medical-ethical guidelines do not reach their target groups to the desired degree and cannot, thus, be acknowledged or applied appropriately in practice. The survey revealed that the content of the revised end-of-life guideline of the Swiss Academy of Medical Sciences (SAMS) was known to only a minority of the responding physicians (Pfister 2010). Moreover, evaluative research on such "normative documents", and especially on their implementation, is still scarce (Meyer-Zehnder et al. 2017; Pfister and Biller-Andorno 2010; Strech and Schildmann 2011). With the increasing implementation of clinical ethics support, "normative documents" may acquire a new and practical relevance: to give orientation, to serve as cornerstones of ethical reasoning in difficult situations, and to improve the ethical quality of the respective processes of discourse.

6.2  Normative Framework

6.2.1  Law

In the field of medicine, central legal values and norms such as the protection of life or respect for personal freedom are in the foreground. Obviously, even such basic ideas can lead to legal and moral conflicts and dilemmas. This is one reason why clinicians feel a particularly strong need for clear legal regulations, which should show the individual what options for action are legally permissible in a given situation. Socio-political issues in medicine that are discussed controversially also require legal clarification. Here, the law should reflect moral considerations already existing in society that originally stimulated the legislation. In the medical profession, the prevention of litigation can be a strong driver of a risk-avoiding attitude called "defensive medicine"; the latter is criticized for prioritizing the physician's legal safety over the patient's best interest.

While case law prevails in the US and the Commonwealth countries, legal regulations in much of Europe, including Switzerland, are structured hierarchically. The primary legal sources go back to general laws rather than to judicial decisions in specific cases (casuistry). In the hierarchy of laws, the further down regulations stand, the more concrete they usually are. In Switzerland, constitutional law – the law of the Federal Constitution and the individual cantonal constitutions – stands at the highest level. Provisions in this law are mostly of a general nature, constituting the basis for further legislation. Regarding the field of medicine, the right to life and the right to personal freedom, also guaranteed in the European Convention on Human Rights, are of particular importance in constitutional law. At the same level as constitutional law stand state treaties, in particular bilateral agreements between Switzerland and the European Union (EU). At the next level stand the Swiss Civil Code (SCC, SR 210) and the Criminal Code (SR 311), which set standards, for example, on abortion or medical confidentiality. In recent years, several laws regulating medical practice have been passed by the Swiss federal government, such as the reproductive medicine law, the sterilization law, and the transplantation law. At the same level, further legal regulations exist that have a general scope and are highly relevant to medicine, especially the recent Child and Adult Protection Law of 2013, which is part of the Swiss Civil Code. This law will be discussed in more detail below, as it regulates – and strengthens – patient rights and self-determination.

6.2.2  Soft Law and Guidelines

In addition to binding legal regulations, so-called "soft law" is also very important in medicine, despite its weaker degree of normative validity. Under this heading fall non-state rules, which are not binding on state judges. Nevertheless, these regulations can have a strong influence on jurisdiction under certain circumstances. In medical law, national statutory regulation by the profession is of special significance. In Switzerland, the vast majority of physicians belong to the national professional organization Foederatio Medicorum Helveticorum (FMH). The FMH has issued a code of professional conduct in which many principles of medical activity are regulated, though usually only in general terms.

In addition to the medical statutory regulation common in most countries, existing medical-ethical guidelines play an important role. Historically, the first guidelines emerged in the first half of the nineteenth century, with the US taking a pioneering role in their development (Institute of Medicine 2011). In Switzerland, it is primarily the Swiss Academy of Medical Sciences (SAMS) that takes the initiative and responsibility for publishing medical-ethical guidelines. These guidelines cover a large part of the medical-ethical discussion. Their intention is to improve patient treatment and to create a reliable and easily accessible standard (SAMS 2013). In other countries, medical-ethical guidelines may be issued by the national medical associations themselves, e.g., the Bundesaerztekammer (German Medical Association). The weight of these national guidelines may justify calling them "directives." But despite their enormous importance and impact on practice, they are not formally binding; in other words, such guidelines cannot be compulsorily enforced, and non-compliance cannot be sanctioned. In the interpretation of the law, however, they are of central importance, for example, when the Federal Supreme Court uses them as a benchmark to determine the state of the art in medical practice. Moreover, as the FMH incorporates almost all SAMS guidelines into its national statutory regulation, they are elevated to the level of the association's binding rules of professional conduct. In addition, several cantons have declared certain guidelines binding; those guidelines are thus elevated to the level of formal laws: within the canton, they obtain the same binding force as all other laws.

Clinical Guidelines also exist, which likewise belong to private law. They formulate the usual standard of medical treatment according to the current state of the art, i.e., of the medical sciences. In contrast to the guidelines of the FMH or SAMS, which are drawn up on an interdisciplinary basis, Clinical Guidelines refer to a clearly identified and identifiable topic; they are therefore discipline-specific (Hostettler et al. 2014). The SAMS additionally offers broad orientation in areas not regulated by law, whereas Clinical Guidelines support and guide decisions directly. Furthermore, the SAMS sees itself as a publisher of ethical guidelines, whereas Clinical Guidelines take on ethical aspects only to a limited extent. In Germany, the Association of the Scientific Medical Societies (AWMF) should be mentioned in this context: for more than 20 years, the AWMF has been coordinating the development of medical guidelines for diagnosis and therapy by the various scientific medical societies.

Today, medical guidelines have become indispensable in daily clinical practice. They exist at the national and international levels. Studies show that high-quality medical guidelines can help to improve treatment processes, results, and quality (Hostettler et al. 2014). On the other hand, the continuous increase in the number of medical guidelines makes it difficult to keep a comprehensive overview, and medical guidelines of poor quality can even have a negative impact on treatment quality (Shekelle et al. 2000). Also, it is not uniformly regulated who has the right to issue a directive and who may be a member of the working group. Terminology is an issue, too. As part of soft law, existing normative documents are called guidelines, directives, standards, policies, recommendations, or opinions. It has been suggested to categorize these terms according to the binding nature of the respective type of text: a guideline or directive stands above an evidence-based medicine guideline, which stands above recommendations, which stand above opinions or policies (German Medical Association 2019). Nevertheless, given the numerous regulations existing in practice, there are ambiguities about their binding effect (Bartels et al. 2005) and hierarchy, and about how to deal with conflicting regulations.


6.2.3  Codes

In addition to laws and guidelines, ethical codes are published with the intention of reaching large audiences and creating identification with moral values. Concerning medical practice, such codes have existed in the form of oaths, covenants, and prayers since antiquity, as ecclesiastic dogmas and laws since the Middle Ages, and as professional codes since the nineteenth century (Tröhler et al. 1998). These codes typically address the qualities of a "good doctor," the healer- or physician-patient relationship, and basic questions of medical practice at the beginning and end of life. Ethical codes, however, differ from soft law and guidelines in content. Moreover, ethical codes are usually written in general terminology claiming universal validity, whereas soft law, in general, does not primarily refer to ethical content, though it may be related to it. More recently, especially since the turn from the nineteenth to the twentieth century, codes have dealt with questions of research (animal and human experimentation), patient rights such as self-determination, and specific forms of patient care (transplantation, gene therapy, reproductive techniques). Such documents cover intra- and inter-professional issues, e.g., the relationships among members of the various health professions and paramedical groups, as well as vis-à-vis society and the state (Tröhler et al. 1998).

In the field of medical ethics, the Hippocratic Oath is the first fundamental formulation of an ethical code. The "Declaration of Geneva" of the World Medical Association (WMA) was expressed in the Hippocratic tradition partly to show that the Hippocratic Oath, when formulated in modern terms, could serve as the basis for medical ethics in the twentieth century (Leven 1998). In 2017, the oath was once again modernized by the WMA General Assembly. One of its major advancements is the inclusion of patient autonomy in the oath (Feldwisch-Drentrup and Zegelman 2017). In practice, an ethical code does not have legal force, and to date only a relatively small number of medical associations worldwide use the "Declaration of Geneva" (Parsa-Parsi and Wiesing 2017). Rather, the oath can be regarded as a foundational commitment that doctors solemnly make: through this act they commit themselves to making the attitudes formulated in the oath the basis of their medical actions, and they publicly acknowledge this (Egger et al. 2017).

6.3  Application to Difficult Clinical Situations

6.3.1  The Example of CPR: Regulation and Orientation through Law, Ethical and Medical Guidelines

Among the most difficult clinical situations is cardio-pulmonary resuscitation (CPR) after cardiac arrest. Cardiac arrest is life-threatening and requires immediate and competent action. Unfortunately, the interventions often fail to reach the desired outcome of stabilizing the patient; only under favorable conditions is a full recovery possible. Expectations of both laypersons and healthcare professionals are often less than realistic in this regard.

The legal aspects of CPR-related ethical challenges will now be explained in more detail. In an emergency situation, the general obligation under current law is to provide assistance. The obligation to resuscitate is not named explicitly in the law but is included in the obligation to provide emergency relief as derived from Art. 128 of the Swiss Criminal Code: anyone who fails to help a person in imminent mortal danger, even though help could reasonably be expected, is liable to prosecution. In doing so, physicians and health care professionals are obliged to meet higher requirements than laypersons, according to their specialist knowledge. If, in the emergency situation, the patient's will is not known and the presumed will cannot be explored in time, the patient's will to live has to be assumed; consequently, the healthcare professional has to act accordingly, i.e., to attempt resuscitation. A medical indication for CPR formulated by a physician may or may not be available in an emergency situation. On the other hand, there may be a clear indication that a decisionally competent person refuses CPR, which means that no CPR measures may be carried out. If such information becomes available only during the course of CPR measures, e.g., through an advance directive or a person authorized for substitute decision making, the CPR must be discontinued (see Art. 370 I, 372 II SCC).

With the recent Child and Adult Protection Law, the legislator aims to ensure the patient's greater involvement in decisions on medical treatment. As a result, the patient is legally granted the right to decide on medical measures, such as CPR. Here, the rules of informed consent and shared decision making play a major role. With the introduction of the Child and Adult Protection Law in 2013, the law has developed the concept of informed consent further, towards an interpretation in the sense of shared decision making. The tasks formulated for this purpose – gathering data, informing the patient, responding to patient feelings, and implementing a treatment plan (Bird and Cohen-Cole 1990; Lipkin et al. 1995) – seem to have served as a guiding principle for the new law. The current law does not only strengthen the autonomy of competent adults; it also aims at supporting the self-determination of patients who use advance care planning for situations in which they have lost their decisional capacity. A person with capacity determines in advance their agreement or disagreement with certain medical measures in the event of their inability to make decisions. With the help of this instrument, the right to self-determination can and should be preserved beyond the time of one's own decision-making capacity. If a patient lacking capacity has not prepared an advance directive, the law grants the relatives certain rights of substitute decision making. However, even a patient with impaired capacity should be involved in communication and decision making about the treatment (Art. 377 III SCC). Further requirements of this article reflect the criteria of shared decision making for the treatment of incapacitated patients without an advance directive: the attending physician has to develop a treatment plan in consultation with the (authorized) substitute decision-maker.
However, the current law does not contain explicit provisions on CPR decisions; but given the basic idea of strengthening patient autonomy in general, CPR decisions, too, must be part of the rights to be ensured. There is no evidence or justification that this subject should be exempted (Gadmer Mägli 2017).

Following the new adult protection law, the SAMS saw a need to revise its existing CPR guideline (SAMS Resuscitation 2008, updated 2013 and 2021), adapting it in 2013 to the current legislation and in 2021 to new scientific evidence. According to the SAMS, a CPR attempt is medically indicated if there is a chance that a patient may survive cardiac arrest without severe neurological impairment. In patients at the end of life, on the other hand, CPR measures are not considered appropriate. The duty to respect patient autonomy requires that, if the patient so wishes, CPR should not be attempted, even in cases with a good medical prognosis. The right to self-determination is indeed very strong regarding the expression of the individual's negative liberty, i.e., the right to refuse or stop treatment. However, the right to self-determination does not extend to situations where the patient demands treatments that have no chance of success and are therefore not medically indicated.

Conversations with patients concerning their treatment should, if possible, include the topic of CPR. As this may trigger concern or anxiety, it should be carefully determined to what extent the patient wishes to address this topic. For practical reasons, however, it is not possible to discuss cardiac arrest and CPR with all patients, nor is it appropriate to do so in all cases. Patients should be informed that in the event of unexpected cardiac arrest, CPR measures are routinely initiated unless arrangements to the contrary have been made in advance. Patients should also be assured that any wishes they have formulated to forego CPR will be respected. The CPR decision should generally be discussed within the care team and must be clearly indicated as "CPR Yes" or "CPR No" in the CPR status in the patient's medical and nursing chart. Furthermore, the CPR decision must be reviewed at regular intervals. Finally, in the event of dissent within the care team, the reasons need to be carefully explored; different value systems and possible courses of action should be clarified. In cases of conflict, ethics support should be sought, either prospectively to prepare decisions or retrospectively, even after an event, in order to prevent the persistence or escalation of problems.

Numerous national and international guidelines on CPR exist, mirroring the difficulty and complexity of the matter. But they coexist next to each other and mostly have no relation to one another. The different guidelines cover the technical possibilities, medical success, and ethical aspects of CPR. Among international guidelines worth mentioning are the Guidelines for CPR 2015 of the American Heart Association (see summary in Neumar et al. 2015), the Guidelines for CPR 2015 of the European Resuscitation Council (see summary in Monsieurs et al. 2015), and the International Consensus on Cardiopulmonary Resuscitation and Emergency Cardiovascular Care (see summary in Soar et al. 2018).


6.3.2  The Development from a Clinical Ethics Policy to a Hospital Directive

After the enactment of the revised civil law, the awareness of clinical and ethics staff in the University Hospital Basel (USB), especially its Ethics Advisory Board, focused on the newly regulated and strengthened patient rights, including the regulation of substitute decision making by patient relatives in case of patient decisional incapacity (Gadmer Mägli 2017; Hermann et al. 2014). The changes were interpreted as a move towards showing more respect for patient autonomy. However, putting relatives directly into a clinically crucial position to decide about complex interventions made some clinicians anticipate complications and conflicts at the bedside. This reaction was observed especially when discussing the implementation of patient rights in critical care medicine. Health care professionals and ethicists agreed that not all patient relatives are well prepared to take such decisions, and the responsibility might overburden them, e.g., relatives who were suffering from health problems themselves. Moreover, concerns arose about the capability of relatives to consider the presumed patient's wishes (autonomy) and best interest in a valid and reliable manner (Reiter-Theil and Schürmann 2018; Hauke et al. 2011). In the institution's clinical ethics consultation service (Reiter-Theil and Schürmann 2018), various consultations showed that some health care professionals held that an "automatic" representation of the incapacitated patient's wishes by the spouse or other family members (formally entitled by law to function as substitute decision-makers) failed to take the documented wishes, the presumed patient will, or the interests of the patient into account appropriately.

CPR and Do Not Attempt Resuscitation (DNAR) orders, in particular, triggered ethical awareness and controversy among University Hospital Basel clinical staff. The controversy became visible when the SAMS revised its national guidelines on CPR in 2013. While the law provided the ground for the national guidelines to state more or less explicitly that the patient has a right to be informed about CPR and DNAR and should have a say about his or her treatment, including CPR, this "respect-for-autonomy" approach was not adopted by all critical care clinicians. Their criticism was that for many or most patients, the decision to perform CPR in order to sustain life was not to be questioned: any conversation with patients on CPR or DNAR and their wishes in this regard could, in their view, only harm the patients by frightening them unnecessarily. They claimed that most patients in a medical condition where DNAR might be an option because CPR appeared unlikely to succeed would be overburdened by such a difficult conversation, anticipating a less than acceptable outcome, e.g., survival in a vegetative state or death. The controversy about shared decision making with patients about CPR and DNAR fills numerous publications and has triggered projects to enhance advance care planning (e.g., Mentzelopoulos et al. 2018). However, no hospital-wide initiative had been taken on behalf of "ethics" to this point. When the topic was put on the agenda of the Ethics Advisory Board in 2015, it became clear that substantial debate was still to be anticipated: multiple experiences were exchanged about diverging and inconsistent practices observed at the bedside.


As a conclusion, it was decided to proceed as follows:

1. Collect situations involving ethical problems with CPR or DNAR.
2. Evaluate ethical, legal, and practical implications.
3. Design an ethics policy providing a framework for orientation.
4. Develop an algorithm for determining the CPR status.

1. Collected situations involving ethical problems with CPR or DNAR

– It is reported that CPR was performed in a patient who had previously objected to being resuscitated.
– A nurse is convinced that she may (and ought to) abstain from performing CPR in a certain situation, disregarding the physician's order.
– The CPR status of patients is reported to be often lacking or not updated, including the failure to explore patient wishes and check the existence of advance directives upon admission.
– A CPR decision is taken that the patient, as the nurse notices, does not understand or does not accept. The nurse asks what the duties of the physician are, especially regarding patient information.
– It is not clear when the physician may abstain from informing the patient about the decision made regarding CPR.
– A patient requests CPR that appears medically futile. It is unclear how far the physician is obliged to fulfill patient wishes, even those that do not make sense.
– It is observed that the legal framework is being misinterpreted in the sense that "everything must be done" as far as requested by the patient's relatives, even in cases of medical futility and against the opinion of the clinical team. It is discussed how such controversies shall be handled.

As this selection of clinical problems showed a wide variety of ethical issues concerning CPR, an effort was made to formulate systematic constellations. The key criteria were: anticipated outcome of CPR (prognosis), patient wishes pro or con CPR, patient decisional capacity, advance directive, and substitute decision-maker.

2. Evaluation of ethical, legal, and practical implications

Analyzing the above narrative list of problematic CPR situations, three main focuses can be identified:

– "Autonomy": insufficient respect for patient wishes, inappropriate patient information as well as incomplete informed consent or shared decision making; problems with the involvement of relatives; insufficient acknowledgment of advance directives;
– "Obligations": conflict about the respective professional roles and obligations of physicians and other HCP, especially nurses;
– "Futility": disagreement about the medical prognosis or usefulness of CPR.

"Autonomy", the first complex, seems to be related directly to the legal framework of the civil code, i.e., its changes. While physicians are expected to be informed about legal requirements relevant to their work, not all seem to comply with the respective norms. This complex appears black-and-white in the sense that obeying the law would solve most problems by following the ethical principle of respect for patient autonomy – though it is evident that such a simplistic solution is not realistic. The second complex, "Obligations", addresses the problem that nurses report observing inappropriate management of CPR decisions among physicians and wish to express their opinions and draw their own conclusions. Issues such as these cannot be solved directly by consulting the law alone. The professional role of nurses has itself been changing and has been associated with increasing responsibility that may not yet be completely accepted by medical colleagues. In the third complex, "Futility", neither law nor ethics is at the center of concern; rather, it is clinical medicine itself. Prognosis in general, and specifically of CPR outcomes, is a difficult matter associated with inevitable uncertainty and controversy. Ethics enters the picture when it comes to making critical decisions on the basis of probabilities. Making value judgements about a good life or a good dying process is the patient's privilege, but the value assumptions of the health care professional also influence clinical judgement.

Possible problem constellations were collected on the basis of narrative case studies. These were reduced to short vignettes to allow for formulating procedural recommendations according to the legal framework and ethical considerations.

3. Designing an ethics policy providing a framework for orientation

On the one hand, the legal framework of the civil law allows its application to CPR situations, with the consequence that patient rights, i.e., "Autonomy," should be respected. On the other hand, the mere existence and simple communication of the legal framework, e.g., through training, obviously did not suffice to this end. Moreover, the problems around professional "Obligations" and "Futility" appeared to require a different approach. In clinical medicine, following medical guidelines based on the best available medical evidence has gained increasing importance in the effort to give orientation and guarantee the best possible treatment. There has been a lively discussion concerning whether and how ethical guidelines might acquire a similar quality (Reiter-Theil et al. 2011; Strech and Schildmann 2011; Mertz and Strech 2014). The legitimation of ethical guidelines through various channels such as research and knowledge, ethical consistency, and consensus-building has been suggested (Reiter-Theil et al. 2011). Ethical guidelines can be issued by national organizations such as the SAMS. They can also relate to institutions, explaining and regulating certain requirements very specifically as ethics policies (Barandun Schäfer 2011; Winkler 2005; Winkler et al. 2012). The following example covers a variety of issues for illustration. Discussion of real examples is an important part of the methodological steps towards developing the ethics policy, as the examples highlight open questions and allow for putting the draft policy to the practical test.


Box 6.1 Example
A demented patient in her late nineties, apparently with no relatives, is transferred to the Hospital from a nursing home. Her professional substitute is responsible only for financial matters, not for medical decisions. After 5 days, she is moved to the intensive care unit (ICU) due to pulmonary problems. Her CPR and ICU status is, according to the patient chart, YES. However, exploration later revealed that the patient held a valid advance directive (available from the nursing home on demand) confirming the opposite: CPR and ICU status NO, palliative care only.
Comment. The CPR and ICU status of this palliative patient had not been clarified and documented upon admission. Rather, "YES" had been noted as an automatic routine. When facing her dramatic health deterioration and her unclear CPR status, the attending physician initiated diagnostic measures. One hour later, the information "NO CPR" was found in a previous document. Another hour later, the senior consultant appeared and changed the CPR status to NO. Only then were the existing relatives contacted and informed. After 2 h, the patient died. It was questioned whether this patient was allowed to "die with dignity"; the late involvement of the relatives was also criticized. The existing advance directive was not available when it was needed. Problems such as this one were estimated to occur once a month.

The example given above illustrates how patient wishes may be unknown or overridden in certain (critical) situations. This happens even though the law and national medical-ethical guidelines provide the normative background for respecting patient rights and making appropriate decisions. The law does acknowledge advance directives when they are applicable to the decision to be taken. The unavailability of the advance directive upon admission and, thus, in the critical situation could and should have been prevented: both the transferring and the accepting institution have the responsibility to check the existence of an advance directive. All clinical staff have access to regular ethics training. However, while courses informing about the revision of the civil law are offered on an obligatory basis, participation in ethics training is still voluntary.

Clarification is needed on the following issues:
– Who should check the CPR status, and when: is this the physicians' duty, or may nurses help?
– How should the availability of an advance directive be guaranteed, and by whom?
– Should a patient's wishes regarding CPR be inquired into upon admission on a regular basis, even if this might overburden the patient?
– Can patient groups be identified who should not be asked for their preference regarding CPR status?
– In which documentation system (among those available in practice) should the CPR status be documented?
– What is the general attitude of the institution: if the CPR status is not available, should CPR be carried out in uncertain situations?

On the basis of further consultation and several rounds of editing drafts, the policy text was established.

Box 6.2 Ethics policy: Principles relating to CPR decisions*
1. Within 24 h after hospital admission, each (in-)patient – or, if necessary, his or her legal representative – will be asked whether an advance directive with confirmed validity exists.
2. Patients are informed explicitly (by means of written patient information or orally), in line with the guidelines of the SAMS, that necessary resuscitation measures are generally carried out in all patients in this institution. They are reminded that they can refuse resuscitation at any time.
3. Within 6 h after admission, the treatment team should decide for each patient whether a review of the resuscitation status is necessary and/or document that the patient refuses resuscitation either by expressed will, by advance directive, or through the legal representative. (Documentation according to point 4.)
4. The decision about "CPR yes or no", as well as the "presence of an advance directive yes or no", will be uniformly documented throughout the entire institution.
5. The currently set CPR status is to be respected.
6. If circumstances change, the CPR status should be adjusted according to the course of treatment and the patient's expressed preferences.
7. DNAR emblems of any kind (e.g., tattoo, skin tag, necklace pendant) do not have the legal force of an advance directive but are to be understood as a strong indication of the existence of such a directive.
8. On the wards, members of the treatment team should know the CPR status of the patient.
9. When transferring patients, the respective departments should ensure that the patient's advance directive and information about the CPR status are forwarded.
*Translated by the authors from the German original.

Shortly after submission of the policy, the Hospital Board of Directors approved the text and welcomed it as a very valuable recommendation. Given the normative basis of the policy's content – the legal framework and medical and ethical guidelines – a "recommendation" appeared too weak an instrument for implementation. Another proposal was, thus, brought to the Board, suggesting that the document needed to be binding to be consistent with its content. The proposal was accepted, and the policy was promoted to a binding institutional "Reglement," i.e., a Hospital Directive.

4. Developing an algorithm for determining the CPR status

In order to reduce complexity and offer a tool more familiar to clinicians for practical use, an algorithm for determining the CPR status of the individual patient was developed (Fig. 6.1). The algorithm is supposed to support physicians in making decisions about the CPR status of all patients in the hospital. It integrates information on legal and ethical requirements and, thus, should strengthen patients' rights to participate in shared decision making. At the same time, medical and practical aspects are acknowledged in the algorithm. To create the algorithm, the previously analyzed systematic constellations of CPR situations and their key criteria were transferred into a flow chart. The flow chart was refined by the clinical ethics team. Several versions of the algorithm were discussed in the Ethics Advisory Board until all members could approve the final version. The algorithm was published on the hospital website, recommending its use and encouraging feedback from health care professionals.

The algorithm is accompanied by a short instruction manual referring to the established CPR Hospital Directive. Upon patient admission, the care team should clarify whether the resuscitation status has to be discussed and/or whether the patient wants to forego CPR. The algorithm supports health care professionals in reflecting on and preparing CPR decisions – it does not replace the thorough consideration of the medical indication or the confidential conversation between physician and patient. The resulting CPR status has to be documented in the hospital's system. In case of relevant changes concerning the medical indication or the patient's wishes, the CPR status has to be re-evaluated and documented. The need for careful and timely preparation of the decisional process is emphasized. Nurses can play an important role in this process (Bjorklund and Lund 2019). An ethics consultation can be requested by any person involved in the process at any time. Internal implementation includes repeated written top-down communication through all clinical departments for physicians and nurses; moreover, oral presentations of the Policy/Directive and the algorithm take place within ethics training on clinical units. An extension of these activities and an evaluation of the implementation are planned.
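For illustration only, the decision logic that such an algorithm encodes can be sketched in a few lines of code. The following Python fragment is a hypothetical reconstruction based solely on the rules summarized in this chapter (medical indication, competent refusal, advance directive, substitute decision-maker, and the default assumption of the will to live); it is not the actual algorithm of Fig. 6.1, and all names, data fields, and the ordering of checks are the editor's assumptions.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class CPRSituation:
        # Hypothetical fields mirroring the key criteria named in the text.
        medically_indicated: bool                  # prognosis: chance of surviving without severe impairment
        has_decisional_capacity: bool
        competent_patient_refuses: bool            # only meaningful if the patient has capacity
        advance_directive_refusal: Optional[bool]  # None = no valid, applicable directive available
        substitute_refusal: Optional[bool]         # None = no substitute decision obtainable in time

    def determine_cpr_status(s: CPRSituation) -> str:
        """Sketch of a CPR-status decision following the ordering described in the chapter."""
        # Without a medical indication (e.g., a patient at the end of life),
        # CPR is not considered appropriate.
        if not s.medically_indicated:
            return "CPR No"
        # A decisionally competent patient's refusal overrides even a good prognosis.
        if s.has_decisional_capacity:
            return "CPR No" if s.competent_patient_refuses else "CPR Yes"
        # Without capacity, a valid and applicable advance directive speaks for the patient.
        if s.advance_directive_refusal is not None:
            return "CPR No" if s.advance_directive_refusal else "CPR Yes"
        # Otherwise, the (authorized) substitute decision-maker is consulted.
        if s.substitute_refusal is not None:
            return "CPR No" if s.substitute_refusal else "CPR Yes"
        # If the patient's will cannot be established in time, the will to live is assumed.
        return "CPR Yes"

Even in this toy form, the ordering of the checks makes the chapter's normative priorities visible: medical indication first, then patient self-determination (direct, anticipated, or substituted), with the presumption in favor of life only as a fallback.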

Fig. 6.1  Algorithm for Determining CPR Status

6.4  Discussion and Outlook

Codes have a special status because they are associated with historic events or political ideas of universal significance. In health care, the solemn nature of the Hippocratic Oath or the Declaration of Geneva is part of the medical tradition: "The Declaration of Geneva is one of the World Medical Association's (WMA) oldest policies adopted by the second General Assembly in Geneva in 1947. It builds on the principles of the Hippocratic Oath, and is now known as its modern version."1 We may assume that solemnity, especially in the form of swearing an oath, is a component of a universal identification of the individual with moral values and norms that is considered important to guide and nurture professionals through their demanding careers.

It has been suggested that since the twentieth century, especially since the Nuremberg Code was published, a proliferation of codes and guidelines has taken place in Western countries (Tröhler et al. 1998). This increase is not automatically connected with strengthening consistent ethical orientation in the target area of practice. Rather, clashes between the values expressed in codes, or between their conclusions, are possible if not probable (Eriksson et al. 2007). One of the reasons for this observation is that codes, guidelines, and policies are not "coordinated" amongst each other by some 'world forum'. Another, more basic reason is that such coordination might restrict the liberty of engagement. The 'Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine: Convention on Human Rights and Biomedicine (ETS No 164)' can be seen as an effort towards such coordination in Europe. We ask whether a highly professionalized field of practice such as health care needs some kind of coordination regarding the development of codes and guidelines, especially in more practical areas such as clinical medicine or resuscitation. Such coordination would be supposed to bring about consistency and allow for ethical consensus-building concerning questions that are important for those involved.

It has been shown for the Swiss context that the law, even a comprehensive one, cannot completely prevent uncertainty or conflict about ethical issues arising in difficult clinical situations. This appears to be true even though the civil law of 2013 is up to date and rather specific in its effort to strengthen patient autonomy and key values around patient rights; the law also clarifies the decision-making processes that apply when patients lose decisional capacity. Guidelines, national or supranational, are issued by professional, interdisciplinary, or private organizations in order to complement the legal framework. Some guidelines aim directly at the solution of ethical problems and may even carry "ethics" in their name. However, they, too, cannot guarantee full prevention of ethical trouble. General provisions of laws, directives, or regulations of professional societies are usually kept general, so that they do not permit direct application to the individual case, nor do they show clear solutions for specific situations. The more abstract rules and guidelines are, the less they seem to reach and influence their target professionals. This means that health care professionals have to perform their own transfer, or even interpretation, of the general norms and apply them to the problem at hand. Variability and, thus, a lack of reliability of interpretation are likely. An ethics policy like the one developed here, though restricted in validity to its institutional context, can certainly help the individual health care professional with this translation, due to its more detailed nature.

1. https://www.wma.net/what-we-do/medical-ethics/declaration-of-geneva/

As an example, it should be noted that for practical reasons, the SAMS guideline does not consider it possible or useful to involve all patients in conversations about cardiac arrest and CPR. The guideline allows for a range of possible judgments and actions that leave room for interpretation. In contrast, the policy determines that the treatment team will decide for each patient, within 6 h after admission, whether a review of the resuscitation status is necessary and/or determine that the patient refuses resuscitation either by direct expression of will, by advance directive, or through a substitute decision-maker. The precision of this requirement makes it easier for health care professionals to know what exactly is expected of them and what their tasks and obligations are. Policies created for a specific institution have a good chance of being known by the staff, as they are implemented accordingly. The situation is different with the numerous coexisting guidelines: Which guideline is to be used for this case? Which guideline has priority? The unmanageable number of guidelines makes it difficult even for an interested person to keep track. This seems to be independent of whether statute law or case law prevails.

However, there are also points that even a more concrete policy cannot clarify definitively: How should the resuscitation status be decided if there is disagreement in the team? Is it the individual's own decision or a hierarchical decision? What about dealing with a cardiac arrest during surgery: is this a unique situation where CPR is always mandatory, or can or should CPR be omitted in this case? Situations such as these, which trigger ethical issues, require an appropriate procedure, as applied in ethics consultation, with a clear approach, fair rules of discourse, and efficient handling (Reiter-Theil and Schürmann 2016). Disagreements about content are inevitable elements of clinical daily routine and by no means exceptional or problematic. They should rather be welcomed as expressions of free deliberation and discussion, mirroring a wide range of valuable perspectives to be considered in decision making. Against this background, the variety of normative documents may be preferable to any "organized" or "steered" harmonization that might inhibit the expression of thoughts. However, medicine and healthcare are disciplines of action, and clinical ethics support should help to maintain the capability to act on the basis of valid knowledge and reflection.

Institutional ethics policies cannot claim to be universal or even general in their meaning and application. But that weakness may be considered their strength: while they themselves need to be constructed within the limits of the legal framework, and should certainly not contradict the most prominent codes of the field, they must and can spell out how general legal norms and ethical principles may be interpreted in a concrete situation. Interestingly enough, the high validity of legally justified norms sometimes seems not to be as effective as immediate directives formulated by an employer institution (hospital). Institutional ethics policies can help in finding solutions to problems that occur repeatedly but still require case-wise handling. The escalation or limitation of critical care interventions is an example where guidance by general rules – or a policy – is needed but does not replace thorough deliberation in the individual situation.


With Clinical Ethics Support Services (CESS) developing, at least in many major medical centers, and ethics case consultation becoming available, clinical staff have the opportunity to get interdisciplinary advice for difficult decisions. CESS can be seen as a response to the increasing number of participants in treatment decisions, including end-of-life care, and, of course, to the growing complexity of such decisions. CESS and Hospital Ethics Committees may play an important role in developing and reviewing institutional ethics policies – a process for which qualitative standards are now being developed (Frolic et al. 2012, 2013). In return, Hospital Ethics Committees may strengthen their ethical competence by consulting and reviewing ethics policies. In ethics consultations, ethics policies may provide orientation and argumentation relevant to the ethical issue in question.

6.5  Points to Consider

• Laws and ethical guidelines are often too general to be applied easily in clinical practice by health care professionals – institutional ethics policies can help to translate and apply general norms to clinical situations.
• Resources are to be provided for the implementation of ethics policies and the education of clinical teams.
• Monitoring and evaluating the implementation of ethics policies are paramount and require specific resources.
• Laws and guidelines alone appear less than sufficient to serve as relevant support for health care professionals' decision making. Among the reasons for this are that their wording is rather abstract and that they do not reach their target groups. In order to function as helpful support, guidelines should be concrete enough (e.g., through appropriate appendices or tools) to be applicable on a single-case basis.
• The proliferation of existing guidelines mirrors democratic freedom of expression and represents an achievement of science as well. On the other hand, such proliferation limits their success: too many guidelines render their reception increasingly difficult and lead to contradictions and new ambiguities.

Acknowledgements  The authors thank the members of the USB Ethics Advisory Board for contributing to the Policy. Dr. Wiebke Paulsen, MA, has made valuable contributions to the algorithm. For their helpful support, the authors thank Angelika Markaj, MSc, and Dr. Kristina Würth, lic. phil.


References

Barandun Schäfer, Ursi. 2011. Ethische Orientierungshilfe bei Widerstand von schwerkranken Menschen gegen Pflegemassnahmen – am Beispiel der Mundpflege. In Ethikkonsultation heute – vom Modell zur Praxis, ed. Ralf Stutzki, Kathrin Ohnsorge, and Stella Reiter-Theil, 185–197. Zürich: LIT Verlag.
Bartels, Sandra, Mike Parker, Tony Hope, and Stella Reiter-Theil. 2005. Wie hilfreich sind ethische Richtlinien am Einzelfall? Ethik in der Medizin 17 (3): 191–205. https://doi.org/10.1007/s00481-005-0378-6.
Bird, J., and S.A. Cohen-Cole. 1990. The three-function model of the medical interview. An educational device. Advances in Psychosomatic Medicine 20: 65–88.
Bjorklund, Pamela, and Denise M. Lund. 2019. Informed consent and the aftermath of cardiopulmonary resuscitation: Ethical considerations. Nursing Ethics 26 (1): 84–95. https://doi.org/10.1177/0969733017700234.
Egger, Bernhard, Ruth Baumann-Hölzle, Max Giger, Claudia Käch, Audrey Kovatsch, Diana Meier-Allmendinger, Judit Pòk Lundqvist, Pascal Schai, and Jean-Pierre Wils. 2017. Der «Schweizer Eid». Schweizerische Ärztezeitung 98 (40): 1295–1297.
Eriksson, Stefan, Gert Helgesson, and A.T. Höglund. 2007. Being, Doing, and Knowing: Developing Ethical Competence in Health Care. Journal of Academic Ethics 5: 207–216. https://doi.org/10.1007/s10805-007-9029-5.
Feldwisch-Drentrup, Hinnerk, and Anne Zegelman. 2017. Hippokratischer Eid – Deklaration von Genf kommt in der Gegenwart an. Ärzte Zeitung online.
Frolic, Andrea N., Katherine Drolet, and HHS Policy Working Group. 2013. Ethics policy review: a case study in quality improvement. Journal of Medical Ethics 39 (2): 98–103. https://doi.org/10.1136/medethics-2011-100461.
Frolic, Andrea, Katherine Drolet, Kim Bryanton, Carole Caron, Cynthia Cupido, Barb Flaherty, Sylvia Fung, and Lori McCall. 2012. Opening the black box of ethics policy work: evaluating a covert practice. The American Journal of Bioethics 12 (11): 3–15. https://doi.org/10.1080/15265161.2012.719263.
Gadmer Mägli, Ursula. 2017. Entscheidungen über den Reanimationsstatus im Kindes- und Erwachsenenschutz. Zeitschrift für Kindes- und Erwachsenenschutz: 104–125.
German Medical Association. 2019. Verbindlichkeit von Richtlinien, Leitlinien, Empfehlungen und Stellungnahmen.
Hauke, D., S. Reiter-Theil, E. Hoster, W. Hiddemann, and E.C. Winkler. 2011. The role of relatives in decisions concerning life-prolonging treatment in patients with end-stage malignant disorders: informants, advocates or surrogate decision-makers? Annals of Oncology 22 (12): 2667–2674. https://doi.org/10.1093/annonc/mdr019.
Hermann, H., M. Trachsel, C. Mitchell, and N. Biller-Andorno. 2014. Medical decision-making capacity: knowledge, attitudes, and assessment practices of physicians in Switzerland. Swiss Medical Weekly 144: w14039. https://doi.org/10.4414/smw.2014.14039.
Hostettler, Stefanie, Esther Kraft, and Christoph Bosshard. 2014. Grundlagenpapier der DDQ Guidelines – Qualitätsmerkmale erkennen. Schweizerische Ärztezeitung 95 (3).
Institute of Medicine. 2011. Clinical Practice Guidelines We Can Trust, ed. Robin Graham, Michelle Mancher, Dianne Miller Wolman, Sheldon Greenfield, and Earl Steinberg. Washington, DC: The National Academies Press.
Leven, Karl-Heinz. 1998. The Invention of Hippocrates: Oath, Letters and Hippocratic Corpus. In Ethics Codes in Medicine: Foundations and Achievements of Codification since 1947, ed. Ulrich Tröhler and Stella Reiter-Theil, 3–23. Brookfield: Ashgate.
Lipkin, Mack, Samuel M. Putnam, and Aaron Lazare. 1995. The Medical Interview: Clinical Care, Education, and Research. Frontiers of Primary Care.
Mentzelopoulos, Spyros D., Anne-Marie Slowther, Zoe Fritz, Claudio Sandroni, Theodoros Xanthos, Clifton Callaway, Gavin D. Perkins, Craig Newgard, Eleni Ischaki, Robert Greif, Erwin Kompanje, and Leo Bossaert. 2018. Ethical challenges in resuscitation. Intensive Care Medicine 44 (6): 703–716. https://doi.org/10.1007/s00134-018-5202-0.
Mertz, Marcel, and Daniel Strech. 2014. Systematic and transparent inclusion of ethical issues and recommendations in clinical practice guidelines: a six-step approach. Implementation Science 9: 184. https://doi.org/10.1186/s13012-014-0184-y.
Meyer-Zehnder, Barbara, Heidi A. Schleger, Sabine Tanner, Valentin Schnurrer, Deborah R. Vogt, Stella Reiter-Theil, and Hans Pargger. 2017. How to introduce medical ethics at the bedside – Factors influencing the implementation of an ethical decision-making model. BMC Medical Ethics 18. https://doi.org/10.1186/s12910-017-0174-0.
Monsieurs, Koenraad G., Jerry P. Nolan, Leo L. Bossaert, Robert Greif, Ian K. Maconochie, Nikolaos I. Nikolaou, Gavin D. Perkins, Jasmeet Soar, Anatolij Truhlar, Jonathan Wyllie, David A. Zideman, and ERC Guidelines 2015 Writing Group. 2015. European Resuscitation Council Guidelines for Resuscitation 2015, Section 1: Executive summary. Resuscitation 95: 1–80. https://doi.org/10.1016/j.resuscitation.2015.07.038.
Neumar, Robert W., Michael Shuster, Clifton W. Callaway, Lana M. Gent, Dianne L. Atkins, Farhan Bhanji, Steven C. Brooks, Allan R. de Caen, Michael W. Donnino, Jose Maria E. Ferrer, Monica E. Kleinman, Steven L. Kronick, Eric J. Lavonas, Mark S. Link, Mary E. Mancini, Laurie J. Morrison, Robert E. O'Connor, Ricardo A. Samson, Steven M. Schexnayder, Eunice M. Singletary, Elizabeth H. Sinz, Andrew H. Travers, Myra H. Wyckoff, and Mary F. Hazinski. 2015. Part 1: Executive Summary: 2015 American Heart Association Guidelines Update for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care. Circulation 132 (18): S315–S367. https://doi.org/10.1161/Cir.0000000000000252.
Parsa-Parsi, Ramin, and Urban Wiesing. 2017. Revision des ärztlichen Gelöbnisses. Deutsches Ärzteblatt 114 (44).
Pfister, Eliane. 2010. Die Rezeption und Implementierung der SAMW-Richtlinien im medizinischen und pflegerischen Alltag. Schweizerische Ärztezeitung 91 (13/14).
Pfister, Eliane, and Nikola Biller-Andorno. 2010. The reception and implementation of ethical guidelines of the Swiss Academy of Medical Sciences in medical and nursing practice. Swiss Medical Weekly 140 (11–12): 160–167. https://doi.org/10.5167/uzh-33163.
Reiter-Theil, Stella, Marcel Mertz, Jan Schürmann, Nicola Stingelin Giles, and Barbara Meyer-Zehnder. 2011. Evidence – competence – discourse: the theoretical framework of the multi-centre clinical ethics support project METAP. Bioethics 25 (7): 403–412. https://doi.org/10.1111/j.1467-8519.2011.01915.x.
Reiter-Theil, Stella, and Jan Schürmann. 2016. The "Big Five" in 100 Clinical Ethics Consultation Cases – Reviewing three years of ethics support in two Basel University Hospitals. Bioethica Forum 9 (2).
———. 2018. Evaluating Clinical Ethics Support: On What Grounds Do We Make Judgments About Reports of Ethics Consultation? In Peer Review, Peer Education, and Modeling in the Practice of Clinical Ethics Consultation: The Zadeh Project, ed. S.G. Finder and M.J. Bliton, 165–178. Cham: Springer.
SAMS. 2013. Rechtliche Grundlagen im medizinischen Alltag – Ein Leitfaden für die Praxis. Basel: SAMW.
———. 2017. Swiss Academy of Medical Sciences – Decisions on cardiopulmonary resuscitation.
Schweizerisches Strafgesetzbuch (SR 311).
Schweizerisches Zivilgesetzbuch (SR 210).
Shekelle, Paul G., Richard L.
Kravitz, Jennifer Beart, Michael Marger, Mingming Wang, and Martin Lee. 2000. Are nonspecific practice guidelines potentially harmful? A randomized comparison of the effect of nonspecific versus specific guidelines on physician decision making. Health Services Research 34 (7): 1429–1448. Soar, Jasmeet, Michael W.  Donnino, Ian Maconochie, Richard Aickin, Dianne L.  Atkins, Lars W. Andersen, et al. 2018. 2018 International consensus on cardiopulmonary resuscitation and emergency cardiovascular care science with treatment recommendations summary. Circulation 138 (23): e714-e730. https://doi.org/10.1161/cir.0000000000000611

120

C. Wetterauer et al.

Charlotte Wetterauer  Jurist, Clinical Ethics Consultant, Clinical Ethics Unit, University Hospital of Basel, University Psychiatric Clinics Basel, Geriatric University Medicine FELIX PLATTER, Basel, Switzerland; [email protected]. Charlotte Wetterauer is a certified Clinical Ethics Consultant and Coordinator for Ethics Consultation (AEM). Her particular interests include legal issues in clinical ethics consultation, e.g., decision-making capacity, assisted suicide, preimplantation diagnostics, shared decision making, and organizational ethics.

Jan Schürmann  Clinical Ethicist, Clinical Ethics Unit, University Hospital of Basel, University Psychiatric Clinics Basel, Geriatric University Medicine FELIX PLATTER, Basel, Switzerland; [email protected]. Jan Schürmann works primarily on issues in clinical ethics and bioethics. His particular interests include clinical ethics consultation, psychiatric ethics, and metaethics.

Stella Reiter-Theil  Dr., Dipl.-Psych., Professor (emerita) for Medical and Health Ethics, University of Basel, Switzerland; [email protected]. Stella Reiter-Theil co-directs the ‘International Conference for Clinical Ethics Consultation’ (ICCEC: http://www.clinical-ethics.org/) series; she is a certified trainer in ethics consultation (AEM) and a supervisor of several interdisciplinary research (PhD) projects, e.g., on preventive clinical ethics, challenges of ethical decision making in patient care, and the evaluation of ethics support.

Chapter 7

Global AI Ethics Documents: What They Reveal About Motivations, Practices, and Policies

Daniel S. Schiff, Kelly Laas, Justin B. Biddle, and Jason Borenstein

Abstract  In recent years, numerous organizations worldwide have produced normative documents identifying potential benefits, harms, and associated recommendations related to artificial intelligence (AI). This chapter examines why these AI ethics documents are being produced and what they can tell us about the motivations, practices, and policies that surround AI. While much of the literature to date discusses whether consensus on ethical principles is emerging, critical unanswered questions remain around representation and power, the translation of principles to practices, and the complex set of reasons that underlie the creation of these documents. Our work brings attention to these underexplored issues through a comprehensive literature review, and by proposing a novel typology of motivations that helps to characterize the creation of AI ethics documents. Finally, drawing on the recent case of gene-editing ethics documents, we argue that AI ethics stakeholders can achieve more beneficial impacts for society by fostering more diverse and inclusive participatory processes.

Keywords  Artificial intelligence · Ethics codes · Technology governance · Organizational motivations

D. S. Schiff (*) · J. B. Biddle · J. Borenstein
School of Public Policy, Georgia Institute of Technology, Atlanta, GA, USA
e-mail: [email protected]; [email protected]; [email protected]

K. Laas
Center for the Study of Ethics in the Professions, Illinois Institute of Technology, Chicago, IL, USA
e-mail: [email protected]

© Springer Nature Switzerland AG 2022
K. Laas et al. (eds.), Codes of Ethics and Ethical Guidelines, The International Library of Ethics, Law and Technology 23, https://doi.org/10.1007/978-3-030-86201-5_7


7.1  Introduction

Since 2016, numerous organizations worldwide have produced documents identifying the potential benefits and harms of artificial intelligence (AI). Documents that focus on the ethical aspects of AI have taken many forms, including codes of ethics, normative guidelines, and policy strategies. Such documents differ from traditional scholarly publications in that they often represent official viewpoints of the authoring organizations. This development is largely in response to the profound impacts that AI technologies are expected to have on human life. As such, AI ethics documents typically reflect on AI’s benefits and potential harms, offer ethical principles to minimize risks, and in some cases include recommendations that could be realized through internal change or external influence.

These normative documents provide us with an opportunity to understand how influential and, in some cases, politically powerful entities and global thought leaders imagine AI’s impacts and how they intend to shape them. For this reason, AI ethics documents are valuable sources of information and important objects of study.

In this chapter, we seek to examine why AI ethics documents are being produced and what they suggest about the motivations, practices, and policies that surround AI. While much of the current literature discusses whether consensus on ethical principles is emerging, critical unanswered questions remain around representation and power, the translation of principles to practices, and the complex set of reasons that underlie the creation of these documents. Our work seeks to contribute by bringing attention to these underexplored issues through a comprehensive literature review, and by proposing a novel typology of motivations that helps to characterize the creation of AI ethics documents. Our examination suggests that AI ethics documents are likely to play an important – if complex – role in shaping future practices, norms, and regulations surrounding AI.

After reviewing the recent history surrounding AI ethics documents in Section 2, we summarize the recent literature on AI ethics documents in Section 3 and briefly describe our own study in Section 4. Section 5 presents a typology to examine the multiple motivations of organizations that are producing AI documents. In light of these motivations, Section 6 considers characteristics that are likely to make AI documents more effective at reaching the goals that they are trying to achieve. In Section 7, we examine regulatory and other responses to the ethical and social risks surrounding gene editing as a way of providing insight into the possible future of AI ethics documents. Section 8 concludes the discussion.

7.2  The New AI Spring and the Codification of AI Ethics

Since the 1950s, there have been several waves of interest in AI; the 2010s have marked the beginning of a new ‘AI Spring’ – a period of increased funding, research, development, and public attention. Venture capital, publications, patents,


conference attendance, and employment in this field have grown substantially. Estimates indicate that AI’s economic impact will be in the trillions (PricewaterhouseCoopers 2017), and many multinational ‘Big Tech’ companies have reorganized their operations away from ‘mobile-first’ and ‘cloud-first’ to ‘AI-first.’ Some authors even claim that the AI age will be heralded as the most important economic and social transformation in recent human history, a general-purpose technology with sweeping impacts across human society and the key to the “Fourth Industrial Revolution” (Villani et al. 2018; Schwab 2016).

The tremendous excitement for AI is largely the result of technical advances in computer science, specifically natural language understanding and generation, image recognition, and search optimization, among other domains. These advances are themselves the result of two key developments: 1) increased processing power that made feasible the application of algorithms for deep neural networks; and 2) massive increases in the availability of ‘big data,’ including from online shopping, search, and social media sources (Duan et al. 2019). The movement towards the digitization of information, including health records, has also been an important driving factor (Mai 2016).

At the same time, a suite of ethical, legal, policy, and social concerns has emerged in relation to AI, increasingly drawing the attention of scholars, practitioners, policymakers, and the public. Debates are reawakening, for example, about the role of automation in replacing human labor and which work sectors are most vulnerable to displacement (Frey and Osborne 2017). While supporters of AI admit that some jobs will be lost to ‘creative destruction’, they contend that net-positive benefits will emerge for job creation and economic growth (McKinsey Global Institute 2018). Even if their assertions are correct – and they might not be – questions must still be addressed about the distribution of these impacts across different populations (West 2018).

In addition to concerns about job displacement, the capacity of facial recognition and big data more generally to enable widespread surveillance, micro-targeting, and digital manipulation exacerbates traditional concerns about privacy and autonomy and raises new facets of these concerns (Bennett and Raab 2017). The capacity of algorithms to reflect and reproduce societal biases, such as when deciding who should be eligible for a bank loan or job opportunity, and to do so without sufficient transparency or public scrutiny, brings to light the risks of placing increasingly weighty decisions into the hands of machines. Concern is growing that some decisions with profound legal implications might be handed over to AI (Scherer 2016). In many sectors of human life, ranging from AI in healthcare (Char et al. 2018) to autonomous vehicles (Bagloee et al. 2016), the risks – some of which may not be entirely foreseen at this point – must be juxtaposed against the transformative potential of this powerful set of technologies.

Many scholars have sought to identify and parse ethical issues pertaining to AI. Acronyms such as FEAT, FAT, or FATE, referring to some combination of fairness, ethics, accountability, and transparency, have become mainstream in academia, including through ACM’s Conference on Fairness, Accountability, and Transparency (FAccT) or AAAI’s AI, Ethics, and Society conferences.


Normative AI documents represent one category of attempts by key organizational actors – governments, corporations, NGOs, and others – to grapple with this balancing act. Organizations have pursued multiple avenues outside of academic scholarship and the creation of ethics documents to reflect on AI’s benefits and risks. They often make recommendations for change through regulatory strategies, best practices, or other means. Some companies and universities have created new in-person and online courses on AI ethics (Gartenberg 2018; Vincent 2019). Organizational and technical frameworks for engaging in responsible AI development have also begun to appear (Chatila and Havens 2019; The Institute for Ethical AI & ML 2020). Governmental task forces and cooperatives have emerged to consider evidence and possible regulatory action (Automated Decision Systems Task Force 2019; OECD 2019), along with specialized business units and governance boards (Fast Company 2017; Todd 2019). In addition, new organizations and collaborations have recently been launched with the mission of addressing AI ethics, such as AI Now and the Wadhwani Institute. Finally, many organizations have crafted various ethics and policy documents, which are the focus of this chapter.

7.3  A Review of Research Studies on AI Ethics Documents

Scholars have begun to examine normative AI documents through a variety of approaches, including qualitative content and thematic analysis (Floridi and Cowls 2019) and quantitative text analysis (Zeng et al. 2018). Table 7.1 summarizes some of the existing meta-analytic research on normative AI documents.

The studies in Table 7.1 pay the most attention to the following topics or themes: 1) to what degree there is consensus on which ethics topics are the most important to examine; 2) how existing documents reflect issues of representation and power; and 3) what the goals of the documents are and whether they are effective at generating change. We examine each item below, noting the state of recent scholarly discussions, how our work potentially contributes, and what the current knowledge gaps are.

7.3.1  Consensus on Ethical Topics

Nearly all of the aforementioned analyses engage in an inductive search for ethics categories. They ultimately identify a small number of core principles or themes (typically between five and ten), such as accountability, transparency, beneficence, justice, and explainability. The ethical principles are parsed and categorized in a variety of ways to simplify and compare across documents. For example, Floridi and Cowls (2019) map 47 AI ethical values onto five core principles (beneficence, non-maleficence, autonomy, justice, and explicability), while Zeng et al. (2018) articulate ten themes.
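To make this coding approach concrete, the short Python sketch below illustrates one way such analyses can proceed: free-text ethical values are normalized onto a small set of core principles (in the spirit of Floridi and Cowls 2019) and then tallied across documents (as in the frequency analyses of Zeng et al. 2018). The mapping and the toy documents are illustrative assumptions, not the published coding schemes.

```python
# Illustrative sketch of principle mapping and frequency tallying.
# The VALUE_TO_PRINCIPLE mapping and toy documents are hypothetical;
# they stand in for, but do not reproduce, the coding schemes of
# Floridi and Cowls (2019) and Zeng et al. (2018).
from collections import Counter

# A (hypothetical) normalization of free-text values onto core principles.
VALUE_TO_PRINCIPLE = {
    "do no harm": "non-maleficence",
    "safety": "non-maleficence",
    "well-being": "beneficence",
    "social good": "beneficence",
    "human oversight": "autonomy",
    "fairness": "justice",
    "non-discrimination": "justice",
    "transparency": "explicability",
    "explainability": "explicability",
}

# Toy corpus: each document is the list of values its text was coded with.
documents = {
    "doc_a": ["fairness", "transparency", "safety"],
    "doc_b": ["well-being", "human oversight", "explainability"],
}

# Tally how many documents touch each core principle.
principle_counts = Counter()
for values in documents.values():
    principles = {VALUE_TO_PRINCIPLE[v] for v in values if v in VALUE_TO_PRINCIPLE}
    principle_counts.update(principles)

for principle, n in principle_counts.most_common():
    print(f"{principle}: mentioned in {n} of {len(documents)} documents")
```

Real studies of course rely on human coders and far larger corpora; the point is only that the frequency tables reported in this literature rest on a mapping step of roughly this shape.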


Table 7.1  Review of meta-analytical research on normative AI documents

Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2018). Artificial Intelligence and the ‘Good Society’: The US, EU, and UK approach.
  Documents: 3. Method: Comparative analysis of 3 governmental strategies (US, UK, and European Parliament).
  Key findings: Finds that transparency, accountability, and positive impact are shared values of a ‘good AI society,’ as well as cooperation across sectors. There remains divergence in the nature of shared responsibility, specific ethical values, and how a broad vision is to be implemented in a certain kind of society.

Daly, A., Hagendorff, T., Li, H., Mann, M., Marda, V., Wagner, B., Wang, W., & Witteborn, S. (2019). Artificial Intelligence, Governance, and Ethics: Global Perspectives.
  Documents: 16. Method: Comparative overview of ethics initiatives (primarily documents) from the perspective of countries, with several intergovernmental bodies, NGOs, and corporations discussed.
  Key findings: Finds transparency, accountability, and privacy as common principles, and identifies missing issues like hidden human and energy costs. Discusses the importance of competition and collaboration, public and international engagement, and considers challenges in implementing and enforcing the principles.

Dutton, T., Barron, B., & Boskovic, G. (2018). Building an AI World: Report on National and Regional AI Strategies. CIFAR.
  Documents: 18. Method: Reviews 18 national AI strategies.
  Key findings: Maps documents’ discussion of research, AI talent, future work, industrial strategy, ethics, data, AI in government, and inclusion. Also reviews the current status of each national strategy and funding as of 2018.

Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI (SSRN Scholarly Paper ID 3518482). Berkman Klein Center for Internet & Society.
  Documents: 36. Method: Coding of 36 high-profile sets of principles with inductive thematic analysis and frequency mapping of eight themes and the components of each.
  Key findings: Identifies eight themes: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. Notes whether documents reference human rights, argues that there is a general and growing consensus, and suggests that principles are unlikely to be effective without a larger governance ecosystem.

Floridi, L., & Cowls, J. (2019). A Unified Framework of Five Principles for AI in Society.
  Documents: 6. Method: Comparative analysis of high-profile sets of principles from NGOs and government entities.
  Key findings: Identifies 47 ethics principles and maps these to four core principles from bioethics (beneficence, non-maleficence, autonomy, justice) plus an additional principle – explicability. Notes a lack of geographic, cultural, and social diversity in these documents.

Gibert, M., Mondin, C., & Chicoisne, G. (2018). Montréal Declaration of Responsible AI: 2018 Overview of International Recommendations for AI Ethics (pp. 78–97). University of Montréal.
  Documents: 7. Method: Comparative analysis of ethical concepts and recommendations in seven reports; typology based on citizen recommendations and Montréal Declaration categories.
  Key findings: Discusses seven ethical concepts: well-being, autonomy, justice, privacy, knowledge, democracy, and responsibility. Notes a divergence between public sector and private sector documents in terms of where solutions should be applied. Identifies an overall convergence in ethical concepts.

Greene, D., Hoffmann, A. L., & Stark, L. (2019). Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning.
  Documents: 7. Method: Frame analysis to identify second-order ethical grounding assumptions.
  Key findings: Identifies higher-order themes, including a focus on expert oversight, deterministic assumptions of AI progress, assignment of ethical responsibility to designers instead of others, and the use of public engagement to confer legitimacy.

Hagendorff, T. (2020). The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines.
  Documents: 21. Method: Qualitative mapping of topics and frequencies and comparative analysis.
  Key findings: Identifies accountability, privacy, and fairness as the most prominent values, and existential threats, machine consciousness, and social cohesion as omissions. Reviews issues like business vs. ethics interests, US vs. China competition, and the lack of effectiveness of ethics codes, and recommends virtue ethics over deontology.

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines.
  Documents: 84. Method: Qualitative content analysis and code mapping.
  Key findings: Identifies transparency, justice and fairness, non-maleficence, responsibility, and privacy as converging topics, and sustainability and solidarity as underrepresented. Reviews issues like the precedence of non-maleficence over beneficence, differences in how principles are interpreted, and the lack of clarity around implementation.

Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI.
  Documents: NA. Method: Reflects on other meta-analyses, noting that at least 84 ethics documents exist; contrasts AI ethics with medical ethics.
  Key findings: Argues that “AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms” (p. 1). Offers recommendations including bottom-up ethics, licensure, shifting to business ethics, and approaching ethics as a process, not a solution.

Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2019). From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices.
  Documents: NA. Method: Creates a typology, mapping publicly available AI ethics design tools/methodologies to five core ethical principles (Floridi and Cowls 2019).
  Key findings: Identifies existing tools/methodologies to apply AI ethics principles in AI development, design, training, deployment, etc. Highlights the emphasis in the available tools/methods on explicability, their inability to assess individual impacts, and the lack of guidance towards using the tools/methods.

Zeng, Y., Lu, E., & Huangfu, C. (2018). Linking Artificial Intelligence Principles.
  Documents: 27. Method: Quantitative content analysis to identify topic and keyword frequencies.
  Key findings: Defines keywords for 10 manually chosen themes: Humanity, Collaboration, Share, Fairness, Transparency, Privacy, Security, Safety, Accountability, AGI/ASI. Notes some sector differences, like a corporate focus on collaboration rather than security and privacy. Recommends a focus on safety, AGI, and societal transformation.

The studies investigate whether a global consensus is emerging on which AI ethics topics are (or at least appear) most worthy of attention. Most of the studies conclude that there is, though the specific framing and parsing of ethics concepts differ by document. This can lead to a messy proliferation of somewhat interrelated ideas. As a result, Morley et al. (2019) describe the consensus on ethics topics as “fragile.”


Nevertheless, the degree to which similarities are appearing across sectors and regions of the world is noteworthy. While this may be a positive step towards global harmonization and governance, a legitimate concern persists that underrepresented groups, such as people in poorer regions of the world, might not have a true voice in this process (Fejerskov 2017).

The studies also highlight neglected topics and differences across sectors or regions. For example, Hagendorff (2020) argues that the erosion of social cohesion and potential existential threats related to AI are underemphasized in AI ethics documents. Daly et al. (2019) note the lack of attention to the hidden ethical costs of AI, including energy usage and human labor. Zeng et al. (2018) comment on differences in focus between the public and private sectors. Gibert et al. (2018) argue that different stakeholders are responsible for addressing particular AI ethics challenges.

However, the discussion of neglected topics in the AI ethics realm is relatively underdeveloped, as most attention is paid to identifying shared themes and consensus. More research should be done, for example, on whether the lack of attention to particular topics in normative AI documents leads to a failure to address those topics (e.g., by failing to seek regulatory solutions or to develop technical or governance strategies). Additional study should also examine whether differences across sectors or regions reflect power dynamics, as organizations attempt to frame problems and solutions in light of their own interests. Our research, discussed in this chapter and elsewhere (Schiff et al. 2021), is helping to fill this gap by focusing on topical omissions as well as variances across three organizational sectors (public, private, and NGO).

7.3.2  Representation and Power

Some studies address issues of representation by examining who writes or participates in the development of AI ethics documents. For example, Floridi and Cowls (2019) note a lack of geographic, cultural, and social diversity in the documents. Daly et al. (2019) question whether academic experts and civil society groups are sufficiently represented. Hagendorff (2020) notes the relatively low number of women amongst document authors and argues that AI ethics principles appear to be molded mostly by men.

Greene et al. (2019) challenge prevailing assumptions that they argue underlie discourse surrounding AI, such as the sense that AI innovation will proceed deterministically, with little room for humans to shape AI’s trajectory. They suggest that the focus of private sector documents on expert oversight and their assignment of ethical responsibility to designers serves as a means to avoid scrutiny of higher-level business decision-makers and economic systems. Furthermore, they suggest that public engagement in creating AI ethics principles is aimed at sanitizing or rubberstamping conceptual frames and strategies that favor experts, elites, and the systems they prefer.


Based on the state of the current literature, we offer several observations. First, the fact that nearly all public and private sector AI ethics documents and national policy strategies are coming from high-income and powerful countries and multinational corporations is a serious concern (Kak 2020; Schiff et al. 2020). Admittedly, since much of the scholarly literature focuses on English-language documents, some items may have been missed. Yet even if that is the case, low-income countries are still underrepresented. Ethics and policy documents that adopt a national or regional perspective (e.g., the European Union) run the risk of articulating ethical issues through a narrow lens. For example, a country that directs its attention toward alleviating inequality domestically may pursue investment and innovation strategies that exacerbate inequality in low-income countries (Osoba 2020).

In sum, the lenses of the leading countries and organizations are likely to shape the financial, developmental, and regulatory aspects of AI. Because the Global North seems to dominate the current conversation, fields such as international development, postcolonial studies, and others can provide contrasting perspectives that can and should inform AI ethics and policy. Along related lines, research that examines the role of the public in shaping AI ethics and policy needs to be reframed to become more inclusive so that it more fully takes into account a broader range of perspectives on gender, race, and socioeconomic factors. Moreover, research on participatory processes and the role of the public versus experts in shaping decision-making can scrutinize and inform the trajectory of AI ethics and governance.

7.3.3  Principles to Practice

Most of the aforementioned studies are also concerned with the goals of the documents, the motivations of the actors producing the documents, and whether and how normative documents can effectively lead to changes in practice. Some of the studies suggest that normative AI documents are not really intended to produce substantive change; rather, the intent is to improve the public image of organizations and/or protect them from public scrutiny. Hagendorff, for example, states that in the context of AI, “ethical considerations are mainly used for public relations purposes” (2020, 11). Hagendorff cites Boddington (2017, 56) in support of this claim, though to be fair, Boddington merely states that there is “a danger” that codes of ethics might only serve public relations purposes.

Others question whether broad ethical principles can effectively change practice, even if they are intended to do so (e.g., Cath et al. 2018; Hagendorff 2020). Hagendorff also cites McNamara et al. (2018) as evidence that ethics codes have virtually no effect on practice. In the study by McNamara and colleagues, 63 software engineering students and 105 professional software developers were given software-related scenarios and asked questions about ethical decision-making. Study participants who received a copy of the Association for Computing Machinery (ACM) code of ethics did not exhibit statistically significant differences in their


decision-making in 11 ethical scenarios compared to participants in the control group who did not receive a copy of the ACM code. While the findings from the McNamara et al. study may seem troubling, they do not necessarily prove that ethics codes lack value. Still, Fjeld et al. (2020) suggest that ethical principles are only “gently persuasive” unless they are enmeshed in governance structures. Similarly, Daly et al. (2019) note challenges in enforcing ethical principles for AI, while Jobin et al. (2019), Mittelstadt (2019), and Morley et al. (2019) argue that guidance and tools to address AI ethics issues are not sufficiently available and developed.

Furthermore, many question the goals and motivations of AI organizations and raise concerns about “ethics washing,” especially with regard to corporations. On this view, organizations may lack even the fundamental motivations to carry out effective change. Yet it is notoriously difficult to determine motivations, particularly if an entity has a complex organizational structure (Abebe et al. 2020; Bietti 2020). A variety of research methods (document review, interviews, observations, text analysis) and theoretical perspectives should be applied to better understand organizational motivations.

Moreover, even when it is clear that an organization’s motivation is to translate ethical principles into practice, it is challenging to measure efficacy in achieving this goal. Morley et al. (2019) have done systematic work on mapping tools and methodologies to address ethical issues in the AI-development pipeline, but existing tools and methodologies, including governance structures and processes of evaluation, require further advancement. The question of how to translate principles to practice should be a top priority for all AI stakeholders (Schiff et al. 2020, 2021). Our work described in the next section may contribute to this priority area by mapping AI ethics documents’ levels of engagement with law and regulation as one proxy for how seriously they may be thinking through the implementation of ethical considerations.

7.4  Building on the AI Ethics Literature

In an ongoing project, we are examining a collection of documents that, in their entirety or at least in part, seek to identify ethical issues and/or make recommendations about ethical practice related to AI. Our collection consists of more than 110 normative AI documents across 25 countries from the public, private, or NGO sectors. By “public sector,” we mean that the authoring organization is connected to a governmental entity. “Private sector” refers to corporate entities such as Microsoft or Tencent. The “NGO sector” includes non-profits, professional organizations such as IEEE, and hybrid entities that involve collaborations across sectors, such as the Partnership on AI.

To fit within our inclusion criteria, documents must be publicly available, published between 2016 and July 2019, and have an English-language version. The collection of documents includes frameworks, policy strategies, and


reports with ethics sections. It does not include traditional academic publications or single-authored opinion articles. We included documents addressing AI or similar terms, such as machine intelligence, machine learning, and automation, that have at least some ethics component. AI ethics intersects with robot ethics (e.g., concerns related to social robots and military robots), and robot ethics has a long and rich history. However, we excluded documents on robot ethics as belonging to a different realm of discourse unless they address AI to a significant degree. We also excluded documents that focus narrowly on a single category of technology, such as autonomous vehicles or military robots. Of 224 potential documents identified, our final sample of 112 consists of 54 from the public sector, 26 from the private sector, and 32 from the NGO sector. Our data search and inclusion process and the methodologies used to study the frequency of ethical topics across public, private, and NGO sectors are detailed in other work (Schiff et al. 2020, 2021); a simplified sketch of this screening step appears at the end of this section.

In general, the documents we analyzed are not codes of ethics in the narrow or formal sense of the term. In that sense, a code of ethics is a set of moral principles or standards designed to guide the behavior of the members of a particular organization or profession. According to Davis (2013), a code of ethics applies “...to participants in a legitimate voluntary activity.” Codes of ethics usually seek to achieve one or more of the following: provide authoritative rules or guidance for individuals new to the profession, remind experienced members of the ethical standards they are expected to uphold, and call attention to new areas of concern. They also can serve as a framework for resolving disputes among members about what constitutes ethical practice (Davis 2015). Moreover, they help individuals outside of the group (for example, the public) to calibrate their expectations about those who are within the group (for example, physicians in the case of the AMA) (Davis 2013).

In contrast to formal codes of ethics, many of the documents we reviewed do not have specific or directly defined target audiences. In addition, many documents identify clusters of ethical issues as being important, without necessarily articulating rules or standards for behavior. While the documents we reviewed are not limited to formal codes of ethics, they do share many of the characteristics identified by Davis (2015). They exist alongside other ethics-related efforts, sometimes serving as the motivation for new activities and sometimes as their consequence. These documents also can be classified as “practical” or “institutional” ethics documents, insofar as they apply to individuals or organizations engaged in developing, utilizing, or governing a specific activity, in this case, AI (Davis 2015). Furthermore, the majority of these documents articulate ethical standards that developers, practitioners, and users of AI should follow that go beyond what common morality demands, and they push their audiences to consider how these standards should play out in both current and future uses of these new technologies.

While the documents analyzed here do not encompass the totality of current AI ethics efforts, we believe they are a valuable distillation of organizational perspectives and activities from around the world. Informed by our review of the AI ethics literature, we extrapolated a set of six main organizational motivations that may underlie the creation of AI ethics


documents (Schiff et al. 2020). In the next section, we introduce our typology of motivations; the typology challenges the notion that motives can simply be sorted into a binary of “sincere” or “disingenuous.” Instead, it seeks to move towards a more nuanced understanding of the reasons why organizations are engaging in AI ethics initiatives.
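As promised above, the following minimal Python sketch illustrates the kind of screening and sector tally described in this section. All records and field names are hypothetical placeholders invented for illustration; the actual search and coding procedure is reported in Schiff et al. (2020, 2021).

```python
# Illustrative sketch of the document screening and sector tally described
# in Section 7.4. All records and field names here are hypothetical
# placeholders; the real procedure is reported in Schiff et al. (2020, 2021).
from collections import Counter
from datetime import date

candidates = [
    {"title": "National AI Strategy", "sector": "public",
     "published": date(2018, 3, 1), "english": True, "public_access": True},
    {"title": "Corporate AI Principles", "sector": "private",
     "published": date(2019, 9, 1), "english": True, "public_access": True},
    {"title": "Robot Ethics Charter", "sector": "NGO",
     "published": date(2015, 5, 1), "english": True, "public_access": True},
]

def meets_criteria(doc: dict) -> bool:
    """Inclusion criteria from the chapter: publicly available, has an
    English-language version, and published between 2016 and July 2019."""
    return (doc["public_access"]
            and doc["english"]
            and date(2016, 1, 1) <= doc["published"] <= date(2019, 7, 31))

included = [d for d in candidates if meets_criteria(d)]
by_sector = Counter(d["sector"] for d in included)
print(by_sector)  # here only the 2018 public-sector document survives
```

In the actual study, this kind of screening reduced 224 candidates to the final sample of 112 documents (54 public, 26 private, 32 NGO) before any topical coding began.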

7.5  A Typology of Motivations

On the basis of our review of global AI ethics documents and the existing literature on these documents, we concluded that it is important to develop a typology of motivations that could reveal insights about an organization’s target audiences and goals, as well as illuminate the purposes of AI ethics documents and their prospects for success. As noted in Section 3, a number of commentators have argued that AI ethics documents are largely exercises in public relations and “ethics washing.” While public relations is certainly a factor in the creation of some ethics documents, it is not the only one, and getting clearer about the range of possible motivations and goals can help us to understand better what these documents might help to achieve.

For any given AI ethics document – or ethics document more generally – ascertaining the goal of the document or the motivations of those who produce it is challenging. Stated motivations can differ from actual ones, and actual motivations are not always clear, potentially even to those who develop a document. Goals and motivations can overlap and, in some instances, conflict with one another. In the case of complex organizations such as governments or Big Tech firms, different divisions might have conflicting goals.

Throughout our research, we developed a typology of six different types of motivations that we believe to be operative in organizations that produce AI ethics documents. The six motivation-types can be clustered into three pairs that are conceptually distinct and potentially overlapping. Moreover, the two motivations within each pair are also not mutually exclusive. The first pair addresses end goals, the second pair addresses strategies for achieving those end goals, and the third focuses on perception and public relations. Thinking of these motivations as ideal types or constructs that are only partially instantiated in practice may be helpful in understanding behavior (Weber 1949) (Table 7.2).

7.5.1  Motivations One and Two: Goals

The first two motivation-types are social responsibility and competitive advantage. The former is the motivation to promote social benefits and reduce the risk of harm, while the latter is the motivation to gain or increase an advantage (e.g., economic or political) over others. Both of these connect to achieving particular goals, and the two types of motivations are in many cases consistent with one another, such that an organization could be motivated by either or both simultaneously. Furthermore,


Table 7.2  Typology of motivations

Goals
  Social Responsibility—the motivation to promote social benefits and reduce the risk of harm
  Competitive Advantage—the motivation to gain or increase an advantage (e.g., economic or political) over others

Strategies
  Strategic Planning—the motivation to aid with internal strategic planning or organizational change
  Strategic Intervention—the motivation to intervene in the surrounding (external) environment, including the legal and regulatory environment

Signals
  Signaling Social Responsibility—the motivation to be perceived to be promoting social benefits and reducing the risk of harm, whether or not one is actually doing so
  Signaling Leadership—the motivation to be perceived to be a leader in the field of AI, or to be perceived to have a particular sort of competitive advantage

even though it can be difficult to determine in every individual case which of these motivations is operative, both types of motivations are helpful in understanding the production of normative AI documents. Those who draw attention to ethics washing are concerned that the production of ethics documents may be solely or predominantly tied to achieving a competitive advantage (Johnson 2019), and these worries are perhaps justified with regard to some organizations or governmental entities. Yet many ethics documents seem to be trying to promote societal good (Bietti 2020). For example, IEEE’s Ethically Aligned Design (2019) states that its goal is to align AI to “values and ethical principles that prioritize human well-being in a given cultural context,” and SAP’s Guiding Principles for Artificial Intelligence (2018) describes SAP’s motivation “to help the world run better and improve people’s lives.” It is difficult to explain the production of documents such as these, the Montreal Declaration for Responsible AI (2018), and the European Commission’s Ethical Guidelines for Trustworthy AI (2019) without reference to some level of concern for social responsibility.

7.5.2  Motivations Three and Four: Strategies

Motivations three and four are strategic planning and strategic intervention. Regarding the former, an organization produces a normative AI document in order to aid with internal strategic planning or organizational change. For example, a corporation could develop an ethics document to serve as a foundation for best practice guidelines that influence the norms, policies, and procedures for its labs or the culture of its workplace, or a government could produce a normative AI document to serve as a blueprint for a national AI strategy. Many countries, including France, Mexico, and Qatar, have produced documents for this stated purpose (British Embassy in Mexico, Oxford Insights, and C Minds 2018; Villani et al. 2018; Qatar Center for Artificial Intelligence 2019). For example, Qatar’s Blueprint National Artificial Intelligence Strategy expresses a motivation to “identify the key pillars to


build a great AI research and innovation ecosystem in Qatar and follow those with recommendations for action.” Regarding motivation four, an organization develops a document in order to intervene in the surrounding (external) environment, including the legal and regulatory environment. A firm could produce a document as part of a strategy of intervening in the regulatory environment to gain an economic advantage, such as by blocking government regulation through the promise of voluntary self-regulation. For example, Microsoft’s The Future Computed (2018, 9–10) suggests that while regulation is important, it will take “more than a couple of years… but almost certainly less than two decades.” According to the Microsoft report, “AI technology needs to continue to develop and mature before rules can be crafted to govern it.”

We refer to these two motivation-types as strategic because they are, to a significant extent, means for effecting broader ends – in particular, the broader ends of social responsibility and/or competitive advantage. As is the case with the first pair of motivation-types, motivations three and four are largely independent of one another, in that an organization could pursue either or both simultaneously. Furthermore, this second pair is orthogonal to the first, in that either type in the second pair could be adopted in pursuit of either type in the first pair. A firm could be motivated to intervene in the regulatory environment in order to gain an economic advantage, but it could also do this out of a genuine concern for social responsibility.

7.5.3  Motivations Five and Six: Signaling

Motivations five and six are what we call signaling social responsibility and signaling leadership. The first of these is the motivation to be perceived to be promoting social benefits and mitigating risks – whether or not one actually is doing so. Some organizations might desire to signal social responsibility even if they are not genuinely motivated by social responsibility. Alternatively, some organizations might be motivated to both signal and promote social responsibility. Indeed, they might reasonably believe that signaling their own commitment to social responsibility will actually generate social benefits by encouraging others to act responsibly as well.

In signaling leadership, an organization is motivated to be perceived to be a leader in the field of AI – or, to put it differently, motivated to be perceived to have a particular sort of competitive advantage. An organization might signal leadership in order to expand markets, improve its reputation, or gain a seat at the planning table. This might be important for countries that are attempting to attract investment, firms that are trying to expand, or NGOs that are seeking influence in policy-making; it is particularly crucial for those that wish to have a leadership role but are not already perceived to have such a role. The AI ethics documents of the governments of Australia, India, Mexico, New Zealand, and Tunisia all reference the documents created by other governments, and they describe their own documents as entryways into this more established group. For example, Mexico’s document


(2018) states that it is the “first nation in Latin America to join this elite club” and expresses pride in being “one of the first ten countries in the world to deliver a National Strategy for AI.” Qatar (2019) states explicitly that its “vision is to have AI so pervasive in all aspects of life, business and governance in Qatar that everyone looks up to Qatar as a role model.”

However, signaling could be an exercise in ethics washing. Roughly stated, the concept refers to the practice of trying to appear to be ethical without performing behaviors that are consistent with ethical practice. Within the AI context, Johnson (2019) describes ethics washing as “...the practice of fabricating or exaggerating a company’s interest in equitable AI systems that work for everyone.” Google is one of a number of prominent companies that have been accused of this behavior, in part because its AI ethics board allegedly had no real veto power and was quickly abandoned after public criticism of its composition (Hao 2019).

7.6  Efficacy

In light of this set of motivations, this section looks at the key question of the efficacy of AI documents; in other words, do the documents generate the sorts of changes that their authors intend? One metric of the efficacy of such documents is whether they contribute to the modification of internal policies or practices within an organization. This could take the form, for example, of recommending, and then later implementing, a new ethics review process internal to a company that scrutinizes the AI systems the company develops. For example, SAP (2018) was among the first companies to propose an “AI Ethics Steering Committee” and “AI Ethics Advisory Board,” and Microsoft has proposed the designation of internal “Responsible AI Champions,” a strategy that is now being emulated by the U.S. Department of Defense’s Joint Artificial Intelligence Center (Barnett 2020; O’Brien et al. 2020).

Another metric is whether a document helps to generate change external to the authoring organization. Within a document, a company might try, for example, to convince a government to develop a new law or regulation or revise the mission of a government agency. Intel’s Artificial Intelligence: The Public Policy Opportunity recommends that governments remove “barriers to the access of data,” “identify and mitigate discrimination caused by the use of AI,” and “encourage investment in AI R&D” (Intel 2017). The Information Technology Industry Council (2017) recommends the creation of public-private partnerships and expanded efforts to improve STEM education and workforce training and adjustment programs.

A different potential target of the documents is research practices within academia or industry. The Institute of Business Ethics (2018), for instance, encourages organizations to “Establish a multi-disciplinary Ethics Research Unit to examine the implications of AI research and potential applications.” Along related lines, many of the documents voice a call to redesign the computing curriculum or the educational system more generally. A report by the Future of Humanity Institute and other collaborators states that “Educational efforts might be beneficial in


highlighting the risks of malicious applications to AI researchers, and fostering preparedness to make decisions about when technologies should be open, and how they should be designed, in order to mitigate such risks” (Brundage et al. 2018). It is difficult to establish direct causal links between specific AI documents and the renewed push for ethics education in the computing curriculum, but changes are certainly happening. For instance, the Mozilla Foundation (2018) is sponsoring the Responsible Computing Science Challenge, which aims to “unearth and spark innovative coursework.” An overarching hope is that the documents are contributing to a culture shift in terms of computing fields taking ethics more seriously. This, in part, would involve more fully integrating ethical considerations into computing research, design, and implementation.

A variety of factors may shape whether documents achieve these internal or external changes. We posit that documents are more likely to generate tangible impacts if they: engage with issues of regulation and policy; articulate their goals and strategies in detail rather than superficially; include participatory and public engagement in the document’s creation; encourage mechanisms for monitoring and enforcement; and have plans for iteration and follow-up.

7.7  Lessons from CRISPR

What can be accomplished through the creation and distribution of AI ethics documents remains to be determined. At least some of the drafting authors and organizations likely hope for substantive changes to industry practices and meaningful regulations pertaining to AI; yet it is far from clear what will transpire. Here, recent history may be instructive, as many previous pushes to create ethics documents also occurred in response to emerging technologies, such as nuclear energy and recombinant DNA. What resulted from these initiatives could provide some guideposts as to what the legacy and impact of AI documents might be, and can help direct the attention of AI stakeholders to the pitfalls to avoid. One such guidepost surrounds the need for diverse and public participation in how ethical consensus is shaped.

To illuminate this point, we will focus on a relatively recent example: the use of CRISPR (clustered regularly interspaced short palindromic repeats) to edit human embryos. The case of CRISPR represents a clear instance in which a significant global consensus emerged on the need to ban the use of germline gene editing. This consensus resulted from a concerted effort to articulate key ethical principles relevant to this technology and to involve key stakeholders in both the articulation and use of these principles. Though there was a failure to prevent the Chinese scientist He Jiankui from using CRISPR to edit viable human embryos, the global scientific community collectively condemned He’s actions and called for stronger international guidelines and oversight for this technology. This suggests that the gene editing community has progressed in the articulation of meaningful regulations and practices.


Guidelines governing the modification of the human genome, such as the Council of Europe’s Convention on Human Rights and Biomedicine from 1997, have been in existence for more than 20 years. Concerns around CRISPR specifically started to intensify in the 2010s as researchers began to use the technique to edit both human somatic cells and human germline cells, as well as to potentially eliminate genetic diseases (Evitt et al. 2015; Ledford 2020). Many ethics documents were generated in response, including statements put out by the organizing committee of the International Summit on Human Gene Editing (2015, 2017), the Council of Europe’s Committee on Bioethics “Statement on Genome Editing Technologies” (2015), the National Academies’ “Human Genome Editing: Science, Ethics and Governance” (2017), and the Alliance for Regenerative Medicine’s “Statement of Principles on Genome Editing” (2019).

Documents on AI and gene editing have many similarities. Like the authors of AI ethics documents (Floridi and Cowls 2019), the authors of these guidelines drew heavily on existing bioethical norms and standards, such as the Universal Declaration on the Human Genome and Human Rights, and on ethical principles pertaining to human subjects research. Fundamental principles of human rights, safety, and social justice, and an emphasis on the need for expanded social discussion and debate about the ethical issues involved in gene editing, are standard throughout these guidelines and statements (Brokowski 2018). The motivations behind these statements are also similar to those found in the AI guidelines, including discussions of social responsibility and leadership, as well as efforts to signal social responsibility and change external perceptions.1

Given these similarities, it is instructive to consider whether ethics documents surrounding gene editing led to meaningful change. Of note, modification of the human germline is now banned in most countries. According to a 2014 survey, regulations had been enacted by 39 different countries, with 29 having an outright ban on gene modification, 9 having ambiguous regulations, and the remaining country, the USA, severely regulating its use in clinical trials (Araki and Ishii 2014). These regulations align with mainstream sentiments in ethics documents produced on this topic, according to work by Brokowski (2018). Brokowski examined 61 ethics statements by governments, non-governmental organizations, and private companies from 2015–2018: 65% of the statements indicated that the clinical use of germline editing should be impermissible at the current time, 30% expressed no clear stance on its permissibility, and only 5% of the reports expressed openness to further exploring the possible applications of gene editing technologies in this area.

What led to this relative clarity of purpose? Foremost, the CRISPR case seems to emphasize the importance of having a robust participatory process. For example, international conferences such as the U.S. National Academies’ international conference in 2015 and the International Summit on Human Gene Editing in 2015 and 2018 represent international gatherings of experts

1  For examples, see documents by Merck (2017) and the Organizing Committee of the Second International Summit on Human Genome Editing (2018).

7  Global AI Ethics Documents: What They Reveal About Motivations, Practices…

139

that produced guidelines critical to the future ethical use of CRISPR.  The 2018 International Summit, held only a few days after the announcement that Dr. He Jiankui had edited viable human embryos, is especially striking in its involvement of 500 individuals; it included not only researchers but ethicists and patient group representatives as well. As Jasanoff, Hurlbut, and Saha (2015)  stated in their article on lessons to be learned for deliberations around the ethics of CRISPR, “…studies of technical controversies have repeatedly shown that public opposition reflects not technical misunderstanding but different ideas from those of experts about how to live well with emerging technologies.” To ensure the safe, equitable, and ethical use of emerging technologies, the voices of all stakeholders must play a role in setting standards for the future. Fostering such a process in the development of AI ethics documents may make it more likely that they will have a tangible and positive impact on practices and regulations in the international AI community and ultimately on how the public is affected by AI in the future. Considering this observation, the fact that so many AI ethics documents are produced by a limited set of leading countries and multi-­ national organizations in the Global North is a lingering source of concern.

7.8  Conclusion

In this chapter, we sought to provide context for why AI ethics documents are being created, to review the current literature on these documents, and to move beyond the discussion of whether a presumed consensus on important ethical principles exists. We highlighted unanswered questions around representation and power, the translation of principles to practice, and the need to examine more deeply the motivations underlying document creation, something we endeavored to do here. Finally, drawing from lessons in the recent case of CRISPR, we argued that for AI ethics documents to achieve beneficial impacts for society, it is essential to foster more diverse and inclusive participatory processes.

References

Abebe, Rediet, Solon Barocas, Jon Kleinberg, Karen Levy, Manish Raghavan, and David G. Robinson. 2020. Roles for Computing in Social Change. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 252–260. FAT* ’20. Barcelona, Spain: Association for Computing Machinery. https://doi.org/10.1145/3351095.3372871.
Alliance for Regenerative Medicine. 2019. The Alliance for Regenerative Medicine Releases Statement of Principles on Genome Editing. Washington, DC: Alliance for Regenerative Medicine.
Araki, M., and T. Ishii. 2014. International regulatory landscape and integration of corrective genome editing into in vitro fertilization. Reproductive Biology and Endocrinology 12: 108. https://doi.org/10.1186/1477-7827-12-108.


Automated Decision Systems Task Force. 2019. New York City Automated Decision Systems Task Force Report. New York: Automated Decision Systems Task Force.
Bagloee, Saeed Asadi, Madjid Tavana, Mohsen Asadi, and Tracey Oliver. 2016. Autonomous Vehicles: Challenges, Opportunities, and Future Implications for Transportation Policies. Journal of Modern Transportation 24 (4): 284–303. https://doi.org/10.1007/s40534-016-0117-3.
Barnett, Jackson. 2020. JAIC Launches Pilot for Implementing New DOD AI Ethics Principles. FedScoop, April 2, 2020.
Bennett, Colin J., and Charles D. Raab. 2017. The Governance of Privacy: Policy Instruments in Global Perspective. London and New York: Routledge.
Bietti, Elettra. 2020. From Ethics Washing to Ethics Bashing: A View on Tech Ethics from within Moral Philosophy. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 210–219. FAT* ’20. Barcelona, Spain: Association for Computing Machinery. https://doi.org/10.1145/3351095.3372860.
Boddington, Paula. 2017. Towards a Code of Ethics for Artificial Intelligence. Artificial Intelligence: Foundations, Theory, and Algorithms. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-60648-4.
British Embassy in Mexico, Oxford Insights, and C Minds. 2018. Towards an AI Strategy in Mexico: Harnessing the AI Revolution. Mexico City: British Embassy in Mexico, Oxford Insights, and C Minds.
Brokowski, Carolyn. 2018. Do CRISPR Germline Ethics Statements Cut It? The CRISPR Journal 1 (2): 115–125. https://doi.org/10.1089/crispr.2017.0024.
Brundage, Miles, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, et al. 2018. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Future of Humanity Institute, Centre for the Study of Existential Risk, Center for a New American Security, Electronic Frontier Foundation, OpenAI. http://arxiv.org/abs/1802.07228.
Cath, Corinne, Sandra Wachter, Brent Mittelstadt, Mariarosaria Taddeo, and Luciano Floridi. 2018. Artificial Intelligence and the ‘Good Society’: The US, EU, and UK Approach. Science and Engineering Ethics 24 (2): 505–528. https://doi.org/10.1007/s11948-017-9901-7.
Char, Danton S., Nigam H. Shah, and David Magnus. 2018. Implementing Machine Learning in Health Care – Addressing Ethical Challenges. The New England Journal of Medicine 378 (11): 981–983. https://doi.org/10.1056/NEJMp1714229.
Chatila, Raja, and John C. Havens. 2019. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. In Robotics and Well-Being, Intelligent Systems, Control and Automation: Science and Engineering, ed. Maria Isabel Aldinhas Ferreira, João Silva Sequeira, Gurvinder Singh Virk, Mohammad Osman Tokhi, and Endre E. Kadar, 11–16. Cham: Springer. https://doi.org/10.1007/978-3-030-12524-0_2.
Committee on Bioethics, Council of Europe. 2015. Statement on Genome Editing Technologies. Strasbourg.
Daly, Angela, Thilo Hagendorff, Hui Li, Monique Mann, Vidushi Marda, Ben Wagner, Wei Wang, and Saskia Witteborn. 2019. Artificial Intelligence, Governance and Ethics: Global Perspectives. SSRN Scholarly Paper ID 3414805. Rochester: Social Science Research Network. https://papers.ssrn.com/abstract=3414805.
Davis, Michael. 2013. Codes of Ethics. In International Encyclopedia of Ethics, ed. Hugh Lafollette. Oxford: Blackwell Publishing Ltd. https://doi.org/10.1002/9781444367072.wbiee018.
———. 2015. Codes of Ethics. In Ethics, Science, Technology, and Engineering: A Global Resource, ed. J. Britt Holbrook and Carl Mitcham, 2nd ed. Farmington Hills, Mich.: Gale, Cengage Learning/Macmillan Reference USA.
Duan, Yanqing, John S. Edwards, and Yogesh K. Dwivedi. 2019. Artificial Intelligence for Decision Making in the Era of Big Data – Evolution, Challenges, and Research Agenda. International Journal of Information Management 48 (October): 63–71. https://doi.org/10.1016/j.ijinfomgt.2019.01.021.


Dutton, Tim, Brent Barron, and Gaga Boskovic. 2018. Building an AI World: Report on National and Regional AI Strategies. Toronto: CIFAR.
European Commission, High-Level Expert Group on Artificial Intelligence (AI HLEG). 2019. Ethical Guidelines for Trustworthy AI. Brussels: European Commission, High-Level Expert Group on Artificial Intelligence (AI HLEG).
Evitt, Niklaus H., Shamik Mascharak, and Russ B. Altman. 2015. Human Germline CRISPR-Cas Modification: Toward a Regulatory Framework. The American Journal of Bioethics 15 (12): 25–29. https://doi.org/10.1080/15265161.2015.1104160.
Fast Company. 2017. How Apple, Facebook, Amazon, And Google Use AI To Best Each Other. Fast Company, October 11, 2017.
Fejerskov, Adam Moe. 2017. The New Technopolitics of Development and the Global South as a Laboratory of Technological Experimentation. Science, Technology, & Human Values 42 (5): 947–968. https://doi.org/10.1177/0162243917709934.
Fjeld, Jessica, Nele Achten, Hannah Hilligoss, Adam Nagy, and Madhulika Srikumar. 2020. Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI. SSRN Scholarly Paper ID 3518482. Rochester: Berkman Klein Center for Internet & Society. https://papers.ssrn.com/abstract=3518482.
Floridi, Luciano, and Josh Cowls. 2019. A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review, June. https://doi.org/10.1162/99608f92.8cd550d1.
Frey, Carl Benedikt, and Michael A. Osborne. 2017. The Future of Employment: How Susceptible Are Jobs to Computerisation? Technological Forecasting and Social Change 114 (January): 254–280. https://doi.org/10.1016/j.techfore.2016.08.019.
Gartenberg, Chaim. 2018. Google Wants to Teach More People AI and Machine Learning with a Free Online Course. The Verge, February 28, 2018.
Gibert, Martin, Christophe Mondin, and Guillaume Chicoisne. 2018. Montréal Declaration of Responsible AI: 2018 Overview of International Recommendations for AI Ethics. University of Montréal.
Greene, Daniel, Anna Lauren Hoffmann, and Luke Stark. 2019. Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning. Critical and Ethical Studies of Digital and Social Media 10.
Hagendorff, Thilo. 2020. The ethics of AI ethics: An evaluation of guidelines. Minds & Machines. https://doi.org/10.1007/s11023-020-09517-8.
Hao, Karen. 2019. In 2020, Let’s Stop AI Ethics-Washing and Actually Do Something. MIT Technology Review, December 27, 2019.
Information Technology Industry Council. 2017. ITI AI Policy Principles. Washington, DC: Information Technology Industry Council (ITI).
Institute for Business Ethics. 2018. Business Ethics and Artificial Intelligence. 58. Institute for Business Ethics.
Intel. 2017. Artificial Intelligence: The Public Policy Opportunity. Santa Clara: Intel.
Jasanoff, Sheila, J. Benjamin Hurlbut, and Krishanu Saha. 2015. CRISPR democracy: Gene editing and the need for inclusive deliberation. Issues in Science and Technology 32: 12.
Jobin, Anna, Marcello Ienca, and Effy Vayena. 2019. The Global Landscape of AI Ethics Guidelines. Nature Machine Intelligence 1 (9): 389–399. https://doi.org/10.1038/s42256-019-0088-2.
Johnson, Khari. 2019. How AI Companies Can Avoid Ethics Washing. VentureBeat, July 17, 2019, sec. AI.
Kak, Amba. 2020. ‘The Global South Is Everywhere, but Also Always Somewhere’: National Policy Narratives and AI Justice. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 307–312. AIES ’20. New York: Association for Computing Machinery. https://doi.org/10.1145/3375627.3375859.
Ledford, Heidi. 2020. Quest to Use CRISPR against Disease Gains Ground. Nature 577 (7789): 156. https://doi.org/10.1038/d41586-019-03919-0.


Mai, Jens-Erik. 2016. Big Data Privacy: The Datafication of Personal Information. The Information Society 32 (3): 192–199. https://doi.org/10.1080/01972243.2016.1153010.
McKinsey Global Institute. 2018. Notes From the AI Frontier: Modeling the Impact of AI on the World Economy. McKinsey Global Institute.
McNamara, Andrew, Justin Smith, and Emerson Murphy-Hill. 2018. Does ACM’s code of ethics change ethical decision making in software development? In Proceedings of the 2018 26th ACM joint meeting on European software engineering conference and symposium on the foundations of software engineering, 729–733. Association for Computing Machinery. https://doi.org/10.1145/3236024.3264833.
Merck. 2017. Genome Editing Technology – Principle. Merck.
Mittelstadt, Brent. 2019. Principles alone cannot guarantee ethical AI. Nature Machine Intelligence 1: 501–507. https://doi.org/10.1038/s42256-019-0114-4.
Mittelstadt, Brent Daniel, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi. 2016. The Ethics of Algorithms: Mapping the Debate. Big Data & Society 3 (2): 205395171667967. https://doi.org/10.1177/2053951716679679.
Morley, Jessica, Luciano Floridi, Libby Kinsey, and Anat Elhalal. 2019. From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices. Science and Engineering Ethics, December. https://doi.org/10.1007/s11948-019-00165-5.
Mozilla Foundation. 2018. Announcing a Competition for Ethics in Computer Science, with up to $3.5 Million in Prizes. The Mozilla Blog. October 10, 2018.
National Academies of Sciences, Engineering, and Medicine. 2015. International Summit on Human Gene Editing: A Global Discussion. Washington, DC: The National Academies Press. https://doi.org/10.17226/21913.
———. 2017. Human Genome Editing: Science, Ethics, and Governance. Washington, DC: National Academies Press. https://doi.org/10.17226/24623.
O’Brien, Tim, Steve Sweetman, Natasha Crampton, and Venky Veeraraghavan. 2020. A Model for Ethical Artificial Intelligence. World Economic Forum, January 14, 2020.
OECD. 2019. OECD: Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449. OECD Legal Instruments. Paris: OECD.
Organizing Committee of the Second International Summit on Human Genome Editing. 2018. Statement by the Organizing Committee of the Second International Summit on Human Genome Editing. Washington, DC: National Academies of Sciences, Engineering, and Medicine.
Osoba, Osonde A. 2020. Technocultural Pluralism: A ‘Clash of Civilizations’ in Technology? In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 132–137. AIES ’20. New York: Association for Computing Machinery. https://doi.org/10.1145/3375627.3375834.
PricewaterhouseCoopers. 2017. Sizing the Prize: What’s the Real Value of AI for Your Business and How Can You Capitalise? London: PricewaterhouseCoopers.
Qatar Center for Artificial Intelligence. 2019. Blueprint: National Artificial Intelligence Strategy for Qatar. Ar-Rayyan: Qatar Center for Artificial Intelligence (QCAI), Qatar Computing Research Institute (QCRI), Hamad Bin Khalifa University.
SAP. 2018. SAP’s Guiding Principles for Artificial Intelligence. Waldorf: SAP.
Scherer, Matthew U. 2016. Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies. Harvard Journal of Law & Technology 29 (2): 353–400.
Schiff, Daniel, Justin Biddle, Jason Borenstein, and Kelly Laas. 2020. What’s Next for AI Ethics, Policy, and Governance? A Global Overview. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 153–158. AIES ’20. New York: Association for Computing Machinery. https://doi.org/10.1145/3375627.3375804.
Schiff, Daniel, Jason Borenstein, Kelly Laas, and Justin Biddle. 2021. AI Ethics in the Public, Private, and NGO Sectors: A Review of a Global Document Collection. IEEE Transactions on Technology and Society 2 (1): 31–42. https://doi.org/10.1109/TTS.2021.3052127.
Schwab, Klaus. 2016. The Fourth Industrial Revolution. First U.S. edition. New York: Crown Business.


The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. 2019. Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems, First Edition. Piscataway: IEEE.
The Institute for Ethical AI & ML. 2020. The AI-RFX Procurement Framework.
Todd, Deborah. 2019. Microsoft Reconsidering AI Ethics Review Plan. Forbes, June 24, 2019, sec. Innovation.
Villani, Cédric, Marc Schoenauer, Yann Bonnet, Charly Berthet, Anne-Charlotte Cornut, François Levin, and Bertrand Rondepierre. 2018. For a Meaningful Artificial Intelligence: Towards a French and European Strategy. French Parliament. https://frenchamerican.org/young-leader/cedric-villani/.
Vincent, James. 2019. Finland Is Making Its Online AI Crash Course Free to the World. The Verge, December 18, 2019.
Weber, Max. 1949. ‘Objectivity’ in Social Science and Social Policy. In The Methodology of the Social Sciences, by Max Weber, Edward Shils, and Henry A. Finch, 49–112. Glencoe: Free Press.
West, Darrell M. 2018. The Future of Work: Robots, AI, and Automation. Washington, DC: Brookings Institution Press.
Zeng, Yi, Enmeng Lu, and Cunqing Huangfu. 2018. Linking Artificial Intelligence Principles. ArXiv:1812.04814 [Cs] (December). http://arxiv.org/abs/1812.04814.

Daniel S. Schiff  PhD Candidate, School of Public Policy, Georgia Institute of Technology, USA; [email protected]. Daniel Schiff studies issues related to the intersection of AI and policy, including research on education, labor, misinformation, governance of AI, corporate social responsibility, and other social and ethical implications of AI.

Kelly Laas Librarian and Ethics Instructor, Center for the Study of Ethics in the Professions, Illinois Institute of Technology, USA; [email protected]. Her research interests include the history and use of codes of ethics in professional fields, ethics education in STEM, research ethics, and integrating ethics into technical curricula.  

Justin B. Biddle Associate Professor, School of Public Policy, Georgia Institute of Technology, USA; [email protected]. Justin Biddle works at the intersection of a number of fields, including philosophy of science and technology, ethics of emerging technologies, and science and technology policy. He is particularly interested in the ethics of artificial intelligence and the role of value judgments in the design and development of computing technologies.  

Jason Borenstein Director of Graduate Research Ethics Programs, School of Public Policy and the Office of Graduate Studies, Georgia Institute of Technology, USA; [email protected]. Dr. Borenstein’s teaching and research interests include engineering ethics, AI & robot ethics, research ethics, and bioethics. His ethics-related research includes topics such as autonomous vehicles, human-robot interaction, community engagement, and AI & bias.  

Chapter 8

Addressing Intelligent Systems and Ethical Design in the IEEE Code of Ethics

Greg Adamson and Joseph Herkert

Abstract  This chapter examines the process undertaken by IEEE to modify its Code of Ethics in response to a changing global technology landscape in the early twenty-first century. The changes, which were incorporated into the Code by the unanimous vote of the IEEE Board of Directors (BoD) in November 2017, include modifications to the wording of the “paramountcy clause” as well as inclusion in the Code of the concepts of “ethical design,” “sustainable development,” “societal implications of technology,” and “intelligent systems.” The process was led by volunteers (including the authors) and involved an Ad Hoc Committee on Ethics Programs, consideration at two meetings of the BoD, and solicitation of IEEE members for responses to a draft proposal. The chapter attributes these changes to the intersection of three events: growing interest within the broader community around ethics and artificial intelligence; the creation of bodies within the IEEE to consider ethical issues; and a longstanding concern within the IEEE, particularly through the work of the Society on Social Implications of Technology (SSIT), for an effective code to meet the responsibility of technologists. The chapter describes this process and context, with the aim of increasing understanding of the Code revisions on the part of engineering practitioners and researchers as well as scholars of engineering ethics.

Keywords  IEEE · Intelligent systems · Ethical design · Paramountcy clause · Sustainable development · Social implications of technology

G. Adamson (*)
University of Melbourne, Melbourne, Victoria, Australia
e-mail: [email protected]

J. Herkert
North Carolina State University, Raleigh, NC, USA
e-mail: [email protected]

© Springer Nature Switzerland AG 2022
K. Laas et al. (eds.), Codes of Ethics and Ethical Guidelines, The International Library of Ethics, Law and Technology 23, https://doi.org/10.1007/978-3-030-86201-5_8


8.1  Introduction

Professional Codes of Ethics are not set in stone; rather, they are a “living practice” (Davis 2007) that are, and should be, reviewed and revised. Davis (2007) suggests that such a review should be conducted periodically by an “ethics committee” or every 5 to 10 years on an ad hoc basis. While the periodic review of codes is important, we would argue that of even greater importance is a “bottom-up” approach; that is, when social values change, professional organizations should be alert to such changes and be prepared to modify their Code.

Since 2013, the Code of Ethics of the IEEE,1 a technical professional association with 420,000 members in 161 countries, has been revised three times. In 2013 the Code’s provision on diversity and inclusion was revised to “add a prohibition against discrimination to general language about treating others fairly” and to expand the non-discrimination categories to include sexual orientation and gender identity (Riley et al. 2015). In 2017 IEEE discussed and adopted significant revisions to the Code as a result of changes in social values outside of and within IEEE and ongoing developments in Artificial Intelligence and other emerging technologies in IEEE’s field of interest. In 2020 the Code was revised in response to further concerns about diversity and inclusion. The 2020 revisions (IEEE 2020b) retained the 2013 and 2017 changes while adding “high-level principles to focus members on key elements of the Code, a commitment not to engage in harassment, and the protection of the privacy of others” (Russell 2020). None of these revisions resulted from a scheduled review of the Code. Rather, all were the product of a bottom-up approach occasioned by recognition of changing social values and the importance of incorporating such values in the Code.

To illustrate the importance of a bottom-up approach, this chapter describes the 2017 changes, including their motivation, context, and history. The impetus for these changes was the emerging public discussion of ethics and “Artificial Intelligence (AI).” Specifically, a 2016 US policy forum, the AI Now Symposium, involving the White House and New York University, issued public recommendations that professional associations review their codes and update them in line with significant impacts expected from the emerging technology (Crawford et al. 2016, 20–21). Within the IEEE at the time, a series of ethics-related activities were developing, both around the way that technologists build technology and the way that technologists behave. The changes addressed discussions that had taken place at the time the IEEE Code of Ethics gained its current principles-based approach at the beginning of the 1990s. They also picked up aspects that had been under discussion since the 1970s.

1  “IEEE’s membership has long been composed of engineers, scientists, and allied professionals. These include computer scientists, software developers, information technology professionals, physicists, medical doctors, and many others, in addition to IEEE’s electrical and electronics engineering core. For this reason, the organization no longer goes by the full name [Institute of Electrical and Electronics Engineers], except on legal business documents, and is referred to simply as IEEE.” (IEEE 2020a)


The IEEE’s mechanism for changing its Code of Ethics involves both membership discussion and approval by a two-thirds majority of the IEEE’s Board of Directors (BoD), the organization’s governing board. The membership discussion elicited more than 25 written submissions, and the final recommendations were adopted unanimously in November 2017. The revisions included a reference to “intelligent systems” within the Code, the first time that the IEEE Code of Ethics has drawn attention to a specific technology. In addition, they strengthened the responsibility of technologists for the “safety, health and welfare of the public,” requiring members to hold these “paramount”; flagged requirements for consideration of ethical design and sustainable development; and called attention to the relationship between technology and society. This chapter also looks at lessons learned from other societies in this process and provides background context for the events described.

While there is a significant body of literature on technology and ethical practice, there has been less focus on the development of codes of ethics within professional engineering and technology societies. Layton (1986) provided a thorough description of U.S. professional engineering associations in the first half of the twentieth century. Pfatteicher (2003) detailed the establishment of the American Society of Civil Engineers’ first Code in 1914. Pugh (2009) described the history and circumstances around his introduction of a revised IEEE Code in 1990. Davis (2009) provided historical context on software engineering. Tang and Nieusma (2017) have documented the SSIT engagement with the 1974 IEEE Code of Ethics and subsequent practice.

In addition to the IEEE Code of Ethics, IEEE has many other documents, programs, conferences, and awards that have a primary or significant focus on the fields addressing ethics and technology. Adamson et al. (2019) list more than 50 such activities. These sit in two overlapping categories: (a) the behavior of IEEE members and other technologists in their professional practice; and (b) ethical considerations arising from technology and its development. The first category includes the IEEE Computer Society/Association for Computing Machinery Software Engineering Code of Ethics and Professional Conduct, the IEEE Engineering in Medicine and Biology Society Code of Ethics, the IEEE Code of Conduct, and the IEEE Policy against Discrimination and Harassment. The second category has arisen in recent years and includes the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Conferences with a significant ethics focus include the annual International Symposium on Technology and Society, the IEEE International Symposium on Ethics in Science, Engineering and Technology (2014, 2016), and the IEEE Conference on Norbert Wiener in the twenty-first century (2014, 2016, 2020). Awards include the IEEE SSIT Carl Barus Award for Outstanding Service in the Public Interest, the IEEE SSIT Norbert Wiener Award for Professional and Social Responsibility (formerly administered by the Computer Professionals for Social Responsibility), the IEEE Student Ethics Competition, and the IEEE Award for Distinguished Ethical Practices. Organizational
units within IEEE include the Ethics and Member Conduct Committee, the Society on Social Implications of Technology, the Robotics and Automation Society, and the Computational Intelligence Society. Together these programs and activities create a complex range of ethics-related practice reflecting the size, diversity, and responsibilities of the 420,000-member global organization.

8.2  Background and Motivation for Changes

In 1972 a group of engineers undertook the difficult task of drawing attention to the IEEE’s ethical and societal responsibilities, forming IEEE’s Committee on Social Implications of Technology (CSIT). In 1982 this became the Society on Social Implications of Technology (SSIT) (Stephan 2006). These engineers based their initiative on real-world circumstances and played an essential part in changing the culture of the IEEE. Since the 1990s, the issues which SSIT has championed have become mainstream. For example, in 1994, IEEE Women in Engineering was established. In 2009, IEEE as a whole was ready to embrace the SSIT perspective, adopting the tagline “Advancing Technology for Humanity.” In 2015 both the IEEE Humanitarian Activities Committee and IEEE Sustainable ICT were established. These addressed issues which CSIT and SSIT had championed since 1972. SSIT members were involved in these initiatives.

Since 2016, SSIT, the IEEE Standards Association, the IEEE Technical Activities Board (TAB), and other units of IEEE have placed a renewed emphasis on ethics. In 2017 IEEE President Karen Bartleson gave a significant presentation on IEEE’s ethical responsibilities to IEEE’s triennial Sections Congress in Sydney. This was one of 10 separate presentations on ethics, policy, and the interrelation of technology and society at that congress. 2017 also saw an update to the IEEE Code of Ethics, one that emphasized that the early views of SSIT had become mainstream within IEEE.

The IEEE first adopted a Code of Ethics as the American Institute of Electrical Engineers (AIEE) in 1912. In the early twentieth century, during the time that this and other early engineering codes were established, societal engagement of technologists could be seen in at least two areas: resource preservation and scientific management. Early efforts to eliminate waste of resources included the first U.S. Conference of Governors at the White House in 1908. In his opening speech, President Theodore Roosevelt stated: “the natural resources of our country are in danger of exhaustion if we permit the old wasteful methods of exploiting them longer to continue” (Roosevelt 1908). The second, scientific management, is known today through Taylor’s time and motion studies, which strongly influenced twentieth-century factory management. Taylor’s work is criticized for dehumanizing factories (and critiqued in the Charlie Chaplin film Modern Times). These criticisms address the limitations of efficiency as a dominant metric in technology development. His work nevertheless retains enormous influence in technology thinking today. One member of his team, Henry Gantt, is memorialized in the Gantt charts used in project management.


The idea that technologists through their work have an innate interest in the well-­ being of their communities is contested. Some criticism begins with the observation that technologists tend to transition from professional engineering to business management as their careers progress (Layton 1986). As the managerial perspective and social benefit often conflict, technologists are then found guilty through association. In contrast, Veblen (1921, 79), a radical observer of the industrial system, described technologists as, “by force of circumstance, the keepers of the community’s material welfare.” The circumstances he points to are a concern for technical performance and efficient use of resources. He describes the early role of the professional engineer as the trouble-shooter brought in by finance capital to ensure that heads of industry adopt good engineering practices (which they would often seek to avoid). In their case, profitability had a strong mandate. For the vast majority of technologists, however, once employed, their primary professional interest was in solving problems, rather than in their company’s profitability. Veblen and other commentators pointed out that if they placed their company’s profitability first, technologists would not be doing their jobs as technologists. This view was echoed by mainstream technology leader and commentator Vannevar Bush (Layton 1986). By the 1930s, the successes of scientific management were threatening to overwhelm the value of engineering itself applied to the industrial process. Writing in 1939, Bush appealed: “We may as well resign ourselves to a general absorption as controlled employees, and to the disappearance of our independence. We may as well conclude that we are merely one more group of the population… forced in this direction and that by the conflict between the great forces of a civilized community, with no higher ideals than to serve as directed” (Bush 1939). Davis argues that, unlike the professional areas of law and medicine, the primary responsibility of the engineer is not to the client or employer but to the broader community: ‘The defense “I’m an engineer, but I didn’t promise to follow the code and therefore did nothing wrong,” is never accepted. The profession answers, “You committed yourself to the code when you claimed to be an engineer”’ (Davis 1998, 37). A similar conclusion can be reached by considering Kant’s “duty” as a feature of the reputation of technology professionals. For Kant, duty is something without price, having “an inner worth, i.e., dignity” (Kant [1785] 2012, 46). The need for engineering expertise to maintain independence from the employer’s financial interest was dramatically underlined in the 1986 space shuttle Challenger disaster. When urging an engineering manager of a shuttle contractor to approve the Challenger launch in unusually cold weather, his manager said, “take off you engineering hat and put on your management hat” (Davis 1991). The contractor’s engineers expressed unanimous concern about the high risk of launching at such a low temperature (well beyond prior experience and with little testing). Nevertheless, the contractor’s engineering managers then gave approval, resulting in disaster due to a failure in Solid Rocket Booster O-ring seals, the component that the engineers were concerned about.

150

G. Adamson and J. Herkert

The Challenger example also shows that financial interest is only part of the threat to ethical behavior. Governmental pressure and rewards for political, military, bureaucratic, or other goals are also significant considerations (Vaughan 1997).

Another alternative to Veblen’s “force of circumstance” view is perceived aspiration. Layton argues that since the late nineteenth century, engineers have had a driving concern for social status, particularly to achieve the prestige of law and medicine. This is seen to emerge from the public responsibility taken by lawyers and doctors. “Thus, engineers have argued that in order to gain more status, their profession should show a greater sense of social responsibility” (Layton 1986, 6). Perhaps this is the case, but there are many examples of the desire of technologists to “solve problems,” which at the least implies a sense of social responsibility. Social status could then be used as a rationale to codify this responsibility. There is no significant body of evidence which shows that technologists do not truly care about social responsibility, although in many cases technologists may display a poor understanding of what social responsibility is. The fields of industrial design, human-computer interaction (HCI), user-centered design, user experience (UX), and customer experience (CX) all fill gaps in technologists’ perception of how to solve a problem.

Since codes of ethics are normative, their influence also depends on linkage to education, standards development, public attitude, the attitude of fellow technologists, and the further connection of each of these to regulation, legislation, and legal precedent. For example, a court-appointed expert in ethics describes the process in Bosnia and Herzegovina as follows:

Since information and communication technologies are relatively new and, moreover, rapidly growing, in our legislation there is a problem of non-coverage of all areas through laws and by-laws. As a court expert, I am obliged to give my opinion, referring to relevant laws, by-laws, international standards and good practices (Law on Experts of the Federation of Bosnia and Herzegovina). … my obligation under the Code of Court Experts and the Code of Engineering Ethics is to give my opinion within the framework of international standards and good engineering practices. The Court had no objections to such an explanation. (Haris, personal correspondence, December 6, 2018)

Following the creation of IEEE through the merging of the AIEE and the Institute of Radio Engineers in 1963, a new code was adopted in 1974 (Stern 1975). It has since been amended several times (IEEE 1979, 1987, 1990, 1997, 2006, 2013, 2017, and 2020b). Codes of various professional societies, including IEEE, can be found in the Ethics Codes Collection of the Center for the Study of Ethics in the Professions at the Illinois Institute of Technology (http://ethicscodescollection.org/).

The current IEEE Code of Ethics is principles-based, providing ten canons for members to follow. This format was introduced in 1990 and replaced the lengthier approach of earlier IEEE Codes. The author of this approach, Emerson W. Pugh, had been IEEE President in 1989. In a paper, “Creating the IEEE Code of Ethics” (2009), he provides a history of the Code, beginning with a 1906 presidential speech on “Engineering Honor.” Pugh changed the structure of the Code from a detailed listing to a much shorter set of principles. He explained, “The draft had exactly ten canons. I liked the number ten because people throughout the world have
ten fingers, they use a decimal system for counting, and many are accustomed to having a moral code specified by ten commandments” (Pugh 2009). The 1991 IEEE Code was one of the first engineering codes to note the importance of environmental protection and improving “understanding of technology” (Herkert 2009).

As noted above, the creation of the IEEE in 1963 opened a new chapter in the responsibility of the technologist. Tang and Nieusma (2017) describe key points of discussion for CSIT when the first IEEE Code of Ethics was adopted in 1974. These primarily focus on the paramountcy of the public good. As described below, this wording is now found in the IEEE Code of Ethics.

The most notable of the changes to the IEEE Code made between 1990 and 2017 occurred in 2013, when the IEEE Code’s eighth provision on diversity and inclusion was updated and strengthened, making IEEE “…the first engineering professional society to place non-discrimination on the basis of gender identity and expression into its Code of Ethics, and among the first adding sexual orientation” (Riley et al. 2015). This legacy of both ground-breaking achievements and, at times, controversial outcomes in the history of IEEE Codes informed the drafters of the 2017 revisions.

8.3  Process

In 2016, the Technical Activities Board (TAB) Ad Hoc Committee on Ethics decided to respond to a document prepared by New York University and the Obama White House. This document considered what changes were required to codes of ethics in light of the enormous investment in AI technologies then underway. The report made a series of recommendations, including:

Work with professional organizations such as The Association for the Advancement of Artificial Intelligence (AAAI), The Association for Computing Machinery (ACM) and The Institute of Electrical and Electronics Engineers (IEEE) to update (or create) professional codes of ethics that better reflect the complexity of deploying AI and automated systems within social and economic domains. Mirror these changes in education, making courses in civil rights, civil liberties, and ethics required training for anyone wishing to major in Computer Science. (Crawford et al. 2016, 20–21)

It is not common for discussions in public policy to explicitly call on professional associations to take specific actions. A discussion of this is beyond the scope of this chapter, but in part it appears to relate to the separation between the world of professional associations and the world of public policy development. IEEE has a Global Public Policy Committee, whose responsibility is to oversee, where possible, the development of global policies related to the interests of technologists, broadly defined. For example, in June 2016, the IEEE BoD adopted a policy expressing concern over “backdoor” methods, often requested by governments, that are designed into technologies to circumvent security protections (IEEE 2018). This subject relates to effective technology and therefore was a good candidate for global policy agreement.

152

G. Adamson and J. Herkert

The IEEE also engages with organizations such as the United Nations and the World Economic Forum. On the other hand, the IEEE has no formal process or mechanism to respond to public requests from external organizations, such as the AI Now proposal. The call came during a period of renewed IEEE interest in ethics. In 2016 an ad hoc committee had been established within the IEEE Technical Activities Board (TAB), the group responsible for the 45 or so technical societies and councils which members may choose to join. The TAB ad hoc committee proposed that IEEE respond by reviewing its Code of Ethics. In 2017 IEEE’s ruling body, the Board of Directors, created the Ad Hoc Committee on Ethics Programs, which in part had the task of undertaking the review.

Once the review had commenced, the ad hoc committee considered additional topics beyond the AI Now report. Two of these responded to critical developments in recent years: the UN adoption of the Sustainable Development Goals for 2030, and the “ethics in design” work done by IEEE and others. Long-standing questions were also considered, including the paramountcy of the public welfare and the relationship between technology and society. In all, five areas were updated within canons 1 and 5 of the Code. During the process of reviewing the IEEE Code, the ad hoc committee considered the codes of other major engineering and computer societies, including through communications with the in-house legal counsel of one society. In particular, proposed changes related to sustainable development and the paramountcy of the public welfare were in part influenced by these other codes.

Revisions to the IEEE Code of Ethics follow a process of initiation, review, membership discussion, and adoption. First, the Board of Directors adopts a draft for membership discussion. Then the draft is publicized during a member comment period. The BoD then has the power to amend the submission at a second meeting and requires a two-thirds majority at this meeting to adopt the amended Code of Ethics. The ad hoc committee met from the beginning of 2017 and proposed a draft at the June 2017 Board of Directors meeting. The BoD adopted the first draft without opposition. It was then widely distributed to members, particularly through The Institute, which was sent to all 420,000 members, with a discussion period of three months. Some 26 detailed submissions addressed all aspects of the proposed revisions and other areas related to the Code. They came from members around the world, ranging from those with a deep understanding of the history of codes of ethics to those with a strong interest in and concern for the behavior of technologists and their role in the world.

In addition to input from IEEE members, committee members sought the views of a range of people interested in the subject (correspondence in authors’ files). This early informal feedback helped shape the proposed revisions to the Code. N.R. Narayana Murthy, author of A Better India, A Better World and a founder of Infosys, focused on the technologist’s responsibility, requesting fellow engineering professionals “to ask whether their action will lead to more respect for their profession and for themselves in every action of theirs.” Rafael Capurro, founder of the International Centre on Information Ethics, spent ten years working on
Opinions for the European Commission with the European Group on Ethics in Science and New Technologies (EGE). He suggested engaging with processes of consultation, opening different kinds of ways or rules for future action (without taking away the responsibility of decision-makers such as parliaments) “in order not to proceed too quickly with practical do’s and do not’s.” Former IEEE President Arthur Winston connected the needs of ethical professional behavior to the responsibility of the professional within their professional community. This is important when confirming the suitability of candidates for election: a person may have contributed to IEEE “but has social views not in keeping with our mores such as racial or sexual discrimination.”

The topic also attracted significant interest from each of the IEEE’s major organizational units considering the changes. These included the Technical Activities Board, Member and Geographical Activities, the Publication Services and Products Board, IEEE USA, and the IEEE Standards Association. Each of these groups had been invited to submit feedback on the proposed revisions during the discussion period. Feedback from individuals and organizational units ranged widely, and the ad hoc committee incorporated elements of this feedback before resubmitting the proposed changes to the Board of Directors. The BoD unanimously adopted the wording as presented in November 2017 (IEEE 2017).

The final proposed changes presented to the Board of Directors and adopted are as follows. Text struck through in the original (rendered here between ~~double tildes~~) was removed from the previous version; underscored text (rendered here between _underscores_) is new:

[IEEE members] agree:

1. to ~~accept responsibility in making decisions consistent with~~ _hold paramount_ the safety, health, and welfare of the public, _to strive to comply with ethical design and sustainable development practices,_ and to disclose promptly factors that might endanger the public or the environment;

5. to improve the understanding ~~of technology, its appropriate application, and potential consequences~~ _by individuals and society of the capabilities and societal implications of conventional and emerging technologies, including intelligent systems_;

8.4  Specific Changes

The five major areas of change noted above are as follows:

Paramountcy of Public Welfare  The first Fundamental Canon of a revised Code of Ethics of Engineers, adopted in 1974 by the Engineers Council for Professional Development (eventually known as ABET), held that “Engineers shall hold paramount the safety, health, and welfare of the public in the performance of their professional duties.” This provision, which has come to be known as the “paramountcy clause,” has subsequently been adopted in one form or another in most modern
codes of engineering ethics. For example, the American Society of Mechanical Engineers (ASME) Code’s first Fundamental Canon has this exact wording, while the American Society of Civil Engineers (ASCE) Code’s first Canon states “Engineers shall hold paramount the safety, health, and welfare of the public and shall strive to comply with the principles of sustainable development in the performance of their professional duties.” Beginning in 1990, the first provision of the IEEE Code of Ethics pledged its members “to accept responsibility in making decisions consistent with the safety, health, and welfare of the public, and to disclose promptly factors that might endanger the public or the environment.” While it remains unclear why the authors of the 1991 Code chose “consistent with” rather than “hold paramount,” the new Code avoids ambiguity by adopting the conventional wording “to hold paramount the safety, health, and welfare of the public.” It should be noted that in addition to including the importance of ethical design and sustainability (discussed below), the new version of the First Canon retains wording from prior versions that pledges members to “disclose promptly factors that might endanger the public or the environment.” This is a unique feature of the IEEE Code in that it highlights the importance of disclosure when the public or environment is endangered.

Ethical Design Practices  A recent development in engineering practice is the concept of “ethics in design.” This concept raises at least two questions. First, do designers incorporate moral values in their work; that is, are ethical issues valid considerations in the design of technology? Second, if they are, is there anything we can do about them? From 2016 the IEEE had been working on programs that consider the second question. (The first question had been assumed to have an affirmative answer, illustrating the changing assumptions within the technology community. Further discussion of this is outside the scope of this chapter.)

In 2016 the IEEE Standards Association (IEEE-SA) began work on the Global Initiative on Ethics of Autonomous and Intelligent Systems. This drew together a multi-disciplinary team of several hundred people who looked at the questions of ethics and technology in the broadest sense. The first version of this report was released for comment in late 2016. A second version appeared in 2017 and attracted more than 300 pages of comments from 72 organizations and individuals. A revised version, listed as the first edition, was published in 2019 (IEEE 2019). The approach taken by the Global Initiative has been to look at the question of ethics in AI from several perspectives and then to integrate those perspectives and develop key recommendations. IEEE is not the only organization working in this field. However, it is unique in engaging the views of technology professionals covering the complete cyber-physical range of technologies.

A subsequent initiative in IEEE has been within the Technical Activities Board (TAB). IEEE-SA begins from the perspective of how standards can be developed to help technology succeed in a human-supportive way. IEEE-SA provides a platform
for developing these standards and is prominent in the field. TAB approaches the issue by asking: with what technologies and technological challenges will IEEE’s members be working in the near future? TAB established the TechEthics initiative in 2017, based on prior work of its ethics ad hoc committee mentioned above. TechEthics leads a range of activities related to ethics in design across IEEE.

Sustainable Development Practices  In 2009, the IEEE adopted the tagline “Advancing Technology for Humanity,” often expanded to “Advancing Technology for the Benefit of Humanity.” Adopting this tagline raises some questions, perhaps the leading one being, “Who decides?” For example, different countries may prioritize different technological goals, from the right to bear arms to the right to safe drinking water. The United Nations’ 17 Sustainable Development Goals (SDGs) for 2030 represent the outcome of significant global discussion, involving far broader communities than IEEE could engage. The goals provide a means of agreeing on benefit without an interminable argument about whose view of benefit should prevail. Including sustainable development in the Code sidesteps a separate argument about whether all IEEE members endorse every part of the SDGs. As an organization, IEEE recognizes the effort represented by the SDGs and points to that as a starting point in ethical consideration of technology.

Societal Implications of Technology  While previous IEEE Codes of Ethics had encouraged members to help in understanding technology, making explicit mention of the societal implications of conventional and emerging technologies was a new step. IEEE’s Society on Social Implications of Technology has made two distinct contributions to IEEE and the world of technology. First, it has undertaken key work in explaining the connection between technology and society. Currently, the SSIT’s five “pillars” are Sustainable Development, Ethics and Human Values, Universal Access to Technology, Protecting the Planet, and Social Impacts of Technology. Second, beginning with the work of Steve Unger and others in the 1970s, SSIT has upheld the technology community’s principle of supporting technologists victimized for holding to ethical principles. For example, SSIT’s Carl Barus Award for Outstanding Service in the Public Interest was created for that purpose, with the first award made in 1978 in recognition of three engineers in San Francisco who raised safety issues during the design of the Bay Area Rapid Transit (BART) system.

The SSIT has initiated several events that address ethics, including the International Symposium on Technology and Society (ISTAS), the IEEE Conference Series on Norbert Wiener in the twenty-first century, the IEEE Conference on Technology and Society in Asia, and the IEEE International Symposium on Ethics in Engineering, Science, and Technology. In addition to SSIT’s own activities, SSIT leaders have played critical roles in IEEE societal activities including the Humanitarian Activities Committee, the Global Public Policy Committee, the IEEE Internet Initiative, Women in Engineering, the Sustainable Technology Conference, TechEthics, and the Ethically Aligned Design reports. SSIT has led through its publication IEEE Technology & Society
Magazine, through articles for the Proceedings of the IEEE, and through guest-edited issues of other IEEE journals. It is launching the IEEE Transactions on Technology & Society in 2020. SSIT now hosts the Norbert Wiener Award for Social Responsibility, established by Computer Professionals for Social Responsibility, and is a sponsor of standards within IEEE. SSIT is the primary home for IEEE’s engagement with the science, technology, and society (STS) field. For example, Louis Bucciarelli and Deborah Johnson spoke at the 2018 ISTAS conference, and Judy Wajcman addressed the 2016 conference on Norbert Wiener in the twenty-first century. SSIT participated in the Society for Social Studies of Science (4S) conferences in 2016 and 2018 with a panel and a presentation, respectively.

Intelligent Systems  The inclusion of a reference to intelligent systems is a significant break with tradition, being the first time that the IEEE Code has referred to a specific technology. The rationale for including this reference aligned with the reason given by AI Now for professional associations to review their codes. Intelligent systems (which also go under the names of Artificial Intelligence, Autonomous and Intelligent Systems, Artificial General Intelligence, and others) are a fundamental departure from previous technologies. A technology that can, in turn, create other technologies changes the way technologists need to think about technology. While technology has always had a wide range of potential impacts, “intelligent” systems with the capacity to solve problems in novel ways open an enormous range of opportunities as well as ethical issues. The technology of intelligent systems and its social and ethical implications are of great interest within IEEE. For example, the IEEE Computer Society publishes the bi-monthly journal IEEE Intelligent Systems. Topics discussed at the SSIT-sponsored ISTAS 2018 conference included the social and ethical implications of intelligent systems such as autonomous vehicles, smart electrical grids, the internet of things, robot companions for children, and robot caregivers.

8.5  Conclusion

The philosopher Michael Davis has said of professional codes of ethics (2007):

Like people, codes age. What was up-to-date in 1980 may sound ancient even 25 years later. Part of keeping a code a living practice is providing for regular re-examination. And even if codes did not age, they would be imperfect, with experience revealing unexpected imperfections or confirming the existence of imperfections already suspected. While it is never possible to have a perfect code, it is always possible not only to improve what age has damaged but to improve what has aged well…. The work of [revising] the code can be given to… a temporary committee…. Such a body could carefully review the entire code, survey the users, or compare it with other codes, propose revisions based on what it learned, and then dissolve.


The revisions to the IEEE Code of Ethics discussed in this chapter were initially motivated by an interest in updating the Code in light of the concept of ethical design and the emergence of intelligent systems. The draft revisions were prepared by an ad hoc committee based on a complete review of the current Code and other engineering codes, and on input from IEEE organizational entities and individual members. In the process of developing the revisions, an inconsistency with other engineering codes in the wording of the paramountcy clause was corrected, and additional updates were incorporated regarding sustainable development practices and the societal implications of technology. All of the 2017 revisions speak to the need for engineering in general, and IEEE in particular, to embrace a greater sense of social responsibility in developing, articulating, and implementing ethical principles (Basart and Serra 2013). Other revisions to the Code in 2013 and 2020 strengthened provisions on harassment and privacy. Future revisions to the Code that may merit consideration include directly addressing the concept of social justice in terms of the ethical responsibilities of engineers (Riley and Lambrinidou 2015) and expanding the scope of the IEEE Code from a code for IEEE members to a code for all technologists practicing in IEEE’s fields of interest.

References

Adamson, Greg, John Havens, and Raja Chatila. 2019. “Designing a values-driven future for ethical autonomous and intelligent systems (A/IS).” Proceedings of the IEEE, January.
Basart, Josep M., and Montse Serra. 2013. Engineering ethics beyond engineers’ ethics. Science and Engineering Ethics 19 (1): 179–187.
Bush, Vannevar. 1939. The professional spirit in engineering. Mechanical Engineering 61: 195–198.
Crawford, Kate, Meredith Whittaker, Madeleine Clare Elish, Solon Barocas, Aaron Plasek, and Kadija Ferryman. 2016. The AI Now Report: The social and economic implications of artificial intelligence technologies in the near-term. https://ainowinstitute.org/AI_Now_2016_Report.pdf. Accessed March 10, 2020.
Davis, Michael. 1991. Thinking like an engineer: The place of a code of ethics in the practice of a profession. Philosophy & Public Affairs 20 (2): 150–167.
———. 1998. Thinking like an engineer: Studies in the ethics of a profession, 37. Oxford, UK: Oxford University Press.
———. 2007. Eighteen rules for writing a code of professional ethics. Science and Engineering Ethics 13 (2): 171–189.
———. 2009. Code making: How software engineering became a profession. Chicago: Center for the Study of Ethics in the Professions. http://ethics.iit.edu/sea/1/book/Full%20book.pdf. Accessed March 10, 2020.
Herkert, Joseph. 2009. Macroethics in engineering: The case of climate change. In Engineering in context, ed. Steen H. Christensen, Martin Meganck, and Bernard Delahousse, 435–445. Copenhagen: Academica.
IEEE. 1979. Code of Ethics. Ethics Codes Collection. Center for the Study of Ethics in the Professions, Illinois Institute of Technology. http://ethicscodescollection.org/detail/8f3b6e4a-8ac8-49a6-a2f6-739f72eb9ea3. Accessed March 10, 2020.
———. 1987. Code of Ethics. Ethics Codes Collection. Center for the Study of Ethics in the Professions, Illinois Institute of Technology. http://ethicscodescollection.org/detail/dbe9041d-2ce8-42fb-8028-c66f2f96ee95. Accessed March 10, 2020.
———. 1990. Code of Ethics. Ethics Codes Collection. Center for the Study of Ethics in the Professions, Illinois Institute of Technology. http://ethicscodescollection.org/detail/5030eb1a-ff9d-4124-813f-73f56558052b. Accessed March 10, 2020.
———. 1997. Code of Ethics. Ethics Codes Collection. Center for the Study of Ethics in the Professions, Illinois Institute of Technology. http://ethicscodescollection.org/detail/f1baec7b-321a-47a0-bc41-a822b9859858. Accessed March 10, 2020.
———. 2006. Code of Ethics. Ethics Codes Collection. Center for the Study of Ethics in the Professions, Illinois Institute of Technology. http://ethicscodescollection.org/detail/6c90ab20-c581-4efe-92ed-122998e49332. Accessed March 10, 2020.
———. 2013. Code of Ethics. Ethics Codes Collection. Center for the Study of Ethics in the Professions, Illinois Institute of Technology. http://ethicscodescollection.org/detail/18d63ae0-c50e-4e33-9ee0-8bda038d2175. Accessed March 10, 2020.
———. 2017. Code of Ethics. Ethics Codes Collection. Center for the Study of Ethics in the Professions, Illinois Institute of Technology. http://ethicscodescollection.org/detail/94bcd74e-3d38-44c4-bdaa-ffa7b498dffe. Accessed March 10, 2020.
———. 2018. “In support of strong encryption.” Resolution of IEEE Board of Directors. http://globalpolicy.ieee.org/wp-content/uploads/2018/06/IEEE18006.pdf. Accessed March 10, 2020.
———. 2019. Ethically aligned design, First Edition. https://standards.ieee.org/content/ieee-standards/en/industry-connections/ec/autonomous-systems.html. Accessed March 10, 2020.
———. 2020a. “History of IEEE.” https://www.ieee.org/about/ieee-history.html. Accessed January 9, 2020.
———. 2020b. IEEE Code of Ethics. https://www.ieee.org/about/corporate/governance/p7-8.html. Accessed November 24, 2020.
Kant, Immanuel. [1785] 2012. Groundwork of the metaphysics of morals. Cambridge, UK: Cambridge University Press.
Layton, Edwin T. 1986. The revolt of the engineers: Social responsibility and the American engineering profession. Baltimore, MD: Johns Hopkins University Press.
Pfatteicher, Sarah K.A. 2003. Depending on character: ASCE shapes its first code of ethics. Journal of Professional Issues in Engineering Education and Practice 129 (1): 21–31.
Pugh, Emerson W. 2009. “Creating the IEEE Code of Ethics.” In 2009 IEEE Conference on the History of Technical Societies, Philadelphia, PA, 5–7 August 2009, 1–13. https://doi.org/10.1109/HTS.2009.5337855.
Riley, Donna, and Yanna Lambrinidou. 2015. “Canons against Cannons? Social Justice and the Engineering Ethics Imaginary.” In 122nd ASEE Annual Conference and Exposition, American Society for Engineering Education, Seattle, WA, 14–17 June 2015. https://doi.org/10.18260/p.23661.
Riley, Donna M., Amy E. Slaton, and Joseph R. Herkert. 2015. “What is Gained by Articulating Non-canonical Engineering Ethics Canons?” In 122nd ASEE Annual Conference and Exposition, American Society for Engineering Education, Seattle, WA, 14–17 June. https://peer.asee.org/what-is-gained-by-articulating-non-canonical-engineering-ethics-canons.
Roosevelt, Theodore. 1908. “Opening Address by the President.” Proceedings of a Conference of Governors, Washington, D.C., 13–15. The Evolution of the Conservation Movement, 1850–1920. http://lcweb4.loc.gov/cgi-bin/query/r?ammem/consrv:@field(DOCID+@lit(amrvgvg16div19)). Accessed January 23, 2019.
Russell, Kristen. 2020. “Board of Directors Approves Revisions to the IEEE Code of Ethics.” The Institute, 20 July. https://spectrum.ieee.org/the-institute/ieee-news/board-of-directors-approves-revisions-to-the-ieee-code-of-ethics.
Stephan, Karl D. 2006. Notes for a history of the IEEE society on social implications of technology. IEEE Technology and Society Magazine 25 (4): 5–14.
Stern, Arthur P. 1975. Sociotechnology: IEEE code of ethics: In response to the members’ mandate, a set of guidelines on professional relationships is presented. IEEE Spectrum 12 (2): 65.
Tang, Xiaofeng, and Dean Nieusma. 2017. Contextualizing the code: Ethical support and professional interests in the creation and institutionalization of the 1974 IEEE Code of Ethics. Engineering Studies 9 (3): 166–194.
Vaughan, Diane. 1997. The Challenger launch decision: Risky technology, culture, and deviance at NASA. Chicago: University of Chicago Press.
Veblen, Thorstein. 1921. The engineers and the price system. New York: B.W. Huebsch.

Greg Adamson  Associate Professor (Honorary), School of Computing and Information Systems, University of Melbourne, Australia; [email protected]. Greg Adamson is 2020 Chair of the IEEE Ethics and Member Conduct Committee, addressing unethical behavior among the institute’s 420,000 members. His interests include transparency in artificial intelligence, human dignity in cyberspace, and the security of health technology.

Joseph Herkert  Associate Professor Emeritus of Science, Technology and Society, North Carolina State University, USA; [email protected]. Joe Herkert works in the areas of engineering ethics, engineering ethics education, and the societal and ethical implications of emerging technologies.

Chapter 9

Technocracy, Public Optimism, and National Progress: Constructing Ethical Guidelines for Responsible Nano Research and Development in China

Qin Zhu

Abstract  Three social images or forces have been shaping the construction of ethical guidelines for responsible nano research and development in China: technocracy, public optimism, and national progress. Efforts to assess the ethical implications of nanotechnology and integrate ethical considerations into national research and development policy have been exclusively dominated by technical experts (e.g., nano researchers, academicians, ethicists). A widely adopted ideology of technological optimism (e.g., “innovation is good”) encourages the public to entrust the ethical governance of nanotechnology to experts and the state. The state itself has created a social image that nanotechnology is key to the rejuvenation of the Chinese nation in the global context. Such an image of national progress, in turn, reinforces public optimism about emerging technologies. In this context, academic ethicists have played an active role in putting ethics on the national policy agenda and collaborating with scientists in constructing ethical guidelines for nano research and development.

Keywords  Ethical governance · China · Social images · Technocracy · Public optimism · National progress

9.1  Introduction

In the global context, China has recently become a powerhouse in nanotechnology research and development. Nanotechnology has been placed by Chinese technocratic policymakers at the top of their political agenda.
For instance, in the report “The Outline of National Medium- and Long-Term Science and Technology Development Plan (2006–2020)” drafted by the Ministry of Science and Technology, nanotechnology was identified as a critical area of focus. In 2011, the Chinese government created the National Committee for Direction and Coordination of Nanoscience and Nanotechnology Research. In 2012, China launched a Strategic Pioneering Program on Nanotechnology, funded at the rate of one billion yuan (152 million USD) over five years. In 2016, China ranked first globally in the number of scientific papers and patents in nanotechnology. Emerging economies such as the BRICS countries (Brazil, Russia, India, China, and South Africa) have increased public funding for basic and applied research in the nanosciences in the hope of closing the nanotechnology gap with leading industrialized countries (Falkner and Jaspers 2012). Thus, China’s early efforts to regulate nano research and development were very much focused on providing support for technology commercialization and economic growth.

Nevertheless, it is worth noting that fast-paced technological innovation has also brought daunting challenges to the limited regulatory capacity of this emerging economy to deal with nanotechnology risks. In 2009, Chinese scientists reported that in a paint factory using nanoparticles, seven Chinese women suffered permanent lung damage and two of them died after working for several months without proper protection (Lyn 2009). This tragedy is one of the earliest publicly reported cases in China in which nanoparticles contributed to occupational safety and health problems for workers. It is also the case most cited by Chinese policymakers to advocate for more intensive nano safety studies and regulations.

Compared to other emerging economies, China has, since the early development stages of nanotechnology, been more active in discussing and drafting guidelines for regulating nano research. The Chinese Academy of Sciences (CAS) has played a leading role in constructing guidelines for responsible nano research. In 2007, regulations related to the laboratory use of nanomaterials were first promulgated by CAS. During the same year, CAS drafted a national regulation for the Ministry of Health, one of the first such regulations of nanotechnology in the world (Qiu 2016). Scientists at CAS have been proactively collaborating with philosophers and social scientists to discuss and formulate guidelines for responsible nano research. In January 2009, scholars from leading Chinese and British research institutions such as CAS, Tsinghua University, Cambridge University, and the University of Manchester organized a bilateral workshop titled “Nano Regulation and Innovation: Issues in Humanities and Social Sciences” in Beijing. In November 2009, the Chinese Society for the Dialectics of Nature and the National Center for Nanoscience and Technology (led by CAS) organized the first national conference on nanoethics at the Dalian University of Technology (DUT). As discussed in later sections, Chinese efforts to construct guidelines for nano research and development are technocratic by nature, and the development of these guidelines is often led by knowledge experts. These experts include both scientists (e.g., CAS scientists such as the Director of the National Center for Nanoscience and Technology, Dr. Yuliang Zhao) and humanists (e.g., philosophers and policy scholars represented by Dr. Guoyu Wang at Fudan University, formerly at DUT).
However, scholars have expressed concern about the limited role the public plays in the discourse on the ethical governance of nanotechnology.


On October 19, at the 2018 Asian Nano Science and Technology Forum, Fudan University’s Center for Applied Ethics (led by Dr. Guoyu Wang) and the National Center for Nanoscience and Technology published the “Chinese Code of Conduct for Nanotechnology Research and Development (Draft Version).” This code of conduct can be viewed as a landmark in the institutionalization of China’s continuous efforts to construct ethical guidelines for nanotechnology. The Code of Conduct stipulates that safety is the first ethical principle. Other ethical canons include respecting individual rights (e.g., informed consent, privacy, and health rights), responsibility, upholding scientific integrity, transparency, and sustainability (Wang and Ren 2018). To a large extent, this Code of Conduct seems to be comparable to Western professional codes of ethics, especially those in biomedical fields.

This paper discusses three social images or forces that have been shaping the construction of ethical guidelines for responsible nano research and development in China: technocracy, public optimism, and national progress. Efforts to assess the ethical implications of nanotechnology and integrate ethical considerations into national research and development policy have been exclusively dominated by technical experts (e.g., nano researchers, academicians, ethicists). A widely adopted ideology of technological optimism (e.g., “innovation is good”) encourages the public to entrust the ethical governance of nanotechnology to experts and the state. The state itself has created a social image that nanotechnology is key to the rejuvenation of the Chinese nation in the global context. That image of national progress, in turn, reinforces public optimism about emerging technologies. In this context, academic ethicists have played an active role in putting ethics on the national policy agenda and collaborating with scientists in constructing ethical guidelines for nano research and development. This paper should be of particular interest to scholars and practitioners who are interested in the social and cultural contexts of technological governance and policymaking in China.

9.2  Technocracy

Technocracy is the first social image or force that has been shaping the construction of regulatory guidelines for responsible nano research and development in China. These guidelines have often been formulated under a centralized, technocratic policy architecture. Arguably, the technocratic culture of policymaking shapes and is shaped by China’s centralized political system. One example that demonstrates the technocratic nature of this policymaking approach is the active involvement of technical experts, including academicians, nano researchers, and ethicists. These experts make up nation-wide committees that oversee the regulation of nano projects across the whole country. In 2000, the National Steering Committee for Nanoscience and Nanotechnology (NSCNN) was created to coordinate and streamline all national nano research activities (Jarvis and Richmond 2011). NSCNN experts provide policy advice to the State Council, especially those departments that are crucial for providing funding, leading nation-wide research initiatives, coordinating technology transfer programs, and facilitating public understanding of science.


Institutions receiving this policy advice include CAS, the Chinese Academy of Engineering (CAE), the National Natural Science Foundation of China (NSFC), the Ministry of Science and Technology (MOST), the National Development and Reform Commission (NDRC), and the Ministry of Education (MOE) (Jarvis and Richmond 2011). The NSCNN comprises 21 scientists from universities, research institutes, and industry and 7 administrators from government agencies (Dalton-Brown 2012). Another example that showcases the technocratic nature of Chinese science and technology policymaking is that the State Council comprises technical ministries responsible for making policies, regulations, and standards for specific “technical” areas such as transportation, infrastructure, and telecommunications. Experts or “technocrats” often lead these ministries in different technical fields. However, such centralized, technocratic policymaking has encountered crucial challenges in the regulation of the environmental and health impacts of nano products, due to the “pervasive” nature of nanotechnology across many of the technical fields these ministries oversee. According to policy researchers Darryl S. L. Jarvis and Noah Richmond (2011, 5),

In the area of health and safety, China has a complex regime to manage and oversee the use and manufacture of chemical substances. This also applies to nanotechnology with multiple ministries, including the Ministries of Health, Communication, Public Security, and multiple administrative organs, including the State Food and Drug Administration (SFDA), State Administration of Standardization (SAS), State Administration of Work Safety (SAWS), and AQSIQ [the General Administration of Quality Supervision, Inspection, and Quarantine] – all having some control over Environment, Health and Safety (EHS) aspects of chemicals. This implies potential coordination difficulties and calls for inter-ministerial/inter-agency coordination to improve the management of potential regulatory gaps.

As briefly described in the last section, CAS, its affiliated research institutes (e.g., the Institute of High Energy Physics), and its academicians have played a dominant role in the creation of regulatory guidelines for nano research and development. CAS scientists have mainly served as policy advocates. Since its establishment in 1949 as the most prestigious academic research institute and a national academy, CAS has played a crucial role in providing advice to the central government (Li et al. 2016). Thus, it is unsurprising that CAS often plays a dual role in nanotechnology research and development in China: on the one hand, CAS conducts world-class, cutting-edge nano research; on the other hand, CAS leads nation-wide initiatives to discuss and create guidelines for conducting reliable and responsible nano research. Arguably, many CAS scientists sometimes equate nanoethics with “toxicology” or “occupational safety” (Dalton-Brown 2012), or what nano researchers would call environmental, health, and safety (EHS) impacts. Nano scientists in China, especially those from CAS, have played a dominant role in the risk discourse on nanotechnologies. Because of their advocacy, the government included issues such as nano safety in policy documents and designated special funds for research on the environmental, health, and safety concerns associated with nano products (Fautz et al. 2015).


Traditional criticism that portrays technocracy as “apolitical” (e.g., that technocrats are not concerned with values and ethics in technological design) may not apply to Chinese nano scientists. Chinese scientists have demonstrated concern for the reputation and social image of Chinese scientists as a professional community and an eagerness to develop professional autonomy within the scientific community. They have proactively engaged sociologists and philosophers in the discussion and formulation of a code of ethics for nano research (Fautz et al. 2015). Nevertheless, the technocratic approach to decision-making may also lead to some limitations, including the “technocratic fallacy” (Hansson 2004): it is a scientific issue how dangerous X is; therefore, scientists should decide whether or not X is acceptable. On this reasoning, it is the professional responsibility of nano researchers to uphold the safety, health, and welfare of the public. A fundamental assumption of technocratic policymaking is that public decisions on technological risk must be mainly or solely based on scientific information (Hansson 2004). The potentially harmful effects of nanotechnology need to be avoided because their avoidance is crucial for the public acceptance of nanotechnology. As pointed out by former CAS president Dr. Chunli Bai,

We must take great precautions against the potential harmful effects of nanotechnology—otherwise, there is likely to be strong public resistance, similar to the situation with GM crops these days. Any technology is a double-edged sword, and we must avoid the potential negative impact of nanotechnology on public health and the environment (Qiu 2016, 152).

In other words, the potentially harmful effects of emerging technologies threaten to create strong public resistance to these technologies. If scientists can address these harmful effects, it might be easier for the public to accept the technologies. In this sense, it is also possible that such technocratic policymaking has fostered a culture in which nanoethics, as a concept among CAS scientists, focuses mainly on technical matters (e.g., toxicology and occupational safety) rather than the broader social, cultural, political, or even religious implications of nanotechnology.

Policy researchers have worried that technocratic policymaking in China often leads to a collusion of interests between planners in the central government and nano scientists (Jarvis and Richmond 2011). Such a collusion of interests discourages scientists from engaging public groups in discussing guidelines for nano research. At the January 2009 Beijing workshop “Nano Regulation and Innovation: Issues in Humanities and Social Sciences” mentioned above, some scholars were concerned about the lack of public participation in the regulation of nano research. To address this issue, philosopher Duoyi Fei advocated for a Chinese approach to public participation in nano policymaking. More specifically, Fei’s approach to public participation in nano policymaking did not fundamentally challenge the dominant role of government, experts, and expertise.


Rather, at least during the early stages of developing a Chinese approach to public participation, the public would mainly serve a complementary yet secondary role in the technical system. As pointed out by Fei,

At the early developmental stages of public participation, [policymaking] should be set up based on the topics proposed by the government and experts. In contrast, the public participates in the assessment of what projects to fund. Later, policymakers and experts will check the validity and feasibility of public opinions (Bai 2009, 158–159).

Even though concerns about public participation in nanotechnology governance have been raised for more than a decade, it is interesting to note that even the most recent “Chinese Code of Conduct for Nanotechnology Research and Development (Draft Version)” (2018) was first released at a technical forum where most attendees were either technical or social-scientific experts. It is unclear when, where, and how public views on the document will be solicited. Unfortunately, the author was not able to find the full text of the Code of Conduct, only news reports about the sharing of this document at the 2018 Asian Nano Science and Technology Forum.

9.3  Public Optimism

The concept “chuangxin (innovation, 创新)” in Chinese literally means “creating new (things).” “Xin (new, 新)” carries positive and normative connotations in both the Confucian tradition and “Sinicized Marxism (zhongguohua de makesi zhuyi, 中国化的马克思主义).” The term “Sinicized Marxism” refers to the “Chinese version,” reinterpretation, or recontextualization of Western Marxism, and it serves as the dominant ideology of the Communist Party of China. It is a sort of “translated” Marxism, characterized by a combination of orthodox Marxist doctrines and Confucian doctrines (Wang 2018). In both Confucianism and Sinicized Marxism, new things are supposed to be better than old things. In Chinese innovation discourse, progress and affluence are two core values (Fautz et al. 2015). Dalton-Brown (2012) compared the nanoethics environments of the EU and China and found that Chinese consumers demonstrated stronger confidence in new products, including new nano products. To Chinese customers, “new equals good and ‘nano’ are pluses for marketeers” (Dalton-Brown 2012, 142). Public acceptance is crucial for companies that develop nanoproducts to justify the value of their products and stocks, request policy support from the government, and expand their markets.

Arguably, the public’s optimistic attitude toward technology has historical roots in Chinese scientism and Maoist ideology. Historians of science argue that during the May 4 Movement in 1919, the Chinese uncritically embraced Western science and technology without absorbing the spirit of skepticism and inquiry into the unknown (Simon and Goldman 1989). Such scientism cultivated an ideology among the Chinese that “scientific” means advanced, whereas “traditional” means backward.


In addition, Mao’s voluntarist philosophy held that any task could be accomplished through sheer will. That Maoist voluntarism has been integrated into the ideological education of scientists and engineers and has cultivated among them a perception of technology as a way of remolding the objective world (Zhu and Jesiek 2015). In this sense, the social image of public optimism can be linked to the social image of technocracy described in the last section: the voluntarist view embedded in the ideological education of Chinese nano researchers may foster a belief that technical experts and expertise can easily mitigate the negative impacts of nanotechnology.

In contrast to the precautionary principle prevalent in the regulation of nanotechnology in Europe, the optimism of the Chinese public has made Chinese scientists more proactive in making guidelines for nano research. Technology assessment inspired by the precautionary principle can be more conservative, and it emphasizes the rule that “one should never engage in a technological development or application unless it can be shown that this will not lead to large-scale disasters or catastrophes” (Engelhardt and Jotterand 2004, 303). Arguably, philosopher Guoyu Wang’s (2013) concept of the “ethics of feasibility,” which emphasizes the social acceptability of nanotechnology rather than its negative consequences, provides a philosophical account of the optimistic view of nanotechnology in China. Drawing on theories from Chinese philosophy, Wang (2013) advocates a more contextual, responsive, dynamic, and holistic approach to regulating nanotechnology development. Wang’s strategic approach to nanotechnology assessment is also utilitarian by nature. She worries that overly conservative, pessimistic, or even romanticized humanistic philosophies of nanotechnology often assess nanotechnology too generally, overlooking the differences between various nanoproducts and the potential opportunities to engage scientists and develop nano products that bring more benefits than risks. As argued by Wang,

Not all nanotechnology has side effects or may bring potential ethical risks. Not all nanotechnologies are at the same level of maturity. Talking about ethical issues of nanotechnology in a general way will only cause antipathy and non-cooperation of many scientists and hinder the development of nanotechnology. Such a general discussion of nanotechnology also is not conducive to the development of a mature discussion of nanotechnology in society (Wang 2013, 372–373).

Zhang, Wang, and Lin (2016) conducted a survey (N = 741) examining public perceptions of and attitudes toward nanotechnology in Dalian, China. Their results indicated that an optimistic attitude toward nanotechnology is prevalent among the public. Among the participants, 88.4% reported having heard of nanotechnology, and 72.87% perceived the benefits as outweighing the risks: the most likely benefit was that “nanotechnology may bring about new ways of life,” and the most likely risk was that “nanotechnology could lead to an arms race between nations.” The majority of the sample (96.6%) indicated an attitude of support for nanotechnology. These findings stand in stark contrast to earlier studies by American scholars, who found that over half of participants had heard nothing or little about nanotechnology and saw more risks than the participants in the Chinese study (Michelson and Rejeski 2009). Nevertheless, we have to be careful about the potential implications we can draw from this study.


For instance, could it be that the informants in Dalian had only general knowledge about nanotechnology? If they had been provided with more detailed information about nanoproducts before the survey was administered, might they have taken a more conservative view of nanotechnology? Such methodological concerns are crucial because they bear on how the Chinese public can be effectively engaged in the discussion and formulation of ethical guidelines for nano research.

9.4  National Progress

The social image of national progress is embodied in various stakeholders in nanotechnology innovation in China, including the government, industry, researchers, social scientists, and the public. The main driver of the innovation discourse is the government (Fautz et al. 2015). The Communist Party derives its political legitimacy from its ability to guide the Chinese people to a prosperous life through the improvement of their livelihood and the creation of material wealth. The Chinese government’s efforts to actively publicize science and technology as a vital means of progress have contributed to the public’s optimistic attitude toward nanotechnology (Zhang et al. 2016). Jarvis and Richmond (2011) further specify that the image of national progress is mainly reflected in China’s domestic economic transformation and its global economic competitiveness:

The discourse framing China’s pursuit of nanotechnology is tied intimately to a national political agenda…Nanotechnology research and development thus operates under the burdens of expected national economic transformation, the delivery of substantial commercial outcomes, the development of a knowledge-based economy, a reduction in China's technology dependence, and the flagship of China's ambitions to assume global leadership in science and technology. Public perceptions of nanotechnology thus tend to be shaped in relation to sustaining and increasing national economic well-being, the prospective assumption of global leadership in cutting-edge technologies and science, and improving the quality of life for Chinese citizens.

Such an overtly nationalist view of nanotechnological development further reinforces the public’s optimistic view of nanotechnology. Public perceptions of nanotechnology “tend to be celebrated in concert with a ‘rising China’ and as evidence of China’s destiny to assume a global leadership role” (Jarvis and Richmond 2011, 7). Compared to Europe, China has experienced more public trust in the government and scientists (Dalton-Brown 2012). When Western scholars criticize the Chinese government for not including the public in policymaking, the Chinese public may dismiss their concern and say, “Westerners often worry too much; we trust our government and scientists.” In return, public trust in government and expertise may further reinforce scientists’ sense of professional and technocratic responsibility. It may also make scientists think that public participation is unnecessary, if not harmful. In an interview conducted by Jarvis and Richmond (2011), the head of the NSCNN explained that nanotechnology is highly technical and requires specialist knowledge, and that the public therefore lacks sufficient technical capability to assess potential risks or participate in technical discussions.


Involving the public in the ethical governance of nanotechnology can thus be viewed as a dangerous move, because the public’s irrationality, misconceptions, and lack of scientific training can be harmful to effective public discussions. In the report “The Outline of National Medium- and Long-Term Science and Technology Development Plan (2006–2020),” drafted by the Ministry of Science and Technology in 2006, the idea of national progress was employed to justify the significance of developing nanotechnology in the global context:

Nanoscience is one of the most promising areas where leap-frog development is possible since nanotechnologies had the potential to ‘give birth to a new technology revolution, and create huge development space for materials, information, green manufacturing, biology, and medicine in China.’

Researchers, social scientists, and the public share a common vision and conception of national progress (Fautz et al. 2015). The social image of national progress, mainly created by the government, further shapes the research agenda of Chinese nano scientists as they contextualize the idea of national progress in their everyday laboratory work: they are eager to perform well in international science and technology competitions and aim to increase the global impact of their research (Fautz et al. 2015). As pointed out by Dr. Yuliang Zhao, the Director of the National Center for Nanoscience and Technology, “this [carrying out more-extensive safety studies and improving regulatory oversight of synthetic nanomaterials] is the only way to maintain the competitiveness of China’s nanotechnology sector… we certainly do not want safety issues to become a trade barrier for nano-based products” (Qiu 2012). In the newly released “Chinese Code of Conduct for Nanotechnology Research and Development,” of which Dr. Zhao was a principal rapporteur, safety is defined as the first ethical principle. Nevertheless, lacking access to the full text of the report, we cannot know whether the justification for the safety principle is connected to the image of national progress. It is interesting to see that the social image of national progress serves as a tool for framing the importance of safety concerns in developing nanotechnologies, as the “occurrence of hazardous or anomalous situations could threaten progress and affluence” (Fautz et al. 2015, 134). Like researchers, Chinese companies expect their improved products to be conducive to international competition. They have primarily benefited from the public’s appreciation of technological applications that symbolize societal progressiveness and individual wealth (Dalton-Brown 2012). Western scholars have expressed concerns about the tension between the increasingly crucial role played by China in global nanotechnology development and the general concern held by the Western public about the quality of Chinese-made nanoproducts (especially among Westerners who have followed scandals involving tainted pet food, toothpaste, children’s toys, and drugs) (Dalton-Brown 2012; Michaelson 2008).


It is not surprising that policies informed by the precautionary principle (prevalent in Western democracies) have not been dominant in China’s efforts to regulate nano research and development, given the dominant role of the national progress image. Developing nanotechnology is at the top of the Chinese nation’s political agenda, and nano research as a national project takes precedence over other research programs.

9.5  Conclusion

To a large extent, the three social images that have contributed to the discourse on ethical guidelines for responsible nano research and development in China interact with one another (see Fig. 9.1). First, technical experts and their expertise often play a crucial role in leading the discussion and construction of guidelines for nano research and development. Nano researchers, especially CAS scientists, have been proactive in formulating regulatory guidelines for nano research. Experts on the NSCNN collect scientific evidence and provide evidence-based policy recommendations to the State Council and its affiliated administrative ministries and funding agencies. Their active and dominant role has shaped the discourse on risk and ethics associated with nanotechnologies. Discussions of nanoethics have thus become mainly technical: the ethical examination of nanoproducts is focused on the toxicological characteristics of nanotechnologies and their environmental, health, and safety (EHS) impacts. Non-technical impacts (such as social justice and equity issues) are often marginalized in these discussions. This technocratic approach to policymaking excludes the public from participating in discussions on regulating nano research and development. Chinese technocrats may argue that public participation can be both unnecessary and disruptive for public discussions because the public often lacks sufficient professional training to understand many nanotechnological concepts. They believe that technocrats are comparable to elected politicians in the West, whose professional ideal is to use expertise to do good for the public.

Fig. 9.1  Interactive relationships between the three social images: technocracy, public optimism, and national progress


Second, in the Western context, public participation has widely been considered a necessary or indispensable component of the ethical and policy governance of technologies. As discussed earlier, studies have shown that the Chinese public demonstrates an optimistic attitude toward nanoproducts. That optimistic attitude has made the Chinese public less interested in, and less motivated to participate in, the ethical governance of nanotechnology. The Chinese public has a more positive attitude toward the role of expertise in shaping society. In part, that optimistic attitude toward technology has deep intellectual roots in scientism and Marxism in modern China. The voluntarist view embedded in the ideological education of Chinese scientists has created a belief among scientists and the public that the power of science lies in its capacity to remold the objective world and mitigate the potential negative impacts of technologies. Nevertheless, it is interesting that several science communication and policy studies have shown the Chinese public to be less optimistic about GMOs than about nanotechnology (Dalton-Brown 2012).

Third, the social image of national progress has shaped the discussion and construction of guidelines for nano research and development in China. Development in nanotechnology is not solely about economic returns or market values but is also relevant to the true independence of the Chinese nation. Even though Chinese politicians and scientists have seen the economic value of nanoproducts, their primary justification for developing nanotechnology is that material wealth and improved livelihood can further enhance the legitimacy of the Communist Party. Similarly, the discourse on nano safety is shaped and framed by the concern that the low quality (including poor safety performance) of nanoproducts could jeopardize their global competitiveness. The view of technology as a vital means of national progress has contributed to the optimistic attitude toward nanotechnology among the Chinese public.

Beyond the three social images, a couple of other points regarding the politics and cultures of implementing guidelines and standards in China are worth further research. For instance, all three social images discussed here have become strong social forces limiting the public from participating in discussions of the broader, “non-technical” ramifications of nanoproducts. The interactive effects of the three social forces will make public understanding of nanotechnology more challenging. In addition, the centralized political structure of China allows the process of formulating regulations, guidelines, and standards to be effective. However, the implementation of these regulations, guidelines, and standards often relies on local governments, which can sometimes be much less effective, given that local governments often prioritize economic development over environmental, health, and safety impacts (Applebaum et al. 2018). Chinese policymakers, researchers, and social scientists need to be aware of these political and cultural realities of China when learning from Western approaches to the ethical and policy governance of technology and building their own ethical guidelines for responsible nano research and development.


References

Applebaum, Richard D., Cong Cao, Xueying Han, Rachel Parker, and Denis Simon. 2018. Innovation in China: Challenging the global science and technology system. Cambridge: Polity Press.
Bai, Jing. 2009. Nano regulation and innovation: The role of humanities and social sciences. Chinese Medical Ethics 22 (2): 158–160.
Dalton-Brown, Sally. 2012. Global ethics and nanotechnology: A comparison of the nanoethics environments of the EU and China. NanoEthics 6 (2): 137–150.
Engelhardt, H. Tristram, and Fabrice Jotterand. 2004. The precautionary principle: A dialectical reconsideration. The Journal of Medicine and Philosophy 29 (3): 301–312.
Falkner, Robert, and Nico Jaspers. 2012. Regulating nanotechnologies: Risk, uncertainty and the global governance gap. Global Environmental Politics 12 (1): 30–55.
Fautz, Camilo, Torsten Fleischer, Ying Ma, Miao Liao, and Amit Kumar. 2015. Discourses on nanotechnology in Europe, China and India. In Science and technology governance and ethics: A global perspective from Europe, India and China, ed. Miltos Ladikas, Sachin Chaturvedi, Yandong Zhao, and Dirk Stemerding, 125–143. Cham: Springer.
Hansson, Sven Ove. 2004. Fallacies of risk. Journal of Risk Research 7 (3): 353–360.
Jarvis, Darryl S.L., and Noah Richmond. 2011. Regulations and governance of nanotechnology in China: Regulatory challenges and effectiveness. European Journal of Law and Technology 2 (3). Accessed March 9, 2019. http://ejlt.org/article/view/94/155.
Li, Xiaoxuan, Kejia Yang, and Xiaoli Xiao. 2016. Scientific advice in China: The changing role of the Chinese Academy of Sciences. Palgrave Communications 1: 8. Accessed March 9, 2019. https://www.nature.com/articles/palcomms201645.
Lyn, Tan Ee. 2009. Deaths, lung damage linked to nanoparticles in China. August 19. Accessed March 9, 2019. https://www.reuters.com/article/us-china-nanoparticles/deaths-lung-damage-linked-to-nanoparticles-in-china-idUSTRE57I1Y720090819.
Michaelson, E.S. 2008. Globalization at the nano frontier: The future of nanotechnology policy in the United States, China, and India. Technology in Society 30 (3–4): 405–410.
Michelson, Evan S., and David Rejeski. 2009. Transnational nanotechnology governance: A comparison of the US and China. In Nanotechnology & society: Current and emerging ethical issues, ed. Fritz Allhoff and Patrick Lin, 281–299. Dorchester: Springer.
Qiu, Jane. 2012. Nano-safety studies urged in China. Nature 480: 350.
———. 2016. Nanotechnology development in China: Challenges and opportunities. National Science Review 3 (1): 148–152.
Simon, Denis Fred, and Merle Goldman. 1989. Science and technology in post-Mao China. Cambridge, MA: Harvard University Press.
Wang, Guoyu. 2013. On the feasibility of nanotechnology: A Chinese perspective. In Philosophy and engineering: Reflections on practice, principles and process, ed. Diane P. Michelfelder, Natasha McCarthy, and David E. Goldberg, 365–375. Cham: Springer.
Wang, Ning. 2018. Translation and revolution in twentieth-century China. In The Routledge handbook of translation and politics, ed. Fruela Fernández and Jonathan Evans, 467–479. Abingdon: Routledge.
Wang, Jiangao, and Hongxuan Ren. 2018. The Chinese code of conduct for nanotechnology research and development published. Science and Technology Daily, October 19. Accessed March 31, 2019. http://www.stdaily.com/02/difangyaowen/2018-10/19/content_722120.shtml.
Zhang, Jing, Guoyu Wang, and Deming Lin. 2016. High support for nanotechnology in China: A case study in Dalian. Science and Public Policy 43 (1): 115–127.
Zhu, Qin, and Brent Jesiek. 2015. Confucianism, Marxism, and Pragmatism: The intellectual contexts of engineering education in China. In International perspectives on engineering education: Engineering education and practice in context, ed. Steen Christensen, Christelle Didier, Andrew Jameson, Martin Meganck, Carl Mitcham, and Byron Newberry, 151–170. Dordrecht: Springer.


Qin Zhu  Assistant Professor, Department of Humanities, Arts & Social Sciences, Affiliate Faculty in the Department of Engineering, Design & Society and the Robotics Graduate Program, Colorado School of Mines, USA; [email protected]. Qin Zhu works primarily in two fields: engineering education and the ethics of technology. In particular, he is interested in issues such as (1) how values, ideologies, assumptions, and biases are communicated in both formal and informal engineering education settings; and (2) the ethical issues arising from the employment of emerging technologies such as robotics and crowdsourcing technologies in work, education, and society.

Chapter 10

The Historical Process and Challenges of Medical Ethics Codes in China

Hengli Zhang, Siyu Sha, and Yuying Gao

Abstract  Chinese medical ethics have become standardized in recent years, transforming from a focus on individual famous physicians’ concepts of medical ethics to a more unified understanding of professional medical ethics. In the areas of human embryonic stem cell research and clinical medicine, codes of Chinese medical ethics are gradually improving and becoming an essential part of the internationalization of medical ethics. This paper systematically describes the background, formulation process, and challenges in the development of two sets of ethical guidelines, the “Ethical Guidelines for Human Embryonic Stem Cell Research” and the “Ethical Review of Biomedical Research Involving Human Beings.” The development of medical ethics in China should not only guide the development of biotechnology but also strengthen the ethical governance of the scientific, medical, and bioethics communities and the government agencies that oversee their work.

Keywords  Chinese medical ethics · Ethical codes and guidelines · Embryonic stem cell research · Bioethics

10.1  Introduction

Medical ethics in China, which originated from medical morality, developed quickly after feasible, effective principles and criteria were established, rendering research and innovation in the life sciences and medical technologies more prudent and standardized. This article reviews and reflects on the historical path that medical ethics in China has taken, as well as the problems and challenges it has encountered. It also attempts to analyze the necessity of standardizing medical ethics in China in order to further and improve its development.



The term “medical morality” refers to the virtues that doctors and practitioners were expected to have in traditional Chinese society (Zhang and Li 2011). As time passed, these virtues became embedded in medical practice, and the following moral principles became increasingly prominent: saving life (Chen 2007), showing respect for life and patients (Li 2003), and taking the patient’s perspective (Chen 2007). For example, as Zhang Zhongjing, a famed doctor of the Eastern Han Dynasty, stated, “Doctors first give treatment to their parents and seniors, then offer medical assistance to the poor, and lastly delve into medicine to get the essence on how to maintain good health” (1986). In the Canon by Huangdi, another classic medical text, there is a clear statement about the value of life: “Life is the most valuable thing on earth and humans live it merely once” (Ryden 1997). Similarly, according to Sun Simiao in the Tang Dynasty, “Human life is more precious than any other material possession” (Shanghai College of Traditional Medicine 1980). As can be seen, traditional medical morality operates on the principles of humanity, love, and respect, with emphasis placed on a doctor’s responsibility to patients over profit.

With the introduction of Western medicine into China as part of the transition from a traditional society to a modern one, China embarked on the modern path of medical practice (Gong 2019). In 1926, the Journal of Chinese Medicine published the “Code of Medical Ethics.” Written by the China Medical Association, this code included specific statements making it clear that priority should continue to be given to humanity rather than to the doctor’s financial benefit. This codification of the concept of humanity over economic benefit indicates that ethical thought in China was beginning to reflect bioethical principles similar to those of its international counterparts. These principles were already present in traditional Chinese medical practice but were now being adopted formally by the Chinese Medical Association. In June 1933, the Shanghai Guoguang Bookstore published the book Professional Ethics in Medicine, authored by Song Guobin (1893–1956), a pioneer in the field of medical ethics in China. His book, the first specialized work in this field, signified China’s transition from traditional medical ethics to modern medical ethics by attempting to build an ethics system adapted to Chinese culture and traditional Chinese medical practice.

The founding of a new China witnessed rapid development in medical ethics and a proliferation of publications and guidelines. For this article, we will focus on modern developments starting in the 1980s. In 1981, the first national conference, held in Shanghai, opened the door for scientific study in this field. Two years later, the People’s Health Press published a book entitled An Introduction to Medical Ethics. In 1987, Life Ethics by Qiu Renzong further promoted the field’s development. During this same period, China’s Ministry of Health issued a series of documents, including Moral Standards for Medical Practitioners and Measures for Implementation (1988), Ethical Guidelines for Human Assisted Reproduction Technology and Human Sperm Bank (2004), and Measures for Ethical Review of Biomedical Research Involving Human Beings (Trial) (2007). These documents enhanced research and study in medical ethics, offered useful guidelines for its growth, and promoted the moral awareness of medical personnel.
The 1988 Moral Standards outlines general requirements for medical practitioners, while the 2004 and 2007 documents are far more technical in nature due to their narrower subject areas.


Taken together, these guidelines fostered healthy and sustainable growth in medical science as well as in medical practice in China. With the dramatic growth of the life sciences in the twentieth century, new biotechnology has been widely used in medical research, including research on human embryonic stem cells, neuroscience, and deep brain stimulation. This, in turn, has quickened the pace of adopting guidelines and improving the committees that review the ethical conduct of medical research. These measures, combined with joint efforts by practitioners and relevant institutions, should establish a feasible and effective system that guides doctors and patients. Meanwhile, these guidelines have been amended in response to changes in actual practice to ensure their standardization, prudence, and scientific merit.

10.2  Ethical Guidelines for Human Embryonic Stem Cell Research

Human embryonic stem cells, one of the frontiers of scientific research, are finding significant applications in medicine. Research on human embryonic stem cells is producing breakthroughs worldwide. For example, in China, the research team led by Dr. Deng Hongkui and Dr. Zhao Yang has made revolutionary achievements in pluripotent stem cells induced from mouse somatic cells by small-molecule compounds (Xiang et al. 2019). Their findings opened an entirely new path for somatic reprogramming, one that may treat some severe diseases by regenerating damaged tissue. As a result, people expect more of human embryonic stem cells in research, believing that such cells should have broader applications. However, there is controversy concerning such issues as an embryo’s moral and legal status. Such controversies have created a need for a consensus on how to carry out stem cell research, especially human embryonic stem cell research.

10.2.1  Debate on Human Embryonic Stem Cell Research (Background of Guidelines)

Human embryonic stem cells are the undifferentiated cells of the initial stages of human embryonic development. At the blastocyst stage, the embryo contains about fifty “inner cell mass” cells, i.e., cells with full differentiation potential. These are further transformed into three embryonic layers (inner, middle, and outer) that eventually differentiate into tissues and organs, thus forming a complete organism. If the “inner cell mass” of the blastocyst stage is taken out and cultured in vitro, pluripotent embryonic stem cells (E.S. cells) can be obtained (Jin 2002). Research on mouse E.S. cells suggests that these cells can differentiate and develop into distinct cells and tissues when induced by specific growth factors in vitro (Xie 2000).


Similarly, if human E.S. cells can be obtained, researchers can explore the rules and conditions for the directed differentiation of E.S. cells into different cells and tissues by changing the culture conditions in vitro, and later transfer them to patients to repair damaged cells, or even cultivate whole organs in vitro for transplantation. This would be of great significance for patients with various cancers, immunodeficiencies, and other diseases involving the necrosis of cells, tissues, and even organs.

At present, human embryonic stem cells are derived in three ways: first, from surplus blastocysts produced during IVF treatment; second, from aborted fetuses; and third, from embryos created by somatic cell nuclear transfer (SCNT) (Zhai 2001). In this third method, embryos can be generated from a patient’s own differentiated adult cells through cloning technology. The human embryo experimentation and cloning technology involved in these three processes for obtaining human embryonic stem cells have aroused widespread ethical controversy.

The debate concerns two issues (Huang 2016). The first is whether human embryos should be used in medical research. Are embryos regarded as human beings (or potential life) or as mere groups of cells for research purposes? Should human embryos be respected as human beings? The second arises when obtaining embryonic stem cells using the third method: whether stem cells obtained by therapeutic cloning should come from a patient’s own somatic cell nuclear transfer (SCNT). Admittedly, therapeutic cloning helps eliminate immune rejection because of identical DNA coding and gene phenotype. However, some scientists are concerned that this technique’s widespread use might open the door to human cloning. Without special standards, humans would open a Pandora’s box. British biologist Ian Wilmut, the father of sheep cloning, asserts that cloning human embryos is especially important. According to him, “…ethical problems do exist, and there are chances that embryos develop into a human. However, it could not be defined as human because it has not split out a nervous system. So I think embryonic cells can be used for therapeutic cloning” (Zhang 2000). However, many Christians insist that embryos should be counted as human life and that the use of human embryos is morally wrong. As Thomas Winning, the Roman Catholic Archbishop of Glasgow, put it, “It is morally wrong to extract stem cells from human embryos because it ruins a life” (Qiu 2001). In a meeting with President Bush of the United States in July 2001, Pope John Paul II echoed this sentiment: “There must be no funding for scientists who conduct embryonic stem cell research because they destroy lives and ethics” (Qiu 2001). Despite continuous and widespread controversy, stem cell research continues to spread. For example, in 2001, the United Kingdom declared that early human embryo cloning is allowed for medical purposes and that in vitro human embryo research can be carried out within 14 days of embryo survival. In other countries, this practice is illegal. China agrees with the U.K.’s 14-day limit (Chen and Hui 2018). Following international norms, the Chinese government has striven to help doctors and scientists become aware of ethical issues surrounding the use of human embryos in research by formulating ethical norms and legal provisions.


China has ethical concerns about stem cell research and hopes to establish guiding principles backed by a unified authority. It is committed to the development and improvement of guidelines in this area.

10.2.2 Development of Ethical Guidelines for Human Embryonic Stem Cell Research

As stem cell research has developed, Western countries have formulated corresponding ethical and legal norms. These include the U.K.'s Human Fertilisation and Embryology Act of 1990, the German Embryo Protection Act of 1991, the French Bioethics Law of 1994, and the U.S. National Institutes of Health "Guidelines for Research Using Human Pluripotent Stem Cells" of 2000. The widespread use of these ethical norms in human embryonic stem-cell research, combined with Western countries' refusal to cooperate with scientists from countries that have no such norms, led more countries to establish their own ethical and legal guidelines. The ethical principles governing the Chinese use of human embryonic stem cells in research originated from the international dissemination of bioethics. Domestic ethical awareness has been dramatically enhanced by the promotion of relevant research and the offering of bioethics courses for university students. These developments helped lay a foundation for China to formulate independent ethical norms for human embryonic stem-cell research (Zhai 2019). The formulation of ethical norms proceeded according to the following rules. First, ethical norms should refer to the relevant laws and documents issued by Western or international organizations. Second, the norms should probe the moral status of embryos and the principle of informed consent. Third, the norms should discuss specific issues around the principle of informed consent, including the influence of traditional Confucian culture, which values harmony, on how informed consent is accurately understood and executed in practice. Following this method, the National Medical Ethics Committee and the Ethics Committee of the Southern Research Center of the National Human Genome1 proposed model norms. This proposal led to a series of actions. First, the Ethical Principles and Management Recommendations for Human Embryonic Stem Cell Research was jointly drafted in 1999 by the Research Center of Applied Ethics of the Chinese Academy of Social Sciences, the Chinese Academy of Medical Sciences, the Bioethics Research Center of Peking Union Medical College, the Medical Department of Peking University, and the Ethics Committee of the Human Genome Project of China. After the draft proposal was finished, it was reviewed by the Panel of Medical Ethics and finally submitted to the Ministry of Public Health.

1 The National Human Genome Southern Research Center is a scientific research institute cosponsored by scientific research institutes of related fields in Shanghai. It is designed to do research into the human genome and explore applications. (The National Human Genome North Research Center is a national genome research institute approved by the Ministry of Science and Technology.)


The Ethical Principles specifies the principles to be followed in human embryonic stem cell research and the allowable sources of human embryonic stem cells. No research is allowed without a license from the Ministry of Science and Technology or the Ministry of Health, and no research may be conducted by units below the provincial level. The Ethical Principles also asserts that violations should be punished. Second, another document, the Ethical Guidelines for Human Embryonic Stem Cell Research (draft recommendation), was adopted in 2001 by the ethics committee of the Southern Research Center of the National Human Genome in Shanghai. It stipulated the ethical principles for human embryonic stem-cell research, including benevolence, respect, self-reliance, not-for-profit, avoiding harm, informed consent, prudence, and confidentiality. It also specified requirements for stem cell sources (Article 12) and principles for embryonic stem-cell research (Articles 13 and 14). Moreover, it emphasized that an ethics committee should review and supervise each study and that human embryonic stem-cell research should conform to the relevant international statutes, declarations, and guidelines promoting international cooperation in stem cell biotechnology. Third, the Ministry of Science and Technology and the Ministry of Health jointly formulated the Guiding Principles of Ethics for Human Embryonic Stem Cell Research on 24 December 2003. This established the corresponding ethical and legal norms to ensure the healthy and rapid development of human embryonic stem-cell research in biomedicine in China.

10.2.3 Problems with and Suggestions for the Ethical Guidelines for Human Embryonic Stem-Cell Research (Guidelines)

The Guidelines generally abide by relevant international and domestic ethical norms, explicitly prohibiting human reproductive cloning and human embryo research beyond 14 days after conception. They also oppose the implantation of research embryos into the reproductive systems of humans or other animals and the combining of human germ cells with the germ cells of other species, and they prohibit the sale of human gametes, fertilized eggs, embryos, and fetal tissues. Moreover, they emphasize the principles of informed consent, informed choice, and privacy and advocate the establishment of ethics committees. These guidelines are of considerable significance for standardization and push forward the civilized development of human embryonic stem-cell research, but they also leave some serious problems unaddressed. Three issues concerning the content of the Guidelines are of particular concern. The first issue concerns the rigor of the Guidelines. Technical terms such as "blastocyst," "parthenogenetic blastocyst," and "genetically modified blastocyst" are used repeatedly in the document without clear definitions. There are also some inconsistencies in the provisions.


For example, Article 5 (see note 2) lists the sources of human embryonic stem cells without mentioning the technologies of parthenogenesis and genetic modification, while Article 6 (see note 3) adds that the technologies of "parthenogenesis and genetic modification" have not been fully discussed and ethically examined. The second issue concerns the professionalism of the Guidelines. "Ethics" is mentioned many times in the document, and Article 9 (see note 4) advocates the establishment of ethics committees, yet there is no requirement that ethics researchers serve on them. A 2019 literature review of Chinese medical ethics committees found a lack of ethical competence among committee members, stemming from their limited exposure to ethics education and the absence of compulsory training for many members (Wang et al. 2019, 4638). Against this background, ethics committees without ethicists are unlikely to apply ethical theories and principles in their reviews, making it difficult for them to identify and respond professionally to specific ethical issues. The third issue concerns the Guidelines' enforcement: the document specifies no penalties, nor how to deal with the consequences of violating its provisions. These three problems also arise in implementing the Guidelines. First, the principle of informed consent is stated vaguely, allowing China and the West to interpret informed consent in quite different ways. For example, when conducting informed consent and risk-benefit analysis in the U.S., according to the NIH Guidelines for Research Using Human Pluripotent Stem Cells, it is necessary to disclose the commercial potential of the research being performed. Those guidelines also include statements that donors cannot benefit financially from the research, that the research does not necessarily provide medical benefits for donors, that donated embryos will not be transferred to a woman's uterus, etc. (NIH 2000). By contrast, according to the Chinese Guidelines, researchers should notify subjects, in accurate and clear language, of the anticipated object of the research, its possible consequences, and the risks they may face as participants before obtaining their consent and signature (and before the experiment). Such emphasis on the basic content of the principle of informed consent fails to provide detailed and comprehensive regulations.

2 "Human embryonic stem cells for research can only be obtained in the following ways: (1) excess gametes or blastocysts during in vitro fertilization; (2) fetal cells from spontaneous or voluntarily chosen abortions; (3) blastocysts and monozygotic blastocysts obtained by somatic cell nuclear transfer technology; or (4) germ cells that are voluntarily donated." (Ethical Guidelines for Human Embryonic Stem Cell Research, Article 5)
3 "In carrying out research on human embryonic stem cells, the following three rules must be observed: (1) blastocysts obtained by in vitro fertilization, somatic cell nuclear transfer, parthenogenetic reproduction, or genetic modification shall not be nurtured for more than 14 days from the beginning of fertilization or nuclear transfer; (2) human blastocysts obtained according to the preceding paragraph shall not be implanted into the reproductive system of a human being or any other animal; (3) human germ cells shall not be combined with germ cells of other species." (Ethical Guidelines for Human Embryonic Stem Cell Research, Article 6)
4 "The research institutes engaged in human embryonic stem cell research should set up ethics committees composed of researchers and managers from biology, medicine, law, or sociology. Their duties are to conduct comprehensive review, consultation, and supervision of the ethics and scientific merit of human embryonic stem cell research." (Ethical Guidelines for Human Embryonic Stem Cell Research, Article 9)


Second, the Chinese Guidelines do not require diversity and professionalism in committee membership. In 2000, the World Health Organization (WHO) proposed that ethical review committees should include individuals with a variety of backgrounds, including biomedical research, ethics, philosophy, law, sociology, and psychology (Hu et al. 2005). However, as mentioned above, the ethical review committee described in the Guidelines does not require an ethicist as a member. A committee without an ethicist is unlikely to make accurate ethical value judgments in its review, which would likely lead to an imbalance between the scientific review and the ethical review of proposed research. Third, the Guidelines suffer from the ineffective implementation of the national policy of decentralization. In the process of establishing and perfecting the mechanism of ethical review, "ethical governance," an important concept introduced into biomedicine, has a wide range of applications. European experts emphasize that "governance" refers to cooperation, coordination, and consultation, not only between national organizations (such as government departments, municipalities, courts, etc.) but also among a large number of non-governmental organizations (scientific institutions, medical institutions, lawyers, academic journals, patient groups, etc.). It includes not only written rules but also informal working practices, mutual supervision among peers, and the like. By contrast, the formulation of ethical management and supervision norms in China is relatively centralized in government departments, and non-governmental organizations do not take an active role. Promoting the reform of government functions and implementing the policy of "release, management, [and] service" (simple administration and decentralization, management, and service optimization)5 would therefore improve the level of "ethical governance" in China. However, due to a lack of sound ethical supervision mechanisms for embryonic stem-cell research and a failure to transform government functions and execution, the Guidelines fail to strike the right balance between governance and decentralization. With the further deepening of government reform, it is thus of considerable significance for the cause of stem-cell research to establish and improve relevant ethical supervision systems, laws, and regulations. In summary, some parts of the Ethical Guidelines for Human Embryonic Stem Cell Research deserve further discussion. Even though a revised version has not yet been published, China's research on the ethical issues of stem cells is still advancing, and the standards and policies that have been successively released improve how Chinese researchers handle the ethical issues raised by the use of embryonic stem cells in research.

5 The term "decentralization, management, service" is short for simplifying administration and decentralizing power, combining decentralization with management, and optimizing service. "Decentralization" means simplifying administration, decentralizing power, and lowering access thresholds; "management" means innovating supervision and promoting fair competition; "service" means working efficiently and creating a favorable environment for the appropriate activities.

One improvement is the Management of Clinical Application of Medical Technology, published by the Ministry of Health in March 2009, which prohibits, for the time being, the use of medical techniques such as cell cloning in clinical research. This ensures both the scientific quality of stem-cell research and the protection of patients' rights and interests. In 2013, the Ministry of Health and the State Food and Drug Administration jointly issued the Measures for the Management of Stem Cell Clinical Trial Research, the Measures for the Management of Stem Cell Clinical Trial Research Bases, and the Guidelines for Quality Control and Pre-clinical Research of Stem Cell Preparations to regulate and guide stem cell research techniques with greater caution. The Standards of Ethical Review Committee for Human-related Clinical Research, to be released in 2019 by the Chinese Hospital Association, formulates independent and detailed regulations for the ethical review of stem-cell clinical research to promote the rapid and healthy development of stem-cell technology in China.

10.3 Ethical Review of Biomedical Research Involving Human Beings

With increasing demands for research innovation, new information and technological advances in biomedical research continue to emerge at a startling speed. Advances in assisted reproduction and embryonic stem-cell research allow humans to fight diseases more effectively. However, human beings are also confronted, more than ever, with technical risks and ethical challenges. In these circumstances, ethical review has been put on the agenda to standardize the ethics of bioresearch in various fields. Ethical review aims at safeguarding the dignity, rights, safety, and welfare of all actual or future research participants, guaranteeing subjects' safety in every possible sense. Furthermore, an effective mechanism, including specific guidelines and principles, is needed to resolve disputes and to ensure the healthy development of biomedical research in China by providing strong policy support and a theoretical foundation. In January 2007, the former Ministry of Health promulgated the Measures for Ethical Review of Biomedical Research Involving Human Beings (referred to as the Measures for Trial Implementation). It was officially approved after trial and gradual implementation over more than a decade, and it has been continuously adjusted and improved in light of practical problems. A revised edition of the Measures for Ethical Review of Biomedical Research Involving Human Beings was released in 2019 (Zhai 2019). The revised Measures, after a series of amendments over the past decade, is an institutionalized achievement of the National Ethics Review Committee. It is designed to promote the overall development of China's ethics review mechanism and to ensure the healthy growth of biomedical research in China in the long run.


10.3.1 Background of the Measures (Trial Implementation)

Rapid advances in the depth and breadth of China's domestic biomedical research have led to a rapid increase in large-scale international cooperation. However, developed Western countries attach much more importance to ethical review and ethical review committees than developing countries do. Ethical and scientific standards for research involving human subjects are set out in documents such as the Declaration of Helsinki (WMA 2018), the International Ethical Guidelines for Biomedical Research Involving Human Subjects (CIOMS 2016), and the International Council for Harmonisation (ICH) Guidelines for Good Clinical Practice (GCP) (2019), among other international guidelines. According to one Chinese scientist, there is still a big gap between domestic practices and international standards in obtaining written informed consent, reporting adverse events, and the way ethical review committees review research proposals (Zhai 2004). Therefore, the international community6 maintains that the institutes with which it cooperates should operate on the same ethical principles that it follows. To keep up with global trends, some domestic medical research institutions have established their own ethical review committees and laid down ethical review standards. However, because these standards are far from uniform, this local initiative is not conducive to the development of reliable oversight mechanisms. The International Bioethics Committee of UNESCO declared:

Considering underdeveloped laws and supervision mechanisms in developing countries and the weak awareness of self-protection among local people, international research projects funded by developed countries should not be allowed to proceed in these countries in contempt of their people's right to the truth. Moreover, what is prohibited in developed countries should also be prohibited there. (People's Republic of China, State Health and Family Planning Commission 2016)

However, due to ineffective enforcement in China, unethical incidents still occur in biomedical research in which scientific interests override those of human research subjects. The World Health Organization (WHO) also asserted in its Operational Guidelines for Ethics Committees That Review Biomedical Research (2000) that "countries, units, and communities should strive to establish ethics committees and ethics review systems… The ethics committee needs both administrative and financial support." Therefore, the Chinese National Ethics Committee decided to formulate unified, specific norms and guidelines for the development of biomedical research. In 2007, the Ethical Review Measures for Biomedical Research Involving Human Beings (Trial Implementation) was released. It aims to enhance China's ability to conduct independent ethical review of biomedical research and to establish China's ethics review system and ethics committees. It also strives to promote health services and make a considerable contribution to the international community.

6 Medical research institutes in developed Western countries, such as the United States National Institutes of Health (NIH) and the British Medical Research Council (MRC).


10.3.2 Problems with the Measures (Trial Implementation) and Amendments to It

In 2007, the Ethical Review Measures for Biomedical Research Involving Human Beings (Trial Implementation) was formulated and published. The Measures (Trial Implementation) not only played a key role in standardizing biomedical research involving human beings in China but also greatly enhanced international cooperation in scientific research in this field. Over the following decade, however, some problems with the document drew wide attention and deserve further discussion; the Measures requires constant updating to remain sufficiently rigorous. There are three significant problems. The first concerns legal validity. The Measures is a set of normative guidelines issued by the former Ministry of Health; it does not function as a legal document. The second problem concerns how the mechanism of ethical review in biomedical research has operated in practice. Because of unbalanced economic conditions across regions, the development of ethical review and people's awareness of science and technology vary greatly. In some areas, therefore, human research subjects are insufficiently aware of the ethical issues that might arise when they participate in a research study. Third, the statement of the regulations remains too vague. The content of the Measures (Trial Implementation) focuses on three parts: the ethics committee, the review process, and supervision. Though it attempts to ensure protection of and respect for people's legitimate interests and rights, it is too general. For example, the Measures (Trial Implementation) states that the process of informed consent can be waived at researchers' request (People's Republic of China, Ministry of Health 2007, Article 10), but it prescribes no specific conditions under which such requests are allowed. To address these problems, the National Health and Family Planning Commission hosted a summit conference on the Measures on 30 September 2016. One outcome of the conference was the revised Measures for Ethical Review of Biomedical Research Involving Human Beings (also referred to as the Measures), which replaced the 2007 Measures on 1 December 2016. The new version has the following advantages over the old one. First, its administration is escalated from the department level to the ministry level. This change, to some extent, enhances the mandatory and authoritative nature of the Measures, which contributes to clarifying legal responsibilities and better protecting the legitimate rights and interests of research subjects. Second, specific requirements are set for the ethics committee: members are required to receive regular training on ethics in biomedical research and on related laws and regulations, and medical and health institutions should not conduct biomedical research involving human beings without first establishing ethics review committees (Articles 7 and 10). Third, its contents are enriched: five chapters in the old version become seven in the new one, and 30 articles become 50, making the content, regulations, and principles more specific.


The new Measures also underscores the importance of consulting subjects for informed consent and protecting their rights (Articles 38 and 39). For example, there are clear statements about the circumstances under which exemption from informed consent is allowed, and there are requirements that subjects be notified of the objective, significance, and expected effect of the research project, potential risks and discomfort, and possible benefits (Article 20). Last, it imposes more stringent regulation and management on research, incorporating new chapters dedicated to legal liability and follow-up review. On the whole, the Measures puts forward more detailed and rigorous criteria. It also gives a more accurate definition of "biomedical research involving human beings," which integrates the study of human psychology and behavior, the collection of data and biological samples concerning human beings, and research in public health. As stated by Wang Jinqian, director of the Science and Education Division of the National Health and Family Planning Commission, the Measures not only regulates the ethical review of biomedical research involving people but also clarifies the legal responsibilities of medical and health institutions and ethics committees at all levels (Tan 2016). It is of both tremendous theoretical significance and practical value, as it serves to protect human life and health, maintain dignity and respect, safeguard the legitimate rights and interests of subjects, and strengthen the development of medical research. Under the guidance of the Measures, biomedical research strives to maximize positive effects in the hope that the integration of biotechnology and medical ethics will promote healthy human lives. Medical associations have drawn up more elaborate regulations to implement the Measures and provide practical guidance effectively. For example, in 2019 the Chinese Hospital Association was expected to issue the Standards for the Construction of Ethical Review Committee of Clinical Research Involving Human Beings (Zhai 2019). It lists concrete qualifications for ethical review committees and their members engaged in clinical research involving human beings and points to a resolution of a new challenge in biomedical research, conflict of interest (China Hospital Association, Part II, Chapter VII). The Standards protects subjects; maintains the relationships between doctors and patients and between researchers and subjects; and safeguards the credibility of medical science. Nevertheless, topics like the conflicts between researchers and subjects, between scientific development and subject protection, and between scientific review and ethical review remain to be resolved in biomedical research (Zhai and Qiu 2019).

10.3.3 Characteristics of and Prospects for the Measures

The past decade has witnessed significant improvements in the development of China's biomedical research ethics review committees, which in turn have contributed tremendously to the scientific and ethical achievements of biomedical research. Once criticized by the international community for weak ethical awareness and poor review methods, the ethical review mechanism for biomedical research in China is now developing rapidly, with unique characteristics.


Ethical review committees and ethical review mechanisms have been established to make independent review and international cooperation possible. Despite its late appearance compared with international standards, the Measures has a unique binding force. Rooted in its cultural and social context, it provides a background against which the mechanism will be continuously updated to better serve the country. It is characterized by two features. First, the Measures highlights the leading agency. The ethics committee of the leading agency oversees the review and confirms the review outcomes of the other participating agencies, whose ethics committees must conduct timely reviews of their research and give feedback to the leading agency (Article 29). All the ethics committees are equal and independent of each other; there is no counterpart to this arrangement in the United States (Li 2016).7 The Measures stresses that ethical review should be independent, impartial, and fair (Article 17), and in practice, in a highly centralized country such as China, this hierarchy helps strengthen the effectiveness and efficiency of the review process. Second, there is a follow-up ethical review (Article 27). This is one feature that separates the Measures from other relevant ethical standards. Each ethics committee is required to take the initiative to conduct a follow-up review with no fewer than two members, to guarantee the quality of the original review. As a result, ethical review committees for biomedical research in China are especially alert to problems uncovered in follow-up review and give timely feedback. In this way, a more scientific and professional mechanism of ethical review is on the horizon in China. Although ethical codes for biomedical research have been established, some researchers, driven by fame and profit, disregard these regulations. Continued efforts are therefore required to enforce higher ethical principles and standards. Biomedical research in China must address the relationship between science, technology, and medical ethics in order to curb researchers' desire for fame and fortune. More importantly, it is necessary to draw on other disciplines to formulate more objective, meticulous, fair, and scientific ethical principles and standards acknowledged by the scientific community. An article published in Nature by the renowned bioethicists Lei Ruipeng, Zhai Xiaomei, Zhu Wei, and Qiu Renzong (2019) puts forward four suggestions for the healthy development of biomedical ethical review:
1. Government should collaborate with the scientific community in general and bioethicists in particular to establish clearer rules and regulations governing the use of emerging technologies prone to abuse, such as gene editing and stem cells, and impose severe penalties on violators.

7 There are commercialized central ethics committees in the United States that can review projects from other hospitals. Unlike the ethics committees of leading institutions in China, however, hospitals in the United States generally use their own ethics committees rather than a central one; if a central ethics committee is needed, the approval of the hospital's own ethics committee must be obtained in advance.


2. Monitor. The National Health Commission and other relevant organizations must monitor all gene-editing centers and medical institutions conducting in vitro fertilization. They should also assess whether ethics approvals and procedures are adequate, whether the use of eggs and embryos is in accordance with the Regulations on Human Assisted Reproduction, and whether any other genetically edited embryos have been transferred into a person's uterus.
3. Register. There should be a national registry dedicated to clinical trials involving such technologies. Scientists should document ethics review and approval and list by name all participating scientists and institutions. A government certification system is needed in which only people with appropriate training can qualify to serve on an ethics review committee.
4. Inform. Research institutions such as the Chinese Academy of Sciences and the Chinese Academy of Medical Sciences could disseminate the relevant rules and regulations for emerging technologies. Participants in experiments should be well informed, and researchers should abide by the "ethical red line" in biomedical research.

10.4 Conclusion

Medical ethics in China started relatively late; it has been only around 20 years since bioethics was established in the country. Despite the existence of ethical review committees and ethical standards, some researchers seek immediate profit and fame in violation of these regulations. Such incidents have pushed China to take a more serious attitude towards the standards of scientific and ethical governance. Comprehensive examination and reflection are necessary to push for continuous improvement in medical research ethics review, medical treatment, and medical decision-making in China. A better ethical review mechanism would promote the integration of medicine, science, technology, and ethics and contribute to the development of biomedical research in China.

Acknowledgments Sincere gratitude to Zhai Xiaomei, Professor at Peking Union Medical College, for granting interviews despite her busy schedule and providing us with much useful information. Thanks to Bai Jing for help with professional knowledge and to Chen Qi, Beijing University of Technology, for collecting information. Sincere thanks to Michael Davis, Elisabeth Hildt, and Kelly Laas for their helpful proposals, and to Kelly Laas for her patient and meticulous edits.


References

Chen, Minghua. 2007. On the modern value of Chinese traditional medical ethics. Chinese Medical Ethics: 2007–2005.
Chen, Rui, and Zhiqiang Hu. 2018. Acceptance and adjustment: Spread of ethics of human embryonic stem cell research in China. Journal of Engineering Studies 10 (5): 518–526.
Council for International Organizations of Medical Sciences (CIOMS). 2016. International ethical guidelines for health-related research involving humans. Geneva: CIOMS. https://cioms.ch/wp-content/uploads/2017/01/WEB-CIOMS-EthicalGuidelines.pdf. Last viewed 19 January 2020.
Gong, Fuqing. 2019. Medical ethics. Beijing: Science Press.
Hu, Qingli, Renbiao Chen, Mingxian Shen, and Xiangxing Qiu. 2005. Recommendations for the establishment of a National Bioethics Committee. Chinese Medical Ethics 18 (2): 25–26.
Huang, Xiaoru. 2016. Ethical controversy in stem cell research and its impact on policy. Journal of Dialectics of Nature 38 (2): 93–98.
International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use. 2019. ICH guidelines for good clinical practice E6(R2). https://www.ich.org/page/ich-guidelines
Jin, Shidai. 2002. The research on human embryonic stem cells and its ethical debate. Acta Universitatis Medicinalis Nanjing 2002 (2): 112–114.
Lei, Ruipeng, Xiaomei Zhai, Wei Zhu, and Renzong Qiu. 2019. Reboot ethics governance in China. Nature, 8 May 2019. https://www.nature.com/articles/d41586-019-01408-y. Last viewed 10 January 2020.
Li, Haiyan. 2003. Confucian ethics and traditional morality. Journal of Wuhan University of Science and Technology (Social Science Edition) 4: 34–38.
Li, Bin. 2016. Interpretation of 'Ethical Review Measures for Biomedical Research Involving Human Beings'. Medical Economics Journal (24 December).
People's Republic of China, Ministry of Health. 2007. Ethical Review Measures for Biomedical Research Involving Human Beings (Trial Implementation). (卫科教.《涉及人的生物医学研究伦理审查办法(试行)》) http://yxky.fudan.edu.cn/cc/ce/c6346a52430/page.htm
People's Republic of China, State Health and Family Planning Commission. 2016. Measures for Ethical Review of Biomedical Research Involving Human Beings. (中华人民共和国国家卫生和计划生育委员会令.《涉及人的生物医学研究伦理审查办法》) http://www.gov.cn/gongbao/content/2017/content_5227817.htm
Qiu, Xiangxing, Renbiao Gao, and Mingxian Shen. 2001. Studies on human stem cells and some related ethical issues. Medicine and Philosophy 22 (10): 54–58.
Ryden, Edmund. 1997. The Yellow Emperor's four canons: A literary study and edition of the text from Mawangdui. Taibei: Guang qi chu ban she, Li shi xue she lian he fa xing.
Tan, Jia, and Tianxiu Wang. 2016. Ethical review: Supporting flexible ethics with rigid system. Health News (16 December). https://kns.cnki.net/KCMS/detail/detail.aspx?dbcode=CCND&dbname=CCNDLAST2017&filename=JIKA201612160050
United States, National Institutes of Health. 2000. Guidelines for research using human pluripotent stem cells. 65 FR 51975.
Wang, Zhu-Heng, Guan Hua Zhou, Li-Ping Sun, and Jun Gang. 2019. Challenges in the ethics review process of clinical scientific research projects in China. Journal of International Medical Research 47 (10): 4636–4643. https://doi.org/10.1177/0300060519863539.
World Medical Association. 2018. WMA Declaration of Helsinki: Ethical principles for medical research involving human subjects. https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/. Last viewed 17 January 2020.


Xiang, Chengang, Du Yuanyuan, Meng Gaofan, Yi Liew Soon, Sun Shicheng, Song Nan, Zhang Xiaonan, Xiao Yiwei, Jie Wang, Yi Zhigang, Liu Yifang, Bingqing Xie, Min Wu, Shu Jun, Sun Da, Jia Jun, Zhen Liang, Sun Dong, Huang Yanxiang, Shi Yan, Jun Xu, Fengmin Lu, Li Cheng, Xiang Kuanhui, Yuan Zhenghong, Shichun Lu, and Hongkui Deng. 2019. Long-term functional maintenance of primary human hepatocytes in vitro. Science 364 (6438): 399–402. https://doi.org/10.1126/science.aau7307. https://science.sciencemag.org/content/sci/364/6438/399.full.pdf.
Xie, Shusheng. 2000. Human genome project and medical model. Medicine and Philosophy 21 (9): 20–21.
Zhai, Xiaomei. 2001. Stem cell research and ethical issues. Medicine and Philosophy 22 (6): 15–17.
Zhai, Xiaomei. 2004. Institutionalization and capacity building of IRB. China Medical News (11 July). http://xueshu.baidu.com/usercenter/paper/show?paperid=ec1fa04de1ae6df10eac382215e12fe3&site=xueshu_se.
Zhai, Xiaomei, and Renzong Qiu. 2019. Beginning with 'Plasmodium treatment of cancer', biomedical research faces these three new challenges. Health News (25 February). http://dy.163.com/v2/article/detail/E8SM24KA05149LPQ.html. Last viewed 10 January 2020.
Zhang, Xiao. 2000. Is it a medical revolution or an ethical crisis? Guangming Daily (21 August).
Zhang, Yun-fei, and Hongwen Li. 2011. From medical morality to medical professionalism: The transformation of traditional medical morality. Medicine and Philosophy 20: 11–14.
Zhang, Zhongjing, and Xiwen Luo. 1986. Treatise of febrile diseases caused by cold (Shanghan Lun). Beijing: New World Press.

Hengli Zhang Professor, Department of Marxism Studies, Beihang University, China; [email protected]. Hengli Zhang works primarily on engineering ethics. His research focuses on the education of engineering ethics and comparative studies of the history of engineering ethics between China and the U.S.

Siyu Sha Master's Candidate, Department of Marxism Studies, Beijing University of Technology, China; [email protected]. Siyu Sha primarily studies engineering ethics, including codes of engineering ethics and the professional responsibility of engineers.

Yuying Gao Assistant Professor, College of Humanities and Social Sciences, Beijing University of Technology, China; [email protected]. Yuying Gao works primarily on foreign language teaching and learning. Her research focuses on CLIL (content and language integrated learning), task-based teaching, and the development of Chinese EFL (English as a foreign language) learners' critical thinking and intercultural communication.

Part III

Introduction: New Approaches to Ethics Codes: Changing Purposes, Differing Views

As discussed in the previous parts, ethical codes and guidelines can have varying levels of impact on practitioners: setting expectations for professional practice, educating new members in the principles necessary to their work, and helping professionals justify and defend their ethical decisions. Issues of language, practicality, and oversight are critical in their development, as is the involvement of various stakeholders (including the public) to ensure that the guidelines do not overlook key ethical issues such as social justice or access to resources. Part III of this collected volume takes a step back from specific ethics codes and guidelines and seeks to put the role of codes of ethics in society into perspective. How impactful are codes in shaping the general public's view of professional ethics? Are codes of ethics, as they are currently written, as helpful as they might be? And what role, if any, might codes of ethics play in the non-professional parts of our lives? To explore an example of how ethics codes are seen and utilized outside of professional contexts, Laas, Hildt, and Wu conducted a study of mentions of codes of ethics on the social media platform Twitter. Are ethics codes used to justify or critique the specific actions of professionals? Do professionals and professional organizations use Twitter to exchange information about ethical practices? Or do ethics codes mean something different in this more public context? In the end, the authors found an unexpected variety of stakeholders using codes of ethics to promulgate their views strategically. The study highlights the role social media could play in reaching a larger population of professionals and engaging them in discussions about ethics, and how this might lead to ethics codes having a more significant role in society. Dennis Cooley's Chap. 12, "The Technology's Fine; It's the Code of Professional Ethics that Needs Changing," questions the effectiveness of ethics codes as they are currently written. Ethics codes are by their very nature imbued with their authors' principles and values, and it is these values that the codes enshrine. Problems arise in instances where these professional leaders, the codes' authors, are ill-equipped to evaluate their profession's handling of emerging technologies. Cooley suggests that ethics codes could be revised using moral psychology to help develop more straightforward, more automatic codes that avoid this pitfall.

These simpler, pragmatic codes would draw on universal moral values, be easy to understand and use, and potentially invite greater collaboration between professionals and ethicists in handling extremely complex cases, such as those involving new and emerging technologies. The concluding chapter steps away from professional codes of ethics and examines how such guidelines can effectively help shape our personal lives. Drawing on personal experience, Briggle discusses how the simple guideline "leave no trace," a principle well known to hikers, campers, and frequenters of U.S. National Parks, can be expanded to include the experiential trace that encounters with the natural environment leave with us. Drawing from environmental philosophy and the philosophy of technology, the author discusses how such an ethic might encompass the moral principles inherent in an ethical life rather than merely constituting a systematic or comprehensive code of ethics.

Chapter 11

Mentions of Ethics Codes in Social Media: A Twitter Analysis

Kelly Laas, Elisabeth Hildt, and Ying Wu

Abstract Ethics codes and ethical guidelines are an established way of installing standards in professions, science, technology, and business. They help institutions and organizations address emerging issues, regulate practice-specific contexts, provide support, and are seen as helpful resources for profession-specific teaching. When it comes to the broader role of ethics codes in society and to the question of how ethics codes are seen and perceived outside professional contexts, however, the picture is much less clear. In order to learn about the broader societal role of ethics codes-related topics, we analyzed mentions of ethics codes on the social media site Twitter between June 2016 and May 2017. This chapter details the results of the study, which examined the frequency, content, and role of tweets containing the search phrases "ethics code," "code of ethics," "professional code," and their plural versions. We used the Twitter streaming application programming interface (API) and STACKS to retrieve the tweets. It turned out that by far the most often used term is "code of ethics," with an overall frequency of around 83,000. Topics discussed in the tweets centered around ethical issues in political journalism, politics, media, and sports. We had assumed we would find an ongoing, illuminating conversation between professionals from all kinds of fields that would give us hints about the current role of ethics codes in an evolving, technology-reliant society; instead, we found a much more diverse conglomerate of stakeholders partly using ethics codes-related tweets to promulgate their views strategically.

Keywords Social media · Twitter · Code of ethics · Ethics code · Professional ethics · Media ethics · Journalism ethics

K. Laas (*) · E. Hildt
Illinois Institute of Technology, Chicago, IL, USA
e-mail: [email protected]; [email protected]
Y. Wu
Zenreach Inc., San Francisco, CA, USA
e-mail: [email protected]
© Springer Nature Switzerland AG 2022
K. Laas et al. (eds.), Codes of Ethics and Ethical Guidelines, The International Library of Ethics, Law and Technology 23, https://doi.org/10.1007/978-3-030-86201-5_11


11.1 Introduction

Ethics codes and ethical guidelines are an established way of installing standards in professions, science, technology, and business. They help institutions and organizations address emerging issues, regulate practice-specific contexts, provide support, and are seen as helpful resources for profession-specific teaching. While there is considerable scholarly work on the role of ethics codes and ethical guidelines in professions, science, technology, business, and education (Davis 1997; Frankel 1989; Mele and Schepers 2013), we still know little about the social contexts in which codes of ethics and ethical guidelines are discussed outside of professional practice, science, technology, and ethics education. When it comes to the broader role of ethics codes in society and how ethics codes are perceived outside professional contexts, the picture is not very clear. In order to learn about the broader societal role of ethics codes-related topics and how these topics are used to justify specific ideas or professional practices, we conducted an analysis of mentions of codes of ethics on the social media site Twitter. We hoped that this study would also allow us to find new codes of ethics being developed by professional associations, businesses, and other institutions, discover the extent to which codes are discussed on this social media platform, and learn how existing codes of ethics change in view of significant discussions in the various fields. The social media site Twitter seemed to be a good source of data about the role of ethics codes in society. In 2016, Twitter had about 319 million active users who posted about 500 million Twitter messages (tweets) per day (Twitter 2016). With roughly 22% of U.S. adults using the site regularly (Perrin and Anderson 2019), Twitter has become a source of data for scholars tracking public conversations on everything from the 2016 presidential elections to the anti-vaccine movement and other controversial topics. Social media platforms facilitate conversations and create connections between users with common interests. It therefore seems reasonable to assume that discussions around ethics codes might also occur in this medium. In growing numbers, professionals and scholars are turning to social media outlets like Twitter to reach audiences beyond their own company, professional society, and geographical area and to engage with colleagues and a wider public (Collins et al. 2016). Twitter itself has been one of the most popular platforms among academics and helps their work reach non-academic audiences (Mohammadi et al. 2018). For this reason, we chose to focus on this social media platform for our study. The following questions shaped our approach: What roles do ethics codes have in social media? Which ethics codes are talked about? How do professionals from various fields discuss ethics codes on social media? How do they see the current role of ethics codes in an evolving, technology-reliant society? Are there new topics and new contexts in which ethics codes are currently being discussed? What is the broader societal role of ethics codes and ethics codes-related topics?


11.2 Methods: Data Collection and Processing

The data was collected through Twitter's public streaming application programming interface (API) from June 23, 2016, to May 4, 2017. The project team used this streaming API to request tweets that contained the phrases "code of ethics," "codes of ethics," "ethics code," "ethics codes," "professional code," and "professional codes." All data collected was publicly available. No private tweets, direct messages, or other confidential data were collected during the study, as laid out in Twitter's Developer Agreement and Policy (2018). The streaming API allows researchers to keep a connection active for a specified amount of time and collects any public data that contains the terms of interest. The software toolkit STACKS (Social Media Tracker, Analyzer, and Collector Toolkit at Syracuse), developed in 2014 by Syracuse University, allowed the project team to collect a filtered stream of data and to process and store data from Twitter using Amazon Web Services. Tweets were collected using a data extraction template that recorded the publicly available biography of the user, his or her country of origin, and the verbatim text of the tweets posted by the user. In some cases, further demographic information was determined using a combination of online biographies, links to outside websites, and user names. At the time of this study, tweets could contain up to 140 characters, though this limit changed to 280 characters in November of 2017 (Rosen 2017). Tweets could also include links to outside web content, hashtags, and embedded images and videos. Users can also interact with other users by referring to their username in a tweet and by retweeting another user's tweet. The research team sought formal confirmation from their institutional review board (IRB) that no IRB review or approval was needed, as this type of research was not considered human subjects research. Even though all tweets gathered through this study were publicly available, the project team decided to take a precautionary approach and avoid mentioning user names, Twitter handles, and direct quotations from individual Twitter users when publishing this data. This prevents potential privacy breaches or harm that might result from individual users and their tweets being identified. We took this measure because this research study uses tweets in a different context than the users originally intended, and we do not know whether the users would consent to the publication of the study results.1 Two exceptions were made, however. When discussing tweets by media companies (see most influential users), we included detailed information on the company's name and the respective tweets. We also did not anonymize the Twitter account POTUS, which the Trump administration used starting in January of 2017 after President Trump's inauguration, as this is a highly public office and a prominent account.

1  For more information on privacy issues in social media research, see Association of Internet Researchers 2012; Taylor & Pagliari 2018; Samuel et al. 2018; Moreno 2013.
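To make the collection step concrete, the sketch below shows roughly how a filtered stream matching the study's six phrases could have been requested from Twitter's v1.1 streaming endpoint, which was current at the time of the study (and has since been retired). This is a minimal illustration, not the STACKS pipeline the project actually used; the credential strings are placeholders, and the output file name is arbitrary.

```python
import requests  # pip install requests requests_oauthlib
from requests_oauthlib import OAuth1

# Placeholder credentials; real values would come from a Twitter developer account.
AUTH = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET")

# The six search phrases tracked in the study.
PHRASES = ["code of ethics", "codes of ethics", "ethics code",
           "ethics codes", "professional code", "professional codes"]

def stream_tweets(outfile="tweets.jsonl"):
    """Open the v1.1 filtered stream and append each matching tweet,
    one JSON object per line, to `outfile`."""
    resp = requests.post(
        "https://stream.twitter.com/1.1/statuses/filter.json",
        auth=AUTH,
        data={"track": ",".join(PHRASES)},  # comma-separated values are ORed
        stream=True,
    )
    resp.raise_for_status()
    with open(outfile, "a", encoding="utf-8") as f:
        for line in resp.iter_lines():
            if line:  # the stream sends blank keep-alive lines
                f.write(line.decode("utf-8") + "\n")
```

One caveat: the streaming `track` parameter matched the words of a phrase regardless of order, so an archive collected this way still needs an exact-phrase check downstream (as in the counting sketch in Sect. 11.3).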


As this study was completed in 2017, its research protocol did not fall under the European Union's General Data Protection Regulation (GDPR), which took effect in May of 2018. Under this regulation, studies that collect tweets which may contain personal data are now urged to treat all data gathered through this kind of automated harvesting as potentially highly sensitive, or as falling under the GDPR's "special category," at the point of collection. Appropriate protocols need to be in place to address this issue (Information Commissioner's Office 2019).

11.3 Results

11.3.1 Tweets Collected

Compared to the aggregate 500 million tweets per day worldwide, there were relatively few tweets using the phrases "code of ethics," "codes of ethics," "ethics code," "ethics codes," "professional code," and "professional codes" during the ten and a half months of the study (see Fig. 11.1). The phrase "code of ethics" had the highest number of mentions (83,204), followed by "ethics code" (6,140) and "professional code" (1,213). The numbers of tweets collected for the plural versions of the search phrases were comparatively minimal. The total numbers of tweets indicated in Fig. 11.1 include unique tweets and tweets retweeted by other users. The term "unique tweets" refers to tweets posted by the original, unique user. These tweets can be posted once or repeatedly by the user, though they are only counted once in this study unless noted otherwise. For this analysis, the term "unique users" refers to Twitter users who wrote tweets involving one of the search phrases, and "mentioned users" refers to users who had their user names mentioned in one or more tweets involving one of the search phrases, or whose original tweets were retweeted (see Fig. 11.2). Since "code of ethics" is by far the most often mentioned of the searched phrases, in what follows we confine our analysis to "code of ethics." This phrase appeared in a total of 83,204 tweets, of which 58,139 were unique tweets and 25,065 were retweets. A total of 38,839 unique users tweeted this phrase, and a total of 16,928 unique mentioned users were seen in these tweets.

Fig. 11.1 Total number of tweets collected during the study duration

Keyword in tweets | Total number of tweets collected
code of ethics | 83,204
codes of ethics | 952
ethics code | 6,140
ethics codes | 327
professional code | 1,213
professional codes | 201

Phrase | Total tweets | Unique tweets | Unique users | Unique mentioned users
Code of ethics | 83,204 | 58,139 | 38,839 | 16,928
Ethics code | 6,140 | 5,024 | 4,706 | 1,174
Professional code | 1,213 | 1,021 | 1,059 | 385

Fig. 11.2  Tweets collected for phrases being studied
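For readers who want to see how counts like those in Figs. 11.1 and 11.2 can be derived, the sketch below recomputes them for one phrase from an archive of the kind produced by a collection script like the one sketched in Sect. 11.2. The field names (text, user.screen_name, entities.user_mentions, retweeted_status) follow the v1.1 tweet JSON of the period; treating each distinct tweet text posted by an original user as one "unique tweet" is a simplification of the study's definition.

```python
import json

PHRASE = "code of ethics"

def summarize(path="tweets.jsonl"):
    """Recompute Fig. 11.2-style counts for one phrase from an archive of
    newline-delimited tweet JSON."""
    total = 0
    unique_tweets = set()   # distinct tweet texts posted by original users
    users = set()           # accounts that tweeted the phrase
    mentioned = set()       # accounts mentioned in, or retweeted by, those tweets
    with open(path, encoding="utf-8") as f:
        for line in f:
            tweet = json.loads(line)
            text = tweet.get("text", "")
            if PHRASE not in text.lower():
                continue  # drop near-matches let through by the track filter
            total += 1
            users.add(tweet["user"]["screen_name"])
            if "retweeted_status" in tweet:  # a retweet, not an original tweet
                mentioned.add(tweet["retweeted_status"]["user"]["screen_name"])
            else:
                unique_tweets.add(text)
            for m in tweet.get("entities", {}).get("user_mentions", []):
                mentioned.add(m["screen_name"])
    print(f"total={total}  unique tweets={len(unique_tweets)}  "
          f"unique users={len(users)}  mentioned users={len(mentioned)}")
```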

What follows is a closer look at the kinds of users appearing with the search phrase "code of ethics" and at tweet spikes that capture surges of tweets around the search phrase.

11.3.2 User Characterization

In analyzing Twitter users, we distinguish between three types of top users in the context of the use of the phrase "code of ethics" (a sketch of how these rankings can be computed from the collected data appears after Fig. 11.3):
• Most influential users, measured by the number of followers of users who tweeted about "code of ethics";
• Most active users, measured by the number of tweets they posted using the phrase "code of ethics";
• Most mentioned users, measured by the number of times their Twitter handle was mentioned in other users' tweets and retweets containing the phrase "code of ethics."

11.3.2.1 Most Influential Users

By far the most influential users producing tweets with the term "code of ethics" were news outlets such as the British Broadcasting Corporation (BBC, which has three Twitter handles: BBC News, BBC World, and BBC Breaking), the Washington Post, The Times of India, Wired Magazine, the British newspaper The Guardian, and ABS-CBN News, a major news outlet in the Philippines. Not surprisingly, these sources have many followers, and each of the tweets refers directly to an article published by their newsroom. However, during the study, each of them posted only one tweet in which the phrase "code of ethics" appeared.
• The three BBC accounts tweeted and retweeted a news story about FIFA president Gianni Infantino being cleared of allegations of violating FIFA's Code of Ethics (BBC 2016).
• Two tweets from ABS-CBN News and the Times of India discuss news from FIFA, specifically how, on December 6, 2016, former FIFA president Sepp Blatter lost his appeal against a six-year ban for ethics violations (Homewood 2016).


• Wired tweeted a link to an article discussing the need for augmented reality games such as the then newly released Pokemon Go to adopt a code of ethics (Cross 2016).
• The Washington Post had a tweet leading to an article from November 16, 2016, which discusses the lack of an ethics code for Trump's transition team (Byers 2017).
• The Guardian tweeted on August 19, 2016, an article describing a British pro-Jeremy Corbyn grassroots organization's struggle with its interim code of ethics (Syal 2016). (Fig. 11.3)

11.3.2.2 Most Active Users

In order to narrow down the category of users with the most tweets containing the phrase "code of ethics," we used a cutoff of users who had more than 100 tweets captured by this search, which left 11 users in this category. The most active users tweeting about "code of ethics" generally posted only a few unique tweets; these tweets were repeated by the same account several times, in some cases over three hundred times during the study.

Username | Description | Tweet | Followers
BBCBreaking | BBC Breaking News (Media Company) | "Fifa president Gianni Infantino cleared over allegations he breached governing body's code of ethics" | 30.8M
BBCWorld | BBC News World (Media Company) | "RT @BBCBreaking: Fifa president Gianni Infantino cleared over allegations he breached governing body's code of ethics" | 18.8M
washingtonpost | The Washington Post (Media Company) | "Trump gets to decide if his transition team will have a code of ethics" | 9.64M
timesofindia | The Times of India (Media Company) | "RT @TOISportsNews: CAS rejects @SeppBlatter appeal, says ex-FIFA leader 'breached code of ethics'" | 9.57M
WIRED | WIRED (Media Company) | "Augmented reality games like Pokemon Go need a code of ethics—now:" | 8.45M
BBCNews | BBC News, United Kingdom (Media Company) | "RT @BBCBreaking: Fifa president Gianni Infantino cleared over allegations he breached governing body's code of ethics" | 8M
guardian | The Guardian (Media Company) | "Momentum drops pledge to non-violence from code of ethics" | 6.41M
ABSCBNNews | ABS-CBN News (Philippines) (Media Company) | "Football: CAS rejects Blatter appeal, says ex-FIFA leader 'breached code of ethics' I via @AFP" | 5.17M

Fig. 11.3 Most influential users for "Code of Ethics"


11.3.2.2  Most Active Users

In order to narrow down the category of users with the most tweets containing the phrase "code of ethics," we used a cutoff of more than 100 tweets captured by this search, which left 11 users in this category. The most active users tweeting about "code of ethics" generally posted only a few unique tweets; these tweets were repeated by the same account several times, in some cases over three hundred times during the study. Eight of the eleven most active Twitter accounts were managed by private citizens, with the three remaining accounts belonging to a professional ethicist and two media companies. The tweets cover a wide variety of topics, such as a code of ethics in yoga practice, calls for politicians and government officials to follow a code of ethics, blame directed at journalists for breaking their code of ethics, discussions of engineering codes of ethics, and lyrics from a Christian rock band called "Code of Ethics."

Furthermore, there was one Twitter account from a private individual seeking to build support for whistleblowers who have faced retribution from their employers. This account seems to be related to another account, run by someone who identifies himself as the CEO of a business ethics institute, that also tweeted about a whistleblowing case against the same company mentioned by the account managed by the private individual. Three accounts, including the two media outlets, posted links to news about actual codes of ethics, including a delay in enforcement of a code of ethics for human resource offices, a tweet on the development of a code of ethics for pharmacists in Kenya, and a post about a code of ethics for breeding micro pigs.

Overall, most of these most active users had a limited number of followers. One private citizen had slightly over 5000, and the other most active users had between 32 and 1300 followers. While three of the tweets did provide links to a code of ethics or to news articles about codes of ethics, there was little to no discussion of these tweets among followers, and these tweets were not widely shared. The impact of the code of ethics-related tweets by these most active users is thus relatively limited (Fig. 11.4).


• Private citizen (32 followers; 3 unique tweets): codes of ethics practice in yoga; 3 separate tweets, repeated 360, 352, and 11 times
• Private citizen (293 followers; 6 unique tweets): whistleblowing, exposing corruption in an energy company; five separate tweets, repeated 442/27/25/10/10 times
• Professional ethicist, account suspended (128 followers; 5 unique tweets): whistleblowing, exposing corruption in an energy company; individual tweets on the same topic, repeated 297/30/25/9 times
• Private citizen (5284 followers; 17 unique tweets): call for government employees to follow the code of ethics for government service and avoid bribery and conflicts of interest; individual tweets on the same topic, repeated 148/51/32/11 times
• Institution, media company (1395 followers; 1 unique tweet): consequences of a delay in finishing a code of ethics for human resource professionals; repeated 139 times
• Private citizen (349 followers; 6 unique tweets): retweets that blame journalists for breaking their code of ethics; individual tweets on a similar topic, repeated 111/15/7 times
• Private citizen (213 followers; 4 unique tweets): advertising a personal blog post discussing engineering codes of ethics; individual tweets on a similar topic, repeated 59/38/34 times
• Private citizen (367 followers; 1 unique tweet): tweet about a song by the Christian rock band "Code of Ethics"; repeated 122 times
• Private citizen (647 followers; 8 unique tweets): five separate tweets asking why politicians in Canada and the president of the U.S. do not have a code of ethics; individual tweets on the same topic, repeated 32/30/21/16/12 times
• Private citizen (1270 followers; 1 unique tweet): development of a code of ethics for breeding micro pigs; repeated 112 times
• Institution, pharmaceutical news corporation, account suspended (614 followers; 1 unique tweet): development of a code of ethics for pharmacists in Kenya; repeated 100 times

Fig. 11.4  Most active users tweeting about "Code of Ethics"

11.3.2.3  Most Mentioned Users

During the study, 16,928 unique users were mentioned in tweets with the search phrase "code of ethics". Only around 25 unique users were mentioned over 300 times. The majority of the most mentioned users had many followers and were usually journalists, public figures, politicians, news outlets, or individuals who ran their own blogs. In the list of tweets including the most mentioned users, the main topic covered is journalism ethics, and the individuals mentioned in these tweets include media companies (2), professional journalists (4), and a politician (POTUS). The tweets surrounding this topic often centered on calling out a journalist or media company as having violated a journalism code of ethics, such as the Society of Professional Journalists' (SPJ) Code of Ethics or the Radio Television Digital News Association (RTNDA) Code of Ethics. The tweets referred to how journalists should treat one another during interviews, the inaccuracy of news coverage on a television station, the issue of a historically inaccurate movie, a media

company that changed the title of an article, and criticism of media companies' coverage of the Trump presidency (Schmidt 2017). Wired Magazine appears on this list due to an article discussing the need for a code of ethics for augmented reality games (see most influential users). The other non-media institution that appears in this group of most mentioned users is a Brazilian energy company that appears in a series of tweets from one of the most active users in this study; this former employee uses two accounts to write about his experience blowing the whistle. The ninth most-mentioned user in the list is an individual who included a quote from a musical artist whose song lamented the absence of a code of ethics.

Overall, the most mentioned users have a considerable number of followers. Ethical issues in journalism are a prominent topic, with codes of ethics being mentioned or cited as standards for good journalistic behavior. Dominant in the group of most mentioned users are professional journalists, media outlets, and private citizens related to the media, whom others mention in tweets containing the phrase "code of ethics."

11.3.3  Spikes of Tweets Containing "Code of Ethics"

In order to get a better understanding of the discussion of "code of ethics"-related topics on Twitter over time, we tracked the number of tweets containing the phrase over the study period (see Fig. 11.6). In particular, we analyzed the five most significant spikes of tweets containing the search phrase "code of ethics," each of which captures a surge of tweets around the search phrase (see Fig. 11.7; a sketch of one way such spikes can be flagged follows this section's narrative). All five most significant spikes referred to ethical issues in journalism, politics, and media ethics.

The first spike, in June 2016, concerned a tweet by a blogger advertising a post he wrote in which he criticizes journalists for breaking their code of ethics. It focuses on media coverage and advertising in the Philippines. The second spike, in November 2016, concerned two news posts; the first was an article that discussed whether President-elect Trump's transition team would adopt a code of ethics (Rein and Viebeck 2016). The second news article focused on an exchange between a well-known alt-right blogger and a journalist over his coverage of the election of Donald Trump. The tweet accused the journalist of breaking his media company's code of ethics. The third spike, in January 2017, concerned the publication of a series of unverified memos alleging Russian operatives had compromising personal and financial information on then-President-elect Trump (Nolte 2017). The tweet in this spike about developers having a code of ethics originated from an article posted on a site that provides news and consulting services to technology executives.

The fourth spike concerned a newspaper article, the title of which was changed by the editorial staff. The Twitter post cited in the retweeted message was from a

• Private citizen, political blogger (42.1K followers; 4 unique tweets): journalism ethics; top tweet counts 1556/20/17
• Professional journalist (834K followers; 9 unique tweets): journalism ethics; top tweet counts 1224/4
• Professional journalist (221K followers; 5 unique tweets): journalism ethics; top tweet counts 1223/4
• Parody account (219K followers; 2 unique tweets): journalism ethics; top tweet count 1226
• Professional politician, POTUS (18M followers; 71 unique tweets): journalism ethics; top tweet counts 901/8
• Professional journalist (1.33M followers; 12 unique tweets): journalism ethics/government ethics; top tweet counts 900/6
• Institution, energy company (37.4K followers; 4 unique tweets): whistleblowing; top tweet counts 712/40/4
• Professional journalist (823 followers; 3 unique tweets): police, code of ethics; top tweet count 679
• Private citizen, music enthusiast (438K followers; 1 unique tweet): music, absence of a code; top tweet count 580
• Institution, media company, WIRED (8.95M followers; 9 unique tweets): augmented reality/autonomous cars; top tweet counts 409/108
• Institution, media company, CNN (35.4M followers; 180 unique tweets): journalism ethics and a code of ethics for government service; top tweet counts 146/58/56/20/17 and 11

Fig. 11.5  Most mentioned users with "Code of Ethics". (Note: Only unique users mentioned in tweets and/or retweeted more than 10 times are listed here)

member of a conservative website, which accused the newspaper of breaking its code of ethics because of the change in the article's title (Schmidt 2017; Evon 2017). The fifth spike, in January 2017, concerned the portrayal of a historical figure in a film; it accused the media company involved of violating its code of ethics (Vyas 2017). Overall, the five most significant spikes concerned media ethics in the context of political disputes and tended to accuse others of breaking their code of ethics (Fig. 11.7).
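How a "spike" is identified is a methodological choice. The chapter does not specify its exact threshold, so the sketch below is only an illustration: it flags days whose tweet volume exceeds the period mean by more than k standard deviations, given a hypothetical mapping of dates to daily tweet counts.

    from statistics import mean, stdev

    def find_spikes(daily_counts, k=3.0):
        """Return dates whose tweet count exceeds the mean daily count
        plus k standard deviations over the observation period."""
        counts = list(daily_counts.values())
        threshold = mean(counts) + k * stdev(counts)
        return sorted(date for date, c in daily_counts.items() if c > threshold)

    # Made-up daily volumes; with so few data points a lenient k is needed.
    daily = {"2016-06-24": 180, "2016-06-25": 1900, "2016-06-26": 210,
             "2016-11-16": 950, "2017-01-11": 640, "2017-01-12": 200}
    print(find_spikes(daily, k=1.0))  # ['2016-06-25']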


Fig. 11.6  Code of ethics spikes timeline

Spike 1 (6/25/2016): "R.T.: Journalists break the code of ethics" (1664); "Journalists break the code of ethics" (261)
Spike 2 (11/16/2016): "Will Trump's transition team have a code of ethics?" (315); "Did journalist break media company's code of ethics while covering Trump campaign during election?" (289)
Spike 3 (1/11/2017): "Journalists violate every code of ethics in journalism." (359); "Did Buzzfeed break their code of ethics when releasing potentially inflammatory documents involving Trump and Russia connections?" (76); "Should software engineers have a code of ethics?" (51)
Spike 4 (3/8/2017): "New York Times apparently has no code of ethics." (640)
Spike 5 (1/28/2017): "Television company's code of ethics is worthless." (633)

Fig. 11.7  Code of ethics spikes: dates, top tweets (paraphrased), and counts of top tweets

11.4  Discussion

In our analysis, "code of ethics" was by far the phrase most often used in ethics codes-related tweets, with "ethics code", "professional code", and the plural forms of these phrases having been used considerably less often. In tweets that used the


phrase "code of ethics", the most relevant topics discussed are ethical issues in (political) journalism, politics, media, and sports. We found minimal discussion on Twitter of ethics codes in science, medicine, entertainment, technology, and engineering. This could be a product of a relative lack of interest in ethics by users of this platform, or profession-specific discussions could be confined to different communication modes, such as listservs hosted by professional associations and academic societies. We hypothesize that if this study were repeated in 2021, we would find far greater discussion around ethics codes, especially in the areas of big data and artificial intelligence.

A broad spectrum of users tweeted about "code of ethics." In particular, journalists, media companies, and users with political interests tweeted about the topic "code of ethics" or were mentioned in tweets. According to our study, professional institutions and professional associations in science and technology were not very active tweeters. They did not actively use Twitter to disseminate information on their codes of ethics or to raise awareness of new developments regarding their ethics codes.

Overall, one of the most relevant results of the analysis is the considerable discrepancy between our initial working hypothesis and the actual findings. We had assumed that we would find several ongoing, illuminating conversations between professionals from all kinds of fields that would give us some hints about the current role of ethics codes in an evolving, technology-reliant society. Instead, we found a much more diverse conglomerate of stakeholders partly using ethics codes-related tweets to promulgate their views strategically. This is partially due to the approach our research took – focusing on the most influential, most active, and most mentioned users and the most significant tweet spikes. Especially in the case of the most influential users, these tweets tended to be about articles from relatively well-known publications such as BBC News, the Washington Post, and the Times of India.

We cannot exclude that there may have been more detailed conversations about professional codes of ethics happening on the Twitter platform among the over 83,000 tweets with the search phrase "code of ethics" collected during the search period. Given the difficulty of tracking every tweet, our approach did not bring these conversations to the surface. There is undoubtedly a significant amount of ethics code-related tweeting that we were unable to cover and analyze; our analysis and our claims are only about the relative significance and role among all the tweets collected. Also, it may be that users are discussing ethical issues in their fields without directly using one of our search terms.

When looking at the users, we found that while the most active users (users who tweeted the most) showed us an intriguing mixture of topics being attached to the phrase "code of ethics," these users tended not to have a very high following, and minimal discussion was generated by these tweets. The list of most influential users was entirely made up of news outlets who, not surprisingly, had the largest number of followers. The tweets emanating from these accounts always included the term "code of ethics" as part of the title of an article to which the tweet linked.
These articles either mentioned a scandal (such as the breaking of the FIFA Code of Ethics) or, more interestingly for this paper, discussed an institution's (such as the Trump


Administration and a U.K. political organization) interactions with a code of ethics (or the lack of one). In the case of the Wired article, the article provided a more in-depth look into the role a code of ethics might play in shaping a profession and its work (discussed more below). There was minimal discussion about these tweets on the Twitter platform itself, however. As shown in the tweet spikes, some posts were widely shared, but that was the end of these posts' influence.

In the study results, tweets related to ethical issues in journalism and politics dominated the discussion. This may be due to the intense political debate in the United States around the presidential election and the inauguration of Donald J. Trump as the 45th president of the United States during the study period. At that time, politics and issues related to journalism, media ethics, and adequate coverage of political developments were discussed vigorously in both traditional media and on social media.

Overall, our study points towards a broader social role of ethics codes on Twitter that goes beyond the way ethics codes are traditionally seen in the academic literature. In general, the academic literature characterizes ethics codes and guidelines as a method for instilling standards in professions, science, technology, and business. They help establish special standards of conduct in situations where common sense is not adequate, help educate new members of a profession or organization, and provide a framework for settling disputes, even among members with considerable experience (Davis 2015). At their best, codes of ethics help institutions and organizations address emerging issues, regulate practice-specific contexts, provide support, and serve as helpful resources for profession-specific teaching (Davis 1991). Externally, codes of ethics help the public, or individuals outside the group, understand what they may justifiably expect from members of that profession or institution, and they provide a method for evaluating the ethical performance of individual members of a professional group (Davis 2015; Frankel 1989).

In our study, a handful of the tweets on ethics codes exemplified these more academic approaches to the role ethics codes play. Tweets that fell into these categories saw ethics codes as a standard for good professional behavior (in journalism, politics, etc.) and were sometimes used in an attempt to educate both fellow professionals and the public on the proper standards of conduct to follow in a given situation or context. One example of this use is the aforementioned Wired article from August 2016, which discussed the need for designers working on augmented reality games like Pokemon Go to adopt a code of ethics that seeks to protect gamers and the public alike. Our study also captured a series of tweets from a professional journalist, who happens to be a member of the Society of Professional Journalists' ethics committee, asking his fellow professionals to remember the SPJ Code of Ethics when covering significant news events. Though Wired has a relatively large Twitter following of around eight million users, that particular news article was only retweeted a total of 409 times, and there appeared to be no discussion on Twitter about the article beyond tweeting the link. Of the 14 tweets the SPJ member/professional journalist posted mentioning the SPJ Code of Ethics, only one


tweet was retweeted 217 times; all the others were shared between 1 and 49 times. If discussions about the use of professional codes as described in the academic literature are occurring, they are happening either beyond the scope of this study or via a different venue than Twitter.

The other use of mentioning ethics codes in tweets was to push one's agenda: to promote one's (political) views or to lend credibility and persuasive power to them. The majority of tweets and retweets captured served these purposes. A tweet on journalists breaking the code of ethics was the top retweeted tweet on June 25, 2016; it originated from a blogger who writes about politics in the Philippines. The other tweet spikes that do not specifically relate to a news article include the following: a question about the historical accuracy of a program released by New Delhi Television Limited (Vyas 2017); the tweet "N.Y. Times lacks a code of ethics," posted by a member of the conservative website Judicial Watch, which accused the New York Times of breaking its code of ethics because of the change in an article's title (Evon 2017); and allegations against Buzzfeed stating "Buzzfeed violates every code of ethics in journalism" (paraphrased) (Byers 2017). A controversy in media ethics sparked each of these tweets. However, these tweets' focus was not so much on the actual ethical violation (issues of accuracy, truth, and the publication of potentially unverified documents, respectively); the posts and retweets on Twitter focused far more on validating the authors' political opinions. These tweets are marked by the use of the phrase "code of ethics" as a kind of touchstone of "what is right," without actually delving into the principles contained within.

The Society of Professional Journalists (SPJ) most recently updated its code of ethics in 2014, and the Radio Television Digital News Association (RTNDA) most recently updated its code in June of 2015 (RTNDA 2015). The 2014 revision of the SPJ code substantially revised its section entitled "Seek Truth and Report It," and also edited its 12th provision to read, "[Journalists should]…Recognize a special obligation to serve as watchdogs over public affairs and government" (SPJ 2014; Slattery 2016). The RTNDA also expanded its section on truth in journalism, retitling its 2000 section "Truth" to "Truth and accuracy above all" and adding a provision that addresses the influence of social media on reporting.2 In both cases, these codes clearly seek to stress the press's critical role in politics and its overwhelming commitment to truth, accuracy, and transparency in reporting.

The question remains: how can these codes – or any professional code – be used to help build and reinforce public trust in that profession? In 2017, Kathleen Bartzen Culver published an article ("Disengaged Ethics") reporting her interviews with eighteen participants who assisted in updating the SPJ and RTNDA codes, as well as participants in the Online News Association's "Build Your Own Ethics Code" project, launched in 2014. In these interviews,

2  "'Trending,' 'going viral' or 'exploding' on social media may increase urgency, but these phenomena only heighten the need for strict standards of accuracy." (RTNDA 2015)


participants talked about the perceived stakeholders of the codes and the issue of public participation in journalism (Culver 2017, 487–488). While eleven of the eighteen interviewees pointed to the public as one of the main stakeholders in the codes, participants could identify only one way their organizations tried to involve the public in the revision or development of their codes: posting the suggested code revisions on their public websites. Culver reflects on this lack of public engagement:

In an era of increasing interaction between journalists and the public and participation in journalism—through such things as serving as sources, reader comments, and crowdsourcing—ethics discussions offer another way to involve the public, but the opportunity was missed here. Code developers failed to understand the impact of a networked age on their work or the opportunity it presented to open the conversation on media ethics to the very people journalism is supposed to serve. In this, they remained insulated and isolated from citizens and thus undercut the legitimacy of the codes they produced. (Culver 2017, 490)

11.5  Conclusion

The question arises: why do professionals and professional associations not use Twitter to share more often about ethics code-related topics or to engage more with the public on professional ethics issues? Among the possible reasons are the following: (1) The professionals in question may not use Twitter at all. (2) The professionals in question may find Twitter's character limit on tweets too confining and prefer a more flexible medium. (3) They may not want to discuss their ethics code publicly. Referring to a profession's ethics code may be something that is done in internal conversations via in-person meetings, online forums, and other social outlets hosted by a professional association or other institution. There also may be different, more powerful pathways for professionals and professional associations to distribute information on ethics codes and ethics code-related developments, such as journals, magazines, or emails. Furthermore, ethics codes may not play a considerable role in everyday communication, so that what we see on Twitter may reflect the overall situation.

However, as ethics codes often are not well known, social media like Twitter could be a chance to raise public awareness. Many professionals and professional associations miss the opportunity to raise public awareness of their ethics codes on Twitter and the opportunity to have a public discussion that might lead to their ethics codes having a greater role in society.

Beyond the field of media ethics, how the phrase "code of ethics" is represented on Twitter shows that Twitter users are interested in specific topics, that they care about ethics in certain fields, and that they find referring to ethics codes as standards helpful, even if a more nuanced understanding of ethics codes might be lacking. Ethics codes are considered to shape behavior in these fields. While referring to ethics codes in tweets may make a very theoretical topic such as professional ethics more accessible to the public, it seems that many users do not understand what an ethics code is or what it refers to.


This study shows that, in tweets containing "code of ethics," Twitter users care about ethical practice in politics, journalism, media, sports, and similar fields. These are fields and topics of public interest that people become aware of through the media. In contrast, ethical issues in other professional fields may not be as well known publicly. Social media may not be the best tool for professionals to engage members of the public about the importance of codes of ethics in shaping a profession and to help build trust in their work. Still, perhaps it is one we ignore at our peril.

Acknowledgments  We would like to thank Otto Brown for his valuable support in analyzing the data in the summer of 2018 during his tenure as an intern at the Center for the Study of Ethics in the Professions at the Illinois Institute of Technology. This study was financially supported by the John D. and Catherine T. MacArthur Foundation, Grant No. 15-109237-000-DIS.

References

Association of Internet Researchers, Ethics Working Committee. 2012. "Ethical Decision-Making and Internet Research (Version 2.0)." https://aoir.org/reports/ethics2.pdf

British Broadcasting Corporation. 2016. "Gianni Infantino: Fifa president cleared in ethics probe." BBC Sport. August 5, 2016. https://www.bbc.com/sport/football/36834094 Accessed 6 January 2020.

Byers, Dylan. 2017. "BuzzFeed's publication of Trump memos draws controversy." CNN Money. January 11, 2017. https://money.cnn.com/2017/01/10/media/buzzfeed-trump-report/ Accessed 23 January 2020.

Collins, Kimberley, David Shiffman, and Jenny Rock. 2016. How are scientists using social media in the workplace? PLoS One. https://doi.org/10.1371/journal.pone.0162680.

Cross, Katherine. 2016. "Augmented reality games like Pokémon Go need a code of ethics." Wired. August 8. https://www.wired.com/2016/08/ethics-ar-pokemon-go/ Accessed 18 January 2020.

Culver, Kathleen Bartzen. 2017. Disengaged ethics. Journalism Practice 11 (4): 477–492. https://doi.org/10.1080/17512786.2015.1121788.

Davis, Michael. 1991. Thinking like an engineer: The place of a code of ethics in the practice of a profession. Philosophy and Public Affairs 20 (2): 150–167.

———. 1997. The moral authority of a professional code. NOMOS 29: 302–337.

———. 2015. "Codes of Ethics." In J. Britt Holbrook and Carl Mitcham (eds.), Ethics, Science, Technology and Engineering, 2nd edition. Farmington Hills, MI: Gale, Cengage Learning, pp. 380–383.

Evon, Dan. 2017. "Did the New York Times contradict their 20 January 2017 report about wiretapping?" 8 March 2017. Snopes.com. https://www.snopes.com/fact-check/nytimes-wiretap-articles/ Accessed 22 February 2020.

Frankel, Mark S. 1989. Professional codes: Why, how, and what impact? Journal of Business Ethics 8: 109–115.

Homewood, Brian. 2016. "Former FIFA President Blatter loses appeal against ban." Reuters. December 6, 2016. https://www.reuters.com/article/us-soccer-fifa-blatter/former-fifa-president-blatter-loses-appeal-against-ban-idUSKBN13U1O9 Accessed 3 March 2020.

Information Commissioner's Office. 2019. "Lawful basis for processing." Guide to the General Data Protection Regulation (GDPR). https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/lawful-basis-for-processing/ Accessed 30 March 2021.

Mele, V., and D.H. Schepers. 2013. E Pluribus Unum? Legitimacy issues and multi-stakeholder codes of conduct. Journal of Business Ethics 118: 561–576.


Mohammadi, Ehsan, Mike Thelwall, Mary Kwasny, and Kristi L. Holmes. 2018. Academic information on Twitter: A user survey. PLoS One. https://doi.org/10.1371/journal.pone.0197265. Accessed 30 March 2021.

Moreno, Megan A., Natalie Goniu, Peter S. Moreno, and Douglas Diekema. 2013. Ethics of social media research: Common concerns and practical considerations. Cyberpsychology, Behavior and Social Networking 00: 1–6.

Nolte, John. 2017. "Hours before Trump's Press Conference CNN and BuzzFeed hit 'Fake News' Bottom." DailyWire.com. https://www.dailywire.com/news/hours-trumps-press-conference-cnn-and-buzzfeed-hit-john-nolte. Accessed 7 January 2020.

Perrin, Andrew, and Monica Anderson. 2019. "Share of U.S. adults using social media, including Facebook, is mostly unchanged since 2018." Pew Research Center. April 10. https://www.pewresearch.org/fact-tank/2019/04/10/share-of-u-s-adults-using-social-media-including-facebook-is-mostly-unchanged-since-2018/. Accessed 14 January 2020.

Radio Television Digital News Association. 2000. "RTNDA code of ethics." Ethics Codes Collection. http://ethicscodescollection.org/detail/5c239d3c-b95e-4c81-ac2a-49873f388a8a Accessed 5 December 2019.

———. 2015. "RTNDA code of ethics." https://www.rtdna.org/content/rtdna_code_of_ethics. Accessed 5 December 2019.

Rein, Lisa, and Elise Viebeck. 2016. "Trump gets to decide if his transition team will have a code of ethics." Washington Post. November 16, 2016. https://www.washingtonpost.com/news/powerpost/wp/2016/11/16/trump-gets-to-decide-the-ethics-rules-for-his-own-transition/ Accessed 3 February 2020.

Rosen, Aliza. 2017. "Tweeting Made Easier." November 7. https://blog.twitter.com/en_us/topics/product/2017/tweetingmadeeasier.html Accessed 2 December 2019.

Samuel, G., W. Ahmed, H. Kara, C. Jessop, S. Quinton, and S. Sanger. 2018. Is it time to re-evaluate the ethics governance of social media research? Journal of Empirical Research on Human Research Ethics 13 (4): 452–454.

Schmidt, Michael S., Matthew Rosenberg, Adam Goldman, and Matt Apuzzo. 2017. "Intercepted Russian communications part of an inquiry into Trump associates." New York Times. January 19, 2017. https://www.nytimes.com/2017/01/19/us/politics/trump-russia-associates-investigation.html. Accessed 5 January 2020.

Slattery, Karen L. 2016. The moral meaning of recent revisions to the SPJ code of ethics. Journal of Media Ethics 31 (1): 2–17. https://doi.org/10.1080/23736992.2015.1116393.

Society of Professional Journalists. 2014. "SPJ code of ethics." https://www.spj.org/ethicscode.asp Accessed 6 January 2020.

Syal, Rajeev. 2016. "Momentum drops pledge to nonviolence from code of ethics." Guardian. August 19. https://www.theguardian.com/politics/2016/aug/19/momentum-drops-pledge-to-non-violence-from-code-of-ethics Accessed 12 January 2020.

Taylor, Joanna, and Claudia Pagliari. 2018. Mining social media data: How are research sponsors and researchers addressing the ethical challenges? Research Ethics 14 (2): 1–39.

Twitter Inc. 2018. "Developer Agreement and Policy." https://developer.twitter.com/en/developer-terms/agreement-and-policy.html. Accessed 2 January 2020.

Vyas, Bhanu Priya. 2017. "Padmavati does not distort history, clarified Deepika Padukone after attack on Bhansali." 28 January 2017. NDTV India News.
https://www.ndtv.com/india-news/padmawati-does-not-distort-history-clarifies-deepika-padukone-after-attack-on-bhansali-1653620. Accessed 12 January 2020.

Kelly Laas  Librarian and Ethics Instructor, Center for the Study of Ethics in the Professions, Illinois Institute of Technology, USA; [email protected]. Her research interests include the history and use of codes of ethics in professional fields, ethics education in STEM, research ethics, and integrating ethics into technical curricula.


Elisabeth Hildt Professor of Philosophy and Director, Center for the Study of Ethics in the Professions, Illinois Institute of Technology, USA; [email protected]. Her research focus is on bioethics, ethics of technology, research ethics and Science and Technology Studies. Research interests include research ethics, philosophical and ethical aspects of neuroscience, and artificial intelligence.  

Ying Wu Data Scientist at Zenreach, San Francisco; [email protected]. Ying graduated from Illinois Tech with a Master’s degree in Data Science. Her interests include data science and business intelligence, and leveraging big data and analytics to drive better decisions.  

Chapter 12

The Technology's Fine; It's the Code of Professional Ethics That Needs Changing

Dennis Cooley

Abstract  Codes of professional ethics can pose problems for the professions and professionals they regulate because codes are inherently imbued with their authors' principles and values – authors who generally, and not coincidentally, are the profession's leaders. To be and do what one should in the profession, therefore, is to become a version of the profession's leaders. A problem, of course, is that the leaders might be ill-equipped to evaluate the morality of their profession's emerging technologies and innovations. Altering the code's framework to incorporate additional or different relevant moral or ethical features, especially those emerging from other cultures and stakeholders, can be a challenge. Given that the authors' values and principles are core parts of their self-identity, to change the code's values and principles is to question the authors' self-identity and self-worth, as well as the self-identity and self-worth of the professionals who have embedded themselves in the field. However, practical revision, based on moral psychology, toward simpler, more automatic codes must occur for technology to progress and for professionals and their actions, policies, and technology to become more ethical.

Keywords  Moral behavior · Moral psychology · Practical morality · Pragmatism

Frames of reference are inescapable  – researchers inevitably carry with them the biases created by their own past experiences and theories. The best they can do is to be as explicit as possible about what those are. (Dent and Goldberg 1999, 32)

D. Cooley (*) North Dakota State University, Fargo, ND, USA e-mail: [email protected] © Springer Nature Switzerland AG 2022 K. Laas et al. (eds.), Codes of Ethics and Ethical Guidelines, The International Library of Ethics, Law and Technology 23, https://doi.org/10.1007/978-3-030-86201-5_12


12.1  Introduction1

A code of professional ethics is an insight into its creators' moral psychology and personalities. After all, the code's values and principles, as well as how they are to be interpreted, originated from somewhere. They did not appear upon found tablets; they were created through human intentionality and action. Thus, a code partially indicates how its creators understand what members of their profession should be and how they should act. In fact, codes of professional ethics are their authors' personal or shared set of beliefs, values, principles, and ideologies successfully generalized to apply to everyone in their profession. If one understands the code, therefore, one can gain an understanding of who the creators are or were as people and identify their ethics and morality,2 and vice versa.

Since codes of professional ethics comprise, in part, core values and self-identity, it is difficult to change a code as long as its keepers are also its authors. If the authors and keepers change, however, then there could be radical revisions, reversals, and even starting anew. It depends on what the incoming authors' personal or shared set of beliefs, values, principles, and ideologies are, and on the incentives for change.

Effects on technological innovation also reflect the code creators' personal identity. The technology they find permissible is made allowable, ipso facto, by the code they created with their values and principles. That which they personally3 deem impermissible is classified as such by the code. Of course, these values and principles can be shared by others, especially when a code of professional ethics is written in consensus with like-minded, social human animals whose cultural values are shaped by their social, educational, institutional, professional, and other socialization. When the code is written by one person, the autobiography is clear. When it is a group construct, it is still a psychological history of what the group decided were the important values and principles at that time.

A reasonable person can legitimately wonder whether moral codes work as they are supposed to do. In 79 empirical studies on the efficacy of business codes of ethics, the results were mixed: the researchers found that 35% of these codes were effective in positively influencing moral behavior, 33% had no significant relationship, 16% had a weak relationship, and the remaining 14% had mixed results in influencing moral behavior (Kaptein and Schwartz 2007, 113). Given codes' importance and impact, we should ask whether 35% effectiveness is sufficient to justify the current approach to codes of professional ethics.

1  The author thanks Michael Davis and Kelly Laas for their helpful comments.
2  "Ethics" and "morality" are used interchangeably in this chapter, although they can be distinguished as the study of morality (ethics) and morality itself, as well as the study of what people should be – ethics – and how they should act – morality.
3  Moral psychology studies how we make moral decisions and develop moral character. An individual's personal values and principles are significantly but not entirely the result of genetics, environment, and socialization.


Why are these codes not doing the job they were intended to do? I contend that they can be too permissive, restrictive, ambiguous, vague, inconsistent with each other, or based on what is rational rather than what is reasonable.4,5 There are several reasons why this is so, but the focus here is on each code's being inherently imbued with its authors' view of professional morality, which can make the code less useful than it should be. By simplifying and naturalizing codes of professional ethics, based in part on what moral psychology tells us, they can be woven more seamlessly into everyday professional life and interactions. By using the "channels already dug" by genetics and socialization for morality to exist and function, the code will likely be more effective in achieving individual, group, and humanity's shared moral goals as social animals with moral agency. Each code is then refined to fit its profession. This approach to professional ethics is more practical than assuming that moral codes need to catch up to the technology.

12.2  Sexbots

An examination of problematic innovations like sexbots will illustrate how codes of professional ethics can be made flexible and practical enough to deal with rapidly developing technology. The codes should help individuals figure out what they should do when an issue arises – without having to wait for revisions to work their way through the professional society's amendment process.

Although morally or epistemically inconceivable6 for many, sexbot technology7 raises interesting questions unique to the product itself and for technology in general. As these products become more intelligent, responsive, and interactive in serving their owners' desires, for instance, what moral codes should be programmed into them (Bosker 2014)? There are other ethical challenges to human-robot interactions that map more or less onto sexbots, including but not limited to:

1. Therapeutic robots used with vulnerable populations: Therapy recipients often develop strong psychologically and emotionally important bonds with the robot, which makes the patient more emotionally vulnerable to the bond being severed.

2. Lack of diversity in robot morphology and behavior: The robot's gender, race, and ethnicity may unintentionally be the result of illicit stereotypes, as in the case of over-feminized platforms (Riek and Howard 2014, 2–4).

4  The rational privileges reason over emotion in decision making, acting, and so on, whereas the reasonable makes emotion at least equivalent to reason. One may be non-rational yet still be reasonable, as happens when someone acts primarily out of care for another.
5  Although there will probably be other factors as well.
6  To be morally inconceivable is to be so outside one's ability to think it is good or right, even on a theoretical basis, whereas something is epistemically inconceivable if it is impossible to imagine.
7  Sex dolls and sexbots have the same physical appearance, but the robots are programmed to mimic communications and other human interactions.


Even if sexbots are merely device-objects, the owner pays a considerable amount of money for them to appear and function as he desires.8 First, for a human-robot sexual act to happen, the human being has to desire the robot. That is, the technology has to work for the owner's intended use, which is to sexually excite or to create an emotional relationship. For an emotional relationship, the sexbot must react in a way that lets the person involved believe that the robot has some sort of reciprocal care feelings. Based on sophisticated bodily and oral AI programs, sexbots convincingly mimic reciprocal interaction, which convinces at least some of their owners that the devices value them (Levy 2007). Deborah Blizard (2017) claims that "humans may truly love their erotic dolls in a meaningful way" (114), which she argues is morally permissible, if initially unsettling. Hence, although making robots as unhuman-like as possible may solve therapeutic robots' ethical issues, it defeats at least one of sexbots' primary purposes and would probably lower therapeutic robots' effectiveness (Boltuc 2017; Elder 2017; Meacham and Studley 2017).

Although there are rational, nuanced arguments for and against such technology (Levy 2007; Cheok et al. 2017; Danaher 2017), it could very well be that computer scientists', engineers', and laypersons' views on the subject are determined far more by their automatic, emotive thinking process than by their more rational, reflective one (Haidt 2001; Greene 2013). The latter process is more akin to following a code of professional ethics than the former, which uses personal emotional values and principles.

Consider the Yuck Argument that some scholars, such as Francis Fukuyama, have employed against human/non-human chimera technology (Fukuyama 2002). Very briefly, this argument takes an observer's emotional reaction as adequate evidence that something is wrong or bad. If the technology makes the person disgusted, then that reaction is sufficient proof that some fundamental, natural value or core moral principle was violated. Emotional acceptance or repulsion, however, is an inadequate justification for a moral claim. Jonathan Haidt's incest example shows that even when all the objective, rational arguments against incest are removed, there is still disgust from the story, which then goes in search of a rational explanation to justify it (Haidt 2012). The negative reaction more likely tells us what our socialization is and what we consider to be normal than anything about morality, although it also may be based on olfactory reactions to closely related species members (Potts et al. 1994). No objective values or principles are in danger. Therefore, a mere positive or negative reaction does not constitute adequate evidence for the action's or thing's morality. Moral behavior and ethical being are the result of emotions appropriately modulated by reason, not unbridled emotional responses to stimuli.

Sexbots will trigger the Yuck Argument's repulsion for some people.9 Sex is often thought of as having an element of moral impurity or corruption: even areas of the body that are merely the result of evolutionary adaptation are considered to
There will be individuals who find advantages to the technology, such as prevention of disease transmission, availability of sex, and no psychological impact of the device, whilst still overwhelmingly eschewing the underage sex implication (Scheutz and Arnold 2017; Behrendt 2017). 8 9

12  The Technology’s Fine; It’s the Code of Professional Ethics That Needs Changing

215

be indecent. Of course, this reaction is mostly, if not entirely, a result of learned behavior. That being said, we should realize that the emotional responses tied to automatic reasoning processes could very well influence how codes of professional ethics are interpreted. The current codes as they are written are capable of being used to rationalize an existing view rather than guiding the person to a reasonable decision, as in the sexbot case. That is, when confronted with the technology, a professional might personally interpret a poorly designed code’s vagueness, ambiguity, legalism, conflicts, etc., to justify her existing emotional, intuitive, automatic decision, rather than using the code to come to a reasonable decision. Even if an individual employs a more reflective level of moral thought, the result is still heavily influenced by his personal morality, especially for technology that does not clearly fit the code’s current guidance. The Institute of Electrical and Electronics Engineers’ (IEEE) Code of Conduct (2018), for instance, has general rules not to bribe, discriminate illicitly, and avoid even the appearance of a conflict of interest but says nothing about creating sexual robots. Citing Provisions 1 and 5, public welfare and helping others understand technology, respectively, does not solve the issue. Both are so general that they can be made to support or attack the permissibility of engineering sexbots and other controversial technology. ACM’s (2018) Code of Ethics and Professional Conduct’s seven general ethical principles, nine professional responsibilities, and seven professional leadership principles, include ensuring the possibly idealistic, “the public good is the central concern during all professional computing work” (3.1). Although the ACM (2018) Preamble states the “Code is not an algorithm for solving ethical problems; rather it serves as a basis for ethical decision-making,” it would be difficult, at best, for a professional to find it helpful enough in making an ethical decision whether to create a sexbot. The code neither clearly bans nor encourages the development of such technology. What would be in the public good in this situation?10 Someone seeking guidance would, therefore, have to resort to his personal values and principles to come to a decision (House and Seeman 2010; Giorgini et al. 2015). Hence, it is vital to incorporate moral psychology into a professional code of ethics’ design.

12.3  C  odes of Professional Ethics’ Development and Challenges Moral and ethical codes are essential in every area of social life, including that of professionals. They reflect the values and processes each society uses to govern its individual citizens’, groups’, and institutions’ existence and interactions, partly to

 Similar issues arise for other codes such as Institute for Certification of Computing Professionals (2021), The Institute of Electrical and Electronics Engineers (2018), and the National Academy of Engineering (2006).

10

216

D. Cooley

keep the society and social interactions adequately functioning.11 All codes of professional ethics act in a similar manner. Based on what the profession’s unique features are, all professional codes have relativistic elements. For instance, one profession might require a fiduciary focus on customers/clients, which in turn, selects role-relative values, whilst another concentrates on an animal researcher’s primary concerns, including treating animals morally. Professional organizations’ codes are intended by their creators as specialized social-professional moral codes aimed at making the profession and its members into better professionals behaving morally. Codes of professional ethics are as designed as any other artifact  – including technology. Moreover, all designed objects are what they are because they were created that way, including their composition and relationships, as well as what is not included. Innovations are imbued with their creators’ moral values and principles: “the very first decision we must make in designing…is ethically loaded” (Robison 2017, 58). A simple object, such as a stovetop with four burners and four control nobs, requires a first choice, such as how to arrange the burners or nobs on their respective surfaces. Once decided, some potentialities are eliminated, while remaining possibilities can have their probabilities affected by becoming more likely to be included in the design or less likely. Throughout the creative process, the designer’s values, beliefs, principles, and ideologies are tools used in determining what should be done (Robison 2017, 83–89). At the very least, the designer tries to design the stovetop so that it matches how she – or they, if it is a team – believe consumers will or ought to want it. There are many challenges that can make codes of ethics less effective at motivating moral behavior by more ethical professionals. Among them is complacency based on having a moral code, and then declaring its existence sufficient for moral demands, acting as if morality is a separate activity and existence from professional life and activities, and working up to code’s rule but not its spirit (Boddington 2017; Goldman 1980). The latter is especially worrisome because, for novel technology and situations, the individual might follow the code’s letter and yet not act appropriately. That is, if the code has relevant features to begin with. For each deficiency encountered, the professional code of ethics and morality’s functions are stymied. Consider an example with both moral and epistemic inconceivabilities. The developed world has enormous technology privilege, as shown in part by the unequal distribution of technologies’ benefits, e.g., energy consumption and life expectancy differences (Jasanoff 2016). In comparison to tweaking existing developed world technology, e.g., smartphone upgrades, there is relatively little interest in technology to alleviate developing world issues, such as increasing access to clean water, energy for food preparation, and practical information.12 The developing world’s three billion people living on less than $2/day (Polak and Warwick 2013) are unprofitable; therefore, they are rarely considered as a viable ethical

 Since the focus here is on group and individual as group member actions, I will set aside how moral codes function when fewer than two people are involved. 12  Some technologies deal with these problems, but not to the degree required. 11

12  The Technology’s Fine; It’s the Code of Professional Ethics That Needs Changing

217

option in a capitalistic market if they are considered at all. However, this myopia is a mistake recognized by parts of the professional field, for example, the National Academy of Engineering. In their 2006 report, Engineering and Developing World, the authors state: Considering the problems facing our planet today and the problems expected to arise in the first half of the twenty-first century, the engineering profession must revisit its mindset and adopt a new mission statement – to contribute to the building of a more sustainable, stable, and equitable world.

What the NAE calls attitude, others might well-label as ideology, which is a person’s or group’s fundamental, normative set of values, principles, and beliefs that affect how they think about and interact with the world. Expanding what matters in considering a project to include the soft issues will result, at times, in significant changes in what innovations happen and how the technology is developed. Other professional codes do not require professionals to create technologies that satisfy immediate needs over innovation that merely improve already privileged lives (American Physical Society 2015; ACM 2018). For example, the IEEE’s (2017) 10 rules mention safety, health, public welfare, the requirements to treat people fairly and not to discriminate, but are silent on whether basic needs take priority over wants. This could be considered as a case of moral inconceivability. Alternatively, it might be epistemic inconceivability  – too few think about areas outside the profession’s potential customer base, a base required to be able to compete profitably. There are problems with what the codes contain and how they are written. Many codes develop over time. Not in a systematic way, but rather as the need for addition, deletion, or alteration is perceived by those in charge of them. As novel, unforeseen situations arise, there is a tendency to add process rules on what professionals should do for future morally similar occurrences. If something goes wrong, then great pains are taken to revise the code so that it will not happen again. 1971’s American Society of Mechanical Engineers [ASME] v. Hydrolevel Corp., for example, caused ASME to add a conflict of interest policy.13 The differences between codes for the same profession provide evidence for this patchwork approach and an insight into their creators’ values and principles. Although general morality is universal, specific moral codes are different; e.g., some are far more complex than others, yet the subject matter is normally the same. The problem, of course, is which competing code in the same profession to follow. In addition, regardless of creation or modification dates, tiny lawyers lurk in the writer and user of each professional code. Codes need to be clear, concise, precise, and plausible, but at the same time, professionals know that some scofflaws manipulate rules to justify their actions. Every possible exception scenario is imagined, and then rules are written to address each.14 The Institute for Certification of  I thank Michael Davis for providing this example.  It might be argued that the moral code is backed up by a very involved set of best practices that have been developed from experience.

13 14

218

D. Cooley

Computing Professionals (ICCP), for example, has two full pages of rules with four sections broken into eight to nine subsections each. One imperative is that a professional, “…must not engage in any conduct or commit any act which is a discredit to the reputation or integrity of the information processing profession” (ICCP). But what does that rule mean? Does it allow programming sexbots, even if they resemble children? What of justified whistleblowing, even though it will damage reputation and integrity? What if the professional act is wrong but does not damage the information processing profession, perhaps because the person is not caught? Imagining what could be is necessary for morality’s existence, but it can also be a practical drawback when addressing each possibility becomes part of the code. Although well-intentioned, adding more and more detail to professional codes make them unwieldy for regular use (Schwartz 2002, 32). The rules tend to be legalistically complex in a way that do not engage the emotional interests and reason necessary to motivate a person to be good and act ethically. This is especially the case if an average professional cannot understand what is required of her (Sull and Eisenhardt 2015; Schwartz 2002, 31). The more a code applies to an abstract reasoning process without simultaneously stimulating emotional values, which a legalistic code is bound to do, the less likely it is to do its intended job. For example, there is framework fading. A person can view a situation from a moral framework. When other considerations are added to the reasoning process, the person’s moral framework is replaced by another, such as one focused on capitalistic ideals, and all done without the person noticing the “ethical fading” (Tenbrunsel and Messick 2004). In addition, when people vote against their best interests, their emotional values cause their vote rather than rational analysis to find what works best for the individual or the society (Lodge and Taber 2013) In both situations, reason does not play the dominant role it is assumed to have. Simpler codes based on universal emotional values that exist as part of automatic reasoning processes, therefore, would more likely have a far greater effect on behavior – if the work on moral psychology and habitual behavior is correct (Rothman et  al. 2015; Wood and Runger 2016; Wood 2017). Long codes also face the challenge that the more rules they have, the more likely competing values arise, especially in complex situations in which different values and principles are at odds. For example, engineers are prohibited from disclosing confidential business information unless they have adequate consent. However, at the same time, they “shall at all times strive to serve the public interest” (NPSE Code of Conduct, as cited in van de Poel and Royakkers 2011, 39–40). It then becomes an issue of which imperative should prevail in the situation, and the involved agents will make that choice, and possibly, based upon their personal biases. In the example above, serving the public interest could be redefined in a manner the code’s authors never intended, but which the individual engineer can rationalize using the code itself. This brings us to how a moral code’s function should be understood (van de Poel and Royakkers 2011). One option is that professional codes are aspirational, and

12  The Technology’s Fine; It’s the Code of Professional Ethics That Needs Changing

219

advisory codes are merely guidelines, rather than strict, prescriptive rules (ibid.).15 There are two drawbacks to this approach. First, if codes remain as guidelines, then bias becomes almost inevitable. Interpreting rules depends on the interpreter’s personal belief, value, and principle system; hence, the interpretations often tell us more about the interpreter than they do about objective, moral reality. To avoid personal bias, the guidelines might then be treated as strict prescriptions regardless of extenuating circumstances or need for flexibility in contexts unforeseen by the code’s authors. The second drawback is the inadequately shared understanding of the code. Professionals might not know the actual code to the degree necessary to make it useful. Some have read it but no longer remember it sufficiently (Giorgini et  al. 2015; House and Seeman 2010; Schwartz 2001). Others have not read it or merely skimmed through it (ibid.). In addition, although our communication attempts might appear to the communicant as an unambiguous exchange of information, at times, information transfer fails when the recipient does not grasp the message’s content (Gorovitz 1988). Even carefully written rules face communication barriers.16 Vagueness and ambiguity, for example, often exist because of unclear, broad language, or mental differences. Writers and readers may have different sets of values, weigh them contrarily, or have inadequate shared background experiences, knowledge, definitions, critical reasoning processes, or other communication structures and bases to share interpretations. Some societies emphasize social interests over individual rights/autonomy, whereas others flip the valuation. There are dissimilarities between generations or other groups within a society. Unless conditions are optimal – two or more minds with basically the same mental states and faculties in an ideal environment – communication mistakes happen. To make the code perform as intended, i.e., make professionals better people who produce ethical technology in an ethical way, then it must be pragmatic. That is, it should inform and motivate the person at every step of the way by making her part of the moral community while obtaining an acceptable answer in a sufficient number of cases. The codes must be simple enough for constant, meaningful use, built on the professionals’ emotional interests to provide the motivation to be moral, ethical, and professional, as well as be explanatory tools to justify actions or products to others so that they understand the professional is a reasonable, moral and ethical professional acting as she should.17

15 Schwartz (2001) argues that there are eight metaphors used to designate what the code is and how it should be used.
16 Vague, broad language in codes makes it less likely that misconduct is condoned and moral behavior is shaped (Popescu 2016, 128).
17 The empirical test will then become an assessment of which approach works better to achieve the outcome of flourishing or some other reasonable end, especially in situations in which dire consequences, such as severe harm to persons, sentient beings, or the environment, are likely.


12.4  Moral Psychology: How Is Morality Possible?

Since moral codes must be useful in making general and profession-centric moral decisions, as well as in identifying when a situation is too complicated to solve without a professional ethicist's assistance, they should integrate the existing ethical foundations of Homo sapiens' moral, psychological platform:

Whether an audience accepts an account depends on the shared background expectations and understandings of the interactants. Accounts that appeal to what "everyone knows" have a higher likelihood of being accepted. (Ford et al. 2008, 364)

The goal is to pragmatically "restructure our knowledge by repurposing its constituent concepts," instead of attempting to replace how we think about what we should do and what we should be (Shtulman 2017, 246). At the very least, the codes must be built upon Homo sapiens' shared evolutionary adaptations and social learning (Hauser 2006). The codes might even be geared to what makes people comply with them: personal values, fear of discipline, and a feeling of loyalty (Schwartz 2001).

What universals make human morality possible? Let us begin with what we are not. Reason's role is more limited than some might suspect (Ross 2014; Sapolsky 2018). Objective observations and other evidence do not have the cognitive force they would if rational person models of ethics were true. Instead, data provide "good evidence" if they confirm what we already believe, and are discounted, if not ignored, when they go against our motives, hopes, fears, or desires (Sharot 2017). Making morality even less rational, emotional processes arrive first at a decision, and then our minds craft a rational justification for our existing choice (Haidt 2001). Joshua Greene states that our brains function like dual-mode cameras. Most of our thinking, including decision making, is governed by the emotion-based "automatic" mode comprised of efficient, automated programs created and developed by evolution, culture, and personal experience (Greene 2013).18 That means the desire to do as one should is generally prior to the explicitly conscious recognition of duty; hence the search for rational justification of what is already believed or decided.

Our camera minds are products of nature and nurture. The former has been developed, in part, through evolutionary adaptation (Kahneman 2011; Tetlock 2006; Maner 2017). Humans are also social animals, which requires particular general, universal values and principles of interaction, such as keeping one's promises often enough to build the trust that promise keeping requires and to sustain the institution itself (Greene 2013). Although the social values and principles are the same regardless of which society we are in, variation arises through how each society uses or ranks them (Marino 2015). Variation between personal values and principles, which are also significantly influenced by social values and principles, will also exist without the benefit or burden of being accepted by the social whole.

18 The other mode is governed by reason. When the automatic mode is unable to deal with a situation, this deliberative, flexible, and controlled mode considers the big picture (Greene 2013).

12  The Technology’s Fine; It’s the Code of Professional Ethics That Needs Changing

221

Our moral psychology can be seen in a variety of shared human traits. Moral agents need to think that they have power and value within their sphere, for example. That is, no one wants to believe that she is so vulnerable that she is at the mercy of another's whims.19 To thrive, each agent has to have sufficient self-esteem, which is a form of perceived invulnerability and valuation. By believing we function well within our circumstances, we build our self-worth and positive self-concept or identity, which in turn makes us feel empowered (Thomas 1980). To flourish in and out of the professional technology world, everyone desires to be personally powerful, in the sense of being able to effectively compete and collaborate in the socio-biotic community (Firestone and Catlett 2009).20 By effectively achieving our goals, and being cognizant of that fact, we grow our self-worth. Internal and external acknowledgments of that force increase self-esteem based on enhanced self-perceptions of our power and its effects.21

Additionally, thriving as a professional requires that we become valued community members. Operating successfully within the professional community is enabled by our constant indoctrination into our surrounding societies and the reinforcement of socially acceptable or unacceptable beliefs and behavior throughout our lives (Giorgini et al. 2015). Herd mentality is our natural tendency,22 which heavily influences how we behave: "the probability of any individual adopting it [increases] with the proportion who have already done so" (Colman 2003, 77).23 Our desire to imitate successful others can subvert our individual rationality (Bikhchandani et al. 1998; Tilly 2002). That is, some people are willing to reject what their reason tells them (Asch 1956) so that they can become or remain a herd member. Therefore, if enough people act in a specific manner, those coming into the situation are far more likely to perform in the same way in order to fit the social structure and conventions. If we do well, then we do well relative to the social groups in which we live. Only by making their values and principles ours and then working diligently to preserve them do we remain an esteemed part of our desired herd.

All professions consciously and unconsciously attempt to inculcate the profession's culture in those who are entering it. Although this can be beneficial, especially if it leads to flourishing, it can also pose a serious challenge to being an ethical person. One problem is herd mentality's ability to warp a person's evaluation of evidence. Another is that our mind tries to preserve self-esteem by self-justifying our wrong decisions rather than admitting we were wrong, and it is therefore less able to function well within the system (Ross 2014).

A practical moral code's object is to incorporate how morality is possible for us with what we ought to be and how we ought to act. This will make it an effective code for existing and potential technologies.

19 Jonathan Haidt (2012) says something much stronger when he claims that human nature is mostly selfish with an overlay of "groupishness" that promotes competition and collaboration (191).
20 Firestone and Catlett call this "personal power," which is based on strength, confidence, and competence.
21 There are a number of biases linked with the need for self-esteem and being valued. Among them is the cognitive bias of attribution error, in which successes are deemed the result of our own activities, whereas failures are blamed on bad luck or others acting inappropriately against us. Fear of failure – of lost stature, embarrassment, and lost respect – prevents change that threatens one's self-esteem from happening (Ford and Ford 2010).
22 See Asch 1951, 1955, 1956.
23 Kravetz (2017) calls it "hive mentality." He argues that hive mentality becomes stronger or weaker depending on how strongly the hive members mirror each other – through body movement, communications, attitudes, and countenance (10).

12.5  A Pragmatic Professional Code of Ethics

Pragmatic ethics24 requires drafting moral codes that incorporate and manage our universal, general, and individual irrationalities/non-rationalities if they cannot be eliminated by exposing implicit biases or using some other practical method (Sapolsky 2018). Codes not appealing to these foundations will lack sufficient motivating factors and benefit for adoption. To better achieve the code's desired results, therefore, it is better to work with the shared moral features instead of trying to overcome or remove existing values and processes and then replace them with intellectually appealing but emotionally dead regulation (Sharot 2017).

Efficient, practical codes of ethics use only a few key principles with clearly explicated fundamental values. If these values mirror both the individual's and her moral community's justified emotional interests, etc., then the code provides greater motivation and ease of use. The most effective codes build on universal moral cares, such as those described by Schwartz (2005), Haidt and Joseph (2007), and Curry et al. (2019). Engineering and other codes of professional ethics would custom-design their standards from these more general building blocks to work for their specializations' traits and needs. Any practical code's moral value standards would be simple, easy to use, and include understandable rules based on those values. In other words, the code would be designed using moral psychology as an essential component. The following rules are not ideal, but they offer some indication of what these simple principles based on moral psychology would look like:

1. First and foremost, always be a good person, and then be a good professional,
2. Do not cause unnecessary harm,
3. Do not disrespect any living thing affected by your actions, and
4. Practice good stewardship of your living world and your profession.

24 Pragmatic ethics is defined as setting a reasonable outcome, in this case flourishing lives, and then finding useful, practical ways to achieve it. The end need not be flourishing lives or any absolute, and in fact may be any goal that is reasonable, as determined by what people as social human animals are and how they interact with each other and their (natural and built) environments. Moreover, ways to achieve an outcome are evaluated by how well they work to that end. Those that work best are to be preferred, but good enough is all that is morally required.

12  The Technology’s Fine; It’s the Code of Professional Ethics That Needs Changing

223

Standards that are part of the human psychology of how morality works in society "will allow individuals to readily retain ethical guideline information…these individuals will be better equipped to handle situations calling for ethical decision making" (Giorgini et al. 2015, 133). A practical moral code like this may be more effective because it incorporates the personal values, fear of discipline, and feeling of loyalty that have been found to motivate ethical behavior (Schwartz 2001). If these standards and rules are common to and expected by the profession's herd, and associated with the prestige or dominance that makes leaders in the profession (Maner 2017), then the fear of being ostracized from the herd satisfies the discipline fear, whilst loyalty and safety arise from being part of the herd.

Pragmatic codes incorporating moral psychology have many benefits, but I will point out three. First, they eliminate some of the prohibition conflicts generated by longer, more legalistic ethical codes, e.g., disclosing confidential business information versus serving the public's interest. In the above example, many different actions can satisfy all requirements, especially since there is no impractical requirement to maximize any one value, nor is any principle absolute. The building blocks are the same for all decisions, but they can be applied in a variety of permissible ways, including ranking alternatives for the situation at hand. It then becomes a question of which permissible result from the set satisfies the individual's or group's preferences and shared morality. One might say that the simpler and more automatic the map, the better – provided the map is partially constructed from moral psychology, modified to fit the profession.

In addition, the ethics compliance, ethics training, and academic literatures consistently claim that an effective code requires those it governs to participate in writing it (Schwartz 2001, 2004). Since constant rewrites and broad participation are impractical, simplification based on the shared morality platform makes the code more likely to be adopted and used by those employing it. With a more automatic code, professionals do not have to engage in deep thought exercises to figure out all interpretations so that they can discover the only correct one. They do not always have to appeal to complex decision processes, sometimes based on abstract ideas. There is no alienation caused by legalistic codes that might appeal to rational interests but do not tie into the emotional fundamentals all people have as moral community members. People see what is in it for them, and they are motivated by their shared and understood values. Since the pragmatic code makes that link, people will believe that their moral goals are obtainable and that the code incorporates their existing ethics and morality, and they will therefore be motivated to pursue that morality (Sharot 2017).

Finally, pragmatic codes recognize that those who are in technology should not be expected to be professional ethicists for extremely complex cases. The latter understand the ethical nuances better, as well as how to think about the issue. The technological professionals merely need to recognize that their use of the moral standards and principles does not fully answer the question of what to do or to be in the situation, and then know that professional help is required. After all, we do not expect professional ethicists to be professional engineers, so why should we demand the reverse?


12.6  Conclusion

Professional morality, and morality in general, it is contended here, is like a language. Native speakers not only know their language's words and grammar, but they also know the nuances within it that make for more effective communication, items that those illiterate in that means of communication would miss. Therefore, in general, communicating in a person's native tongue by someone who also has that native language will be most effective; it works with what already exists and is part of the individual's automatic thinking process (Greene 2013). For morality, if moral psychology is correct that there are natural and learned values, principles, etc., making up the vast majority of human beings' moral platform, then those should be used rather than creating a theoretically interesting, consistent code that goes against that shared base. The former is a "language" that is natural, whereas the latter is one acquired later in life as a professional. With habituation, the latter could become an automatic thinking process but is unlikely ever to gain the same power and ease of use as the one based on genetics and social learning from the earliest age. Effective ethics training throughout a professional career, of course, will help alleviate this problem, as long as it centers on developing automatic reasoning processes.

A well-intentioned but misguided approach has been taken when it comes to thinking about the morality of emerging technologies. We do not need to develop more detailed guidelines and rules for codes of professional ethics. To do so before the technology exists is merely to engage in a guessing game as to what the technology will be. At times, there will be a higher probability that the moral code can handle the actual results. However, if the technology is truly innovative, then by definition, it will pose challenges that few will see. If the rules are written after the technology is developed, then, at best, this is a catch-up game prone to second-guessing and "what ifs" that tend to create overly legalistic codes, which stifle innovation and morality rather than nurturing them.

Although counter-intuitive for those who live by carefully ordering or creating technology to control nature, the most useful approach is effective simplification. If we build codes of professional ethics on the simple foundation of values and principles we as social animals share, incorporate them in a way that fits each profession, and nurture these in professionals, then wrongful actions and bad character should decrease. Such a code will not always be successful in stopping those truly bent upon being evil or doing wrong, but no code – written or not – has ever been able to perform that miracle.

12  The Technology’s Fine; It’s the Code of Professional Ethics That Needs Changing

225

References

American Physical Society. 2015. 15.1 Statement on Civic Engagement. Accessed 29 Oct 2019. https://www.aps.org/policy/statements/15_1.cfm.
Asch, Solomon E. 1951. Effects of Group Pressure on the Modification and Distortion of Judgments. In Groups, Leadership and Men, ed. H. Guetzkow, 177–190. Pittsburgh: Carnegie Press.
———. 1955. Opinions and Social Pressure. Scientific American 193: 31–35.
———. 1956. Studies of Independence and Conformity: A Minority of One Against a Unanimous Majority. Psychological Monographs 70 (9): 1–70.
Association for Computing Machinery. 2018. ACM Code of Ethics and Professional Conduct. Accessed 27 Oct 2019. https://ethics.acm.org/code-of-ethics/.
Behrendt, Marc. 2017. Reflections on Moral Challenges Posed by a Therapeutic Childlike Sexbot. In Love and Sex with Robots, ed. Adrian David Cheok and David Levy, 128–137. Cham: Springer.
Bikhchandani, Sushil, David Hirshleifer, and Ivo Welch. 1998. Learning from the Behavior of Others. Journal of Economic Perspectives 12: 151–170.
Blizard, Deborah. 2017. The Next Evolution: The Constitutive Human-Doll Relationship as Companion Species. In Love and Sex with Robots, ed. Adrian David Cheok and David Levy, 114–127. Cham: Springer.
Boddington, Paula. 2017. Towards a Code of Ethics for Artificial Intelligence. New York: Oxford University Press.
Boltuc, Piotr. 2017. Church-Turing Lovers. In Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, ed. Patrick Lin, Ryan Jenkins, and Keith Abney, 214–228. Oxford: Oxford University Press.
Bosker, Bianca. 2014. Google's New A.I. Ethics Board Might Save Humanity from Extinction. Huffington Post. https://www.huffingtonpost.com/2014/01/29/google-ai_n_4683343.html. Accessed 26 Nov 2018.
Cheok, Adrian David, Kasun Karunanayaka, and Emma Yann Zhang. 2017. Human-Robot Love and Sex Relationships. In Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, ed. Patrick Lin, Ryan Jenkins, and Keith Abney, 193–213. Oxford: Oxford University Press.
Colman, Andrew. 2003. Oxford Dictionary of Psychology. New York: Oxford University Press.
Curry, Oliver Scott, Daniel Austin Mullins, and Harvey Whitehouse. 2019. Is It Good to Cooperate? Testing the Theory of Morality-as-Cooperation in 60 Societies. Current Anthropology 60 (1): 47–69.
Danaher, John. 2017. The Symbolic-Consequences Argument in the Sex Robot Debate. In Robot Sex: Social and Ethical Implications, ed. John Danaher and Neil McArthur, 103–131. Cambridge: MIT Press.
Dent, Eric B., and Susan Galloway Goldberg. 1999. Challenging Resistance to Change. The Journal of Applied Behavioral Science 35 (1): 25–41.
Elder, Alexis. 2017. Robot Friends for Autistic Children: Monopoly Money or Counterfeit Currency? In Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, ed. Patrick Lin, Ryan Jenkins, and Keith Abney, 113–126. Oxford: Oxford University Press.
Firestone, Robert, and Joyce Catlett. 2009. The Ethics of Interpersonal Relationships. London: Karnac Books Limited.
Ford, Jeffrey D., and Laurie W. Ford. 2010. Stop Blaming Resistance to Change and Start Using It. Organizational Dynamics 39 (1): 24–36.
Ford, Jeffrey D., Laurie W. Ford, and Angelo D'Amelio. 2008. Resistance to Change: The Rest of the Story. Academy of Management Review 33 (2): 362–377.
Fukuyama, F. 2002. Our Posthuman Future: Consequences of the Biotechnology Revolution. New York: Farrar, Straus and Giroux.
Giorgini, Vincent, et al. 2015. Researcher Perceptions of Ethical Guidelines and Codes of Conduct. Accountability in Research 22 (3): 123–138.
Goldman, Arthur H. 1980. The Moral Foundation of Professional Ethics. Totowa: Rowman & Littlefield Publishers.
Gorovitz, Samuel. 1988. Informed Consent and Patient Autonomy. In Ethical Issues in Professional Life, ed. Joan C. Callahan, 182–188. Oxford: Oxford University Press.
Greene, Joshua. 2013. Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. New York: The Penguin Press.
Haidt, Jonathan. 2001. The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment. Psychological Review 108: 814–834.
———. 2012. The Righteous Mind: Why Good People Are Divided by Politics and Religion. New York: Random House.
Haidt, Jonathan, and Craig Joseph. 2007. The Moral Mind: How Five Sets of Innate Intuitions Guide the Development of Many Culture-Specific Virtues, and Perhaps Even Modules. In The Innate Mind, ed. Peter Carruthers, Stephen Lawrence, and Stephen Stitch, vol. 3, 367–391. New York: Oxford University Press.
Hauser, Mark. 2006. Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong. New York: HarperCollins Publishers.
House, Mark C., and Jeffrey Seeman. 2010. Credit and Authorship Practices: Educational and Environmental Influences. Accountability in Research 17: 223–256.
IEEE. 2017. IEEE Code of Ethics. https://www.ieee.org/about/corporate/governance/p7-8.html. Accessed 26 Nov 2019.
Institute for Certification of Computing Professionals. 2021. ICCP Code of Ethics. https://www.iccp.org/uploads/8/1/2/9/81293176/iccp_code_of_ethics.pdf. Accessed 26 Nov 2018.
Jasanoff, Sheila. 2016. The Ethics of Invention: Technology and the Human Future. New York: W.W. Norton & Company.
Kahneman, Daniel. 2011. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Kaptein, Muel, and Mark S. Schwartz. 2007. The Effectiveness of Business Codes: A Critical Examination of Existing Studies and the Development of an Integrated Research Model. Journal of Business Ethics 77: 111–127.
Kravetz, Lee Daniel. 2017. Strange Contagion: Inside the Surprising Science of Infectious Behaviors and Viral Emotions and What They Tell Us About Ourselves. New York: Harper Wave.
Levy, David. 2007. Love and Sex with Robots: The Evolution of Human-Robot Relationships. New York: Harper Collins.
Lodge, Milton, and Charles S. Taber. 2013. The Rationalizing Voter. Cambridge: Cambridge University Press.
Maner, Jon K. 2017. Dominance and Prestige: A Tale of Two Hierarchies. Current Directions in Psychological Science 26 (6): 526–531.
Marino, Patricia. 2015. Moral Reasoning in a Pluralistic World. Quebec: McGill-Queen's University Press.
Meacham, Darian, and Matthew Studley. 2017. Could a Robot Care? It's All in the Movement. In Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, ed. Patrick Lin, Ryan Jenkins, and Keith Abney, 97–112. Oxford: Oxford University Press.
National Academy of Engineering. 2006. Engineering for the Developing World. http://www.engineeringchallenges.org/14373/nextsteps/7356.aspx. Accessed 9 Oct 2019.
Polak, Paul, and Mal Warwick. 2013. The Business Solution to Poverty. San Francisco: Berrett-Koehler Publishers, Inc.
Popescu, Ada-Iuliana. 2016. In Brief: Pros and Cons of Corporate Codes of Conduct. Journal of Public Administration, Finance, and Law 9: 125–130.
Potts, W., J. Manning, E. Wakeland, and A. Hughes. 1994. The Role of Infectious Disease, Inbreeding and Mating Preferences in Maintaining MHC Genetic Diversity: An Experimental Test. Philosophical Transactions of the Royal Society of London B: Biological Sciences 346 (1317): 369–378. https://doi.org/10.1098/rstb.1994.0154.
Riek, Laurel D., and Don Howard. 2014. A Code of Ethics for the Human-Robot Interaction Profession. Proceedings of We Robot. http://robots.law.miami.edu/2014/wp-content/uploads/2014/03/a-code-of-ethics-for-the-human-robot-interaction-profession-riek-howard.pdf. Accessed 26 Nov 2018.
Robison, Wade L. 2017. Ethics Within Engineering: An Introduction. London: Bloomsbury Academic.
Ross, Howard J. 2014. Everyday Bias. London: Rowman & Littlefield.
Rothman, Alexander J., et al. 2015. Hale and Hearty Policies: How Psychological Science Can Create and Maintain Healthy Habits. Perspectives on Psychological Science 10 (6): 701–705.
Sapolsky, Robert M. 2018. Behave: The Biology of Humans at Our Best and Worst. London: Penguin Books.
Scheutz, Matthias, and Thomas Arnold. 2017. Intimacy, Bonding, and Sex Robots: Examining Empirical Results and Exploring Ethical Ramifications. In Robot Sex: Social and Ethical Implications, ed. John Danaher and Neil McArthur, 247–260. Cambridge: MIT Press.
Schwartz, Mark. 2001. The Nature of the Relationship Between Corporate Codes of Ethics and Behavior. Journal of Business Ethics 32: 247–262.
———. 2002. A Code of Ethics for Corporate Code of Ethics. Journal of Business Ethics 41: 27–43.
———. 2004. Effective Corporate Codes of Ethics: Perceptions of Code Users. Journal of Business Ethics 55: 323–343.
———. 2005. Universal Moral Values for Corporate Codes of Ethics. Journal of Business Ethics 59: 27–44.
Sharot, Tali. 2017. The Influential Mind: What the Brain Reveals About Our Power to Change Others. New York: Henry Holt and Co.
Shtulman, Andrew. 2017. Scienceblind: Why Our Intuitive Theories About the World Are Often Wrong. New York: Basic Books.
Sull, Donald, and Kathleen M. Eisenhardt. 2015. Simple Rules: How to Thrive in a Complex World. New York: Houghton Mifflin Harcourt Publishing Company.
Tenbrunsel, Ann E., and David M. Messick. 2004. Ethical Fading: The Role of Self-Deception in Unethical Behavior. Social Justice Research 17 (2): 223–236.
Tetlock, Philip E. 2006. Expert Political Judgment: How Good Is It? How Can We Know? Princeton: Princeton University Press.
The Institute of Electrical and Electronics Engineers. 2018. IEEE Code of Ethics. https://www.ieee.org/about/corporate/governance/p7-8.html. Accessed 26 Nov 2016.
Thomas, Laurence. 1980. Sexism and Racism: Some Conceptual Differences. Ethics 90 (2): 239–250.
Tilly, Charles. 2002. Stories, Identities, and Political Change. Lanham: Rowman & Littlefield.
Van de Poel, Ibo, and Lamber Royakkers. 2011. Ethics, Technology, and Engineering: An Introduction. Chichester: Wiley.
Wood, Wendy. 2017. Habit in Personality and Social Psychology. Personality and Social Psychology Review 21 (4): 389–403.
Wood, Wendy, and Dennis Runger. 2016. Psychology of Habit. Annual Review of Psychology 67: 289–314.

Dennis Cooley is Professor of Philosophy and Ethics at North Dakota State University and Director of the Northern Plains Ethics Institute. His research includes publications in bioethics and biotechnology, death and dying, and business ethics. Particular research interests include transgenic organisms, bioresearch and technology affecting vulnerable communities, and developing a socio-biotic community pragmatism for human interactions within their environments.

Chapter 13

On Leaving and Receiving Traces: Thoughts on an Un-professional Code of Ethics

Adam Briggle

Abstract  Not all ethics codes are meant for professionals. A case in point is the "leave no trace" ethic that governs the behavior of visitors to many of the U.S. National Parks. The basic principle is to have little or no impact on the land, including animals, ecosystems, and geological formations. I argue that the ethic is necessary but also incomplete – it is only one side of the coin. After all, if the ideal is exhausted by the commandment "thou shalt have no impact," then the most ethical thing to do is to stay home. What is left unspoken is the idea that the visitor is supposed to take or receive a trace. He or she should be transformed, elevated, or improved in some way by the experience of visiting the National Parks. Here, I consider a "receive a trace" ethic and its prospects in our contemporary high-tech society. This is a preliminary sketch that draws from resources in environmental philosophy and the philosophy of technology. It is based on my own experiences and foregrounds the role of narrative because an ethic of receiving a trace is ultimately about meaning and purpose, and stories are how humans do sense-making. My goal is to survey the terrain such an ethic might encompass rather than to develop a systematic or comprehensive code of ethics.

Keywords  Environmental ethics · Leave no trace · National parks

13.1  Introduction

In the spring and summer of 2017, I took a sabbatical and set out with my children, Max and Lulu, to see America's National Parks and Monuments. We visited 18 in total, all in the west – from Organ Pipe Cactus to Yellowstone and from Carlsbad Caverns to the Channel Islands. We pulled our pop-up camper behind our minivan. We hiked, camped, peed outdoors, ate beans, and bathed infrequently in cold water.


Once, we watched a moose watching us at the headwaters of the Colorado River. Each morning, early, I wrote about our adventures and my thoughts in a purple spiral journal while sipping strong camp coffee on a camp chair in the moon shadows of saguaro or pinyon. In keeping with our un-professional theme, that journal serves as the primary resource for this essay.

At each park, Max and Lulu participated in the Junior Ranger Program, which entails filling out a booklet of questions and activities related to the park's history, ecology, and geology. Upon completion of the tasks, they reported to a Park Ranger and received a badge or patch as an official Junior Ranger. They would hold up their right hands and take an oath. The oath was different for each place and each Ranger, but the spirit was the same: it was a solemn promise to care for the land.

It's a hard promise to keep, especially as people flock to the National Parks. In the 1950s, when Ed Abbey was already lamenting "industrial tourists," his beloved Arches National Park received about 50,000 annual visitors. According to the National Park Service records on visitor statistics, that tally is now 1.5 million (National Park Service 2019). Total annual "recreational visitors" to all National Parks in his time was around 70 million. Now it is over 318 million. From 1855 to 1864, the first decade of its national fame, Yosemite received only 653 tourists. In 2016 alone, the number had increased to 5.2 million – that's the first decade's worth of visitors every hour. We are more numerous, wealthier, and have nicer cars, campers, and roads. It's a damn shame. Allow me to do the National Park Service a favor and tell you that all those places are ugly. Terrible. You don't want to go there. Believe me. Stay away.

Any honest account of the National Parks today has to start here with a sober reckoning of our historical moment. From its origin in 1916, the National Park Service was saddled with a difficult dual mission, if not an outright unworkable contradiction. It was to conserve natural beauty and wilderness and "provide for the enjoyment of the same in such manner and by such means as will leave them unimpaired for the enjoyment of future generations."1 Sure, when we were poorer and our numbers were smaller, this was less of a problem. And still, today, if you go to the right places (backcountry) or at the right times (not during summer), you can find the solace necessary for the enjoyment of wild and beautiful nature. But the plot is basically a tragedy – the clumsy lover has smothered the target of his affection. How else could you describe a "bear jam" as people gawk and snap pictures from their SUVs at a lumbering grizzly? This is a pathological kind of love.

Once, in Arches, Abbey chucked a rock at a rabbit, which landed square between its eyes, killing it instantly. It made him feel like he was no longer "a stranger from a different world" (Abbey 1968). He was basically alone in the park. One man, or a few, can hardly ruin things. Even if it was a senseless and stupid act, it was negligible. Abbey might symbolize the pivot from a time when humans were small and nature was big to our present age, when things are reversed. It's the Anthropocene, stupid.2 It calls, among other things, for a new ethic – one that is, sadly, less wild than the ethic granted to Abbey.

In what follows, I develop my thoughts about a code of ethics through the use of personal narrative. I recognize that this is unusual. Indeed, in many ways, to codify something is to depersonalize it and organize it into a system. Codified law, for example, is presented in statutes that stand apart from any particular judgment. I think of ethics codes in a different light, however. My sense is that an ethics code is basically a checklist, that is, a set of reminders about what might be important to consider as one faces a novel and challenging context that requires a decision about what constitutes the right thing to do (see Gawande 2011 for further support of this idea). In other words, I see ethics codes as rules of thumb or guides, not algorithms or recipes that spell out a formula for right action that anyone can simply follow.

This view of ethics codes puts the particular person back in the picture in a central way. The items on the checklist are usually the general principles stated in a depersonalized and context-free way. Yet, for the checklist or code to function, someone must think about how these general principles apply to a particular situation. After all, a longstanding conundrum in ethics is what to do when two or more general principles seem to clash in a particular case (see, for example, Jonsen and Toulmin 1990). So, my foregrounding of personal narrative and contexts highlights these important dimensions of ethics codes. My approach limits my focus to just a few parks within the U.S. context. Though beyond the scope of this paper, it would be interesting for others to draw from their own experiences to see whether and how this view of ethics codes might work in different contexts.

1 This is the language from the 1916 National Park Service Organic Act.
2 Compare our reality with that of Emerson, who wrote in his 1836 Nature that the operations of man "taken together are so insignificant, a little chipping, baking, patching, and washing, that in an impression so grand as that of the world on the human mind, they do not vary the result" (Emerson 2017, p. 8).

13.2  Echo Canyon

The visitor center parking lot at Chiricahua National Monument in southeastern Arizona was full when we arrived after a breakfast of oatmeal at our campsite. But we squatted in the handicap spot while a Park Ranger grabbed us a map and our Junior Ranger books. Then up the winding mountains, where, as a boy in the 1830s, Geronimo and other young Apache would race up the steep slopes holding a mouthful of water and then race down to spit out their water at the feet of the warriors to demonstrate that they could breathe through their noses and go without water for long distances. They had to be able to hunt the fleet-footed deer without rest until it was killed (Geronimo 2005). We, though, had the minivan and gallons of water, which we drank liberally following the Ranger's orders. The first rule of the National Parks: Thou shalt be safe.


At the top of the mountain, it felt as though the sky was an open jar above us, and the whole cosmos would soon fall in. Below us was a kneaded landscape – the basin and range province of the American Southwest looking like a tablecloth some raving God had pushed at one end to create periodic folds on the sun-scorched table of rock. Twenty-seven million years ago, the Turkey Creek volcano erupted right here so violently that it shot ash tens of miles into the air at supersonic speeds. Cubic miles of molten rock burst out and draped the land in pyroclastic flows. The explosion ripped a 12-mile-wide hole in the earth – the ash flash-froze into a layer of rock called rhyolite. Then water and wind sculpted a landscape of towering pillars, balancing rocks, pinnacles, and hoodoos. It left a voodoo land, eerie, inviting – knuckles and fingers and odd ducks and slender top-hatted gentlemen made of rock waiting for you to come explore. We sat there, windows open, sprawled in the minivan munching on granola bars, reading these facts in our Jr. Ranger books so that we could earn badges for our vests. The second rule of the National Parks: Thou shalt learn about the place and share what you have learned.

Then we set out for a hike that would first go down, then come up. Max had just turned 9. Lulu was 4. I should have thought more about the way back up – with an emphasis on UP. I should have foreseen the tears, the sore legs, the whining, the children on my shoulders for the return ascent. But I didn't think about it. Echo Canyon beckoned us. Now was the only time imaginable – it was more than enough. The kids skittered around the curious rock formations in a kind of frenzy. There were only a few German tourists and one lonely Park Ranger. Otherwise, the place was ours. Max was a mountain lion posing, preening, relaxing his jaws into a growl. Lulu was Rapunzel pining for the world below her as she dramatically paced a little rock tower that the Civilian Conservation Corps (CCC) had artfully built during the Great Depression.

"Hey, stay on the trail!" – suddenly the stern voice of the Ranger above, chastising my rock-hopping kids. The third rule of the National Parks: Thou shalt leave no trace. Staying on the trail is an example of minimizing one's impact on the land. We learned all about other examples in our Junior Ranger books, including packing out our garbage (not littering), not feeding the wild animals, and not making loud noises.

Time to head down, bend around the corner out of sight into Echo Canyon. No one is around – a kind of electricity builds in my bones. The conditions are ripe – I can feel it. We eat a lunch of sandwiches and chips inside a natural grotto carved at a 45-degree angle into huge slabs of rhyolite. Perhaps we should turn around, but the trail has us all in a trance – just one more corner, just one more…we want to see what we can see. Finally, we stop at a hairpin curve and sit down, Lulu on my lap and Max leaning up against my side. We chat in good spirits, but I am inspired by the silence around us – there is not even the distant sound of an airplane. I whisper: "Let's be absolutely quiet just for one minute and see if we turn into stones." To my surprise, the kids oblige with a simple nod as if to say, that's a good idea. Our bodies relax. I stare ahead and up at the side of a cliff facing us, where an enormously tall and slender ponderosa pine tree slashes a thin line against the blue sky. There isn't even the whisper of a breeze. Then down to our right, we hear it: a hollow WHOOP-WHOOP-WHOOP sound. We turn to see a gigantic raven, and the sound is the beating of his powerful wings against the pliant air – WHOOP-WHOOP – with just five strokes of his wings, he has defied gravity and reached the teetering topmost branch of the ponderosa, where we now see a tuft of black set against its cinnamon bark. The kids do not break their vow of silence – they just stare wide-eyed.

Henry David Thoreau once wrote about a hike up a mountain in Maine – "Contact! Contact!" he practically yelled (Thoreau 2013, p. 8). I understood. Yes, contact. Ralph Waldo Emerson talked about how the currents of the Universal Being can flow through you – about forging a "more original relation to the universe" (Emerson 2017, p. 10). John Muir talked about how people do not live by bread alone (Muir 2017). They require spiritual as well as material sustenance. I understood it all. We had made contact there in Echo Canyon. The fourth rule of the National Parks: Thou shalt be transformed.

The rest of my essay is a reflection on this dimension of the ethic. It is the other side of the coin, the trace that is left on your soul as you practice not leaving traces in the world. The key to this is mindfulness and receptivity. Only by slowing down and listening can you be open to the transformative potential of these majestic natural areas. As the poet Rainer Maria Rilke says upon seeing the statue of Apollo's torso: Du musst dein Leben ändern! "You must change your life!" (Mitchell 1989). Muir wrote that "the mountains are fountains of men"; they have the ability to mold us into our better selves (Duncan and Burns 2009). But how does this happen?

13.3  Trailhead: South Kaibab

We are at peace, idly chatting, sketching in our Junior Ranger books in the shade of the pinyon. Before us spreads the unspeakable manifolds of the Grand Canyon. A crow dives and is lost against the rubble of the Bright Angel Shale. It is warm – hard to believe that two days ago we had watched the snow falling upward out of the canyon, pushed out by the mysterious convections of such an enormous gash in the earth's crust. We had hiked down as far as a lumbering entourage of mules at a hairpin turn and then back here for lunch.

I have that feeling again. I am starting to remember what it is like at the center of things. There is so much detritus to scrape through to get around to making contact. We so easily lose the way. It doesn't take much to tilt us back into acedia, a state of spiritual sloth that can manifest as distraction, hyperactivity, or laziness. For example, a wave of tourists arrives from the shuttle at the trailhead. A woman can be heard yakking on her cellphone, "Well, it was such a long drive up here that Harry told me it had better be one hell of a hole… What??... No, I said that he said it had better be impressive, you know…ha ha!"

The fourth rule is the most important. But it is also the one left largely unspoken, and it is certainly the most difficult to follow. Park Rangers talk about staying on the trail, but not about veering out of the ordinary. They help you learn scientifically about nature, but not morally from nature.

You might say that the trouble with America's national parks has always been Americans. Early visitors to Yellowstone, for example, poured soap into the geysers and stuffed them full of logs to watch the deadwood get launched skyward. At Mesa Verde, they'd set off sticks of dynamite just to scare away the rattlesnakes, which had the unfortunate side effect of interrupting all the looting of artifacts. By 1889, Yosemite was already taking on the tourist-trap, carnival atmosphere that had ruined Niagara Falls. Roadway tunnels had been carved through some of the sequoias, the valley was littered with tin cans, and on summer nights, the cheering crowds were treated to a "firefall" where a kerosene-fueled bonfire was sent (also with the use of dynamite) careening over the cliff face of Glacier Point (Duncan and Burns 2009). As I write this now, a partial government shutdown has left many of the parks woefully understaffed. The trash and human waste are accumulating, and people are driving off-road. The Joshua trees, already victims of climate change, are being vandalized. The rocks are tattooed with graffiti.

But even if we set aside these obviously destructive behaviors, we are left with the far more common obliviousness of tourists who seem to have no idea how to get their souls resonating in tune with the grandeur of the places they are experiencing through their windshields and cameras. I have a friend who calls this "existential illiteracy," the condition of not knowing how to be in the world.

As with any code of ethics, there may be internal contradictions going on here. Take, for example, the first commandment to be safe. Muir seemed at his best following the fourth rule of transcendence by violating the first rule. He famously swayed in the tops of the redwoods during a thunderstorm. He hiked to the edge of the "dangerously smooth and steep" waterfall at Yosemite, "concluded not to venture farther…but did nevertheless" (Duncan and Burns 2009). He was nearly crushed in an avalanche and nearly drowned on another occasion.

More generally, I think of Geronimo and the other indigenous peoples who were dispossessed in order to create the wilderness enshrined in so many of our National Parks (Spence 1999). They certainly did not live by a "leave no trace" ethic because they lived on the lands now protected from any material uses. Can we be materially disconnected from the land and truly develop a spiritual connection to it? Or must we face the existential threat of failed hunts to really tune into the cosmos? If you are a tourist with a camper full of pop tarts, things may be too light and soft to leave much of an impression on your soul. I think again of Abbey in Arches when only a few brave and hardy souls ventured out on those dark dirt roads. Once, he got lost with no water and only barely escaped a hot, torturous death on a god-forsaken rock. Now that kind of experience is going to leave a mark on your soul. But it is precisely the kind of experience the National Park Service wants you to avoid because it is so dangerous.


13.4  The Chisos Mountains

In the U.S., the National Parks were conceived of as grand experiments in democracy. They are to be something like the palatial gardens of old Europe, except wild and open to everyone. This often leads to the unfortunate assumption that democracy means all values are equal, as if we can equate that woman yakking on her phone with Muir swaying in the trees. There is something elitist about the fourth rule – it requires distinguishing the noble from the base and aspiring to loftier states of being. This quest can be compatible with some versions of democracy, but the American version is often too libertarian for this.

The National Parks are places of supreme natural beauty and significance. They are a testament to the fact that not all of nature is created equal. To get to Big Bend, for example, we drove through the desolate flats of the Permian Basin, where we pump the oil that drives us, if we want, away from those sacrifice zones or the overstuffed cities into such sublime locales as the Chisos Mountains, the most glorious place in all of Texas. The National Parks are not democratic – they are nature's royalty. No wonder Americans don't know how to interact with them.

Contradictions abound. The National Parks are secular temples. They are holy sites managed by scientists. There is a spot on the trail up the Chisos where you can turn back and see the faraway Chihuahuan Desert framed as a waving mirage through the famous 'keyhole' formation of rocks. It's just as elevating as the stained glass in any cathedral. The parks want us to be pilgrims on a spiritual quest, not tourists. But there is no religious authority, no common dogma to enforce this posture. No standard rituals to follow. If you want, just come and treat the Grand Tetons like Disneyland…a place for some fun and recreation, a place to snap selfies to post on Facebook. There is no priest and no pulpit to judge you, only the stoic mountains behind your grinning mug. They couldn't care less.

More contradictions, this time between the second and fourth rules. When we are told to learn about the parks, the kind of knowledge intended is usually scientific. The modern scientific worldview post-Newton and Darwin holds that nature is matter in motion and species are shaped by blind forces. There is no purposiveness. As noted above, you can learn about nature, but can you learn from it?

Now, there is not necessarily a contradiction here between learning and transformation. Indeed, this is quite tricky, because learning the science is absolutely vital to those moments of spiritual contact. What makes the Grand Canyon so grand is the deep time on display, and we only know this because of geology (see Pyne 1999). Similarly, the Chisos Mountains took on new layers of power and meaning as we learned about their geological origins and their interactions with the vast desert below. The wildlife biologist and philosopher-poet Aldo Leopold (1949) argued that science opens our eyes to the wonder, beauty, and value of nature. An ugly, useless desert becomes a vital, thriving community when we take a closer look through the lens of ecology and other sciences. Even the Permian Basin is full of treasures if we would stop and study it, rather than zoom through at 70 mph. Yet, Leopold also recognized that these scientific insights have to be brought to life through poetic prose. This is why Rachel Carson's landmark Silent Spring (1962) was so effective. The science is vital, but if it is left to just the dry facts, it can leave us disconnected. We have to get plugged in. For this to happen, the science needs to be brought to life through stories that resonate in our bones – after all, we are narrative creatures.

13.5  The Emma Dean

Our pop-up camper is called the Emma Dean. Lulu said that's a terrible name, and Max agreed. But I told them that Emma Dean was the name of John Wesley Powell's wife. The man who lost an arm in the Civil War and went on to lead the U.S. Geological Survey and practically invent American ethnology. He was the first to successfully navigate down the Colorado River through the Grand Canyon, and when his two-armed companions would rest after another harrowing day of rapids, he would clamber up the sheer faces of rock to record notes. His wooden boat was named the Emma Dean (Powell 2013). Now the name didn't seem so bad.

But it is ironic: one mass-produced pop-up camper (with an air conditioner, though I swear I have never used that feature!) out there in twenty-first-century R.V. America on the highways vs. the wooden boat that braved the rapids before the Glen Canyon and Hoover dams were even a twinkle in an engineer's eye. The point is that we may no longer have the material culture or the moral character required to receive the transcendent gifts our great natural places might want to give to us. Perhaps we must make do with other kinds of gifts: Powell was lucky to get out of the Canyon alive; I was lucky to get out of there with only 100 bucks of souvenirs.

The Emma Dean has a roof that pops up and two beds that stick out about 5 feet above the ground. It has a three-burner propane stove, a little sink, a heater, a little table and couch, and lights and outlets (even a mini-fridge) if you can get yourself to an electrical hookup. When I got it all set up at our first camping site on the Rio Grande – with Mexican cows grazing in the reeds on the other side – it occurred to me: just how is this set-up affording us a chance for that original relationship with the universe? Perhaps we had just hauled down behind the minivan the very thing that would thwart our ability to follow rule number four.

Let's grant that humans are technological creatures. So, being transformed doesn't require us to go naked. But let's also grant that some kinds of technologies do indeed get in the way of following the fourth rule. That leaves the question: how much and what kinds of technology are needed for making contact like Thoreau did? On this question, I was intrigued to see just how many different answers people gave by observing their equipment and their behaviors. Some had massive R.V.s, and others had only a hammock. No one, including the Rangers, talked about what constitutes appropriate technology…all answers to this vital question were given implicitly in the form of assumptions and unexamined ideals. I know a guy who will never sleep in a camper because that's not "the real deal," but he will stream movies on his phone at night in his tent.

Minimalism should be the rule of thumb, which indeed casts grave doubts on the wisdom of the Emma Dean. I felt as if the Emma Dean was our space station and our hikes were like space walks into the black void of the wilderness, never too far from home base, always tethered. We were crammed to the gills with stuff: bags of medicine, clothing, provisions, jugs of water, lights, batteries, first-aid kits, cooking gear. We settlers invent civilization anew everywhere we set up camp.

Recognizing this, I tried to manufacture limits as best as I could. I avoided spots with electrical hookups. I rarely used the heater, never the blanket warmers (a ridiculous bit of luxury). We never used a generator; generators are noisy and foul machines that should be banned from National Parks. As I gained experience, I found the urge for primitivism growing. We avoided the stovetop as much as possible, preferring to cook over the open fire, especially when stoked by juniper wood. We dragged our mattresses outside to sleep under the stars on the north rim of the Grand Canyon. Eventually, we ditched the Emma Dean altogether and went tent camping. It made things more difficult in valuable ways.

My strictest rule was: no electronic screens. The average American adult spends 10 hours daily staring at screens. Drop all of this, and it is painful at first (the junkie needs a fix), but then clarifying and liberating. For weeks at a time and even on long road trips, we had no tablets, cell phones, or televisions. I only used my phone for pictures, but I probably took too many photos. The camera can invite one to notice things with more care, but that requires a disciplined kind of use. I found that, mostly, the camera interfered with a more direct contact or a more original relationship. Digital cameras, with their near limitless storage capacity, suck one into the screen to review the pictures, even add filters and special effects to them. This is the allure of our hyper-reality, which can make nature seem drab by comparison.

But the Emma Dean did have one advantage in terms of fostering adherence to the fourth rule. When the wind picks up, she starts rocking around. Max wakes up, frightened. "Could the camper flip over?" he asks. "No," I say. I hope that I sound certain of that answer because, in reality, I am not certain. It does feel like we might get capsized. The wind rifles through the mesh tenting all around the camper. The beds are suspended in mid-air, and that makes it feel even more like sleeping in the elements than if we were just lying outside on the ground under the stars. At least then, the ground would be directly underneath us. As it is, we are surrounded by the wind. It's like sleeping – or not – on kites up in the storm. I lie awake, wide-eyed. Contact! Contact! I am thinking – too much contact! Lulu, however, got a great night's sleep.



13.6  The Garden

The fourth rule isn't a rule. It's the pilgrimage from the profane to the sacred. Crossing the Permian into Big Bend. Crossing the threshold from the street to the Cathedral. Leaving the day-to-day distracted consciousness for the sublime. The profane world makes the pilgrimage possible, and we must honor that fact. But the sacred realm is the reason, the end, the purpose of our being. We so easily forget that. The National Parks are there to remind us. Inside all of those cars idling at the entrance to Rocky Mountain National Park are human souls yearning for a way to connect with the Earth. If we could pull apart the minivan and watch all its pieces wind back in time, we would find them dunking eventually back into the secret pockets of the Earth. It is such a mediated experience of our home planet. The quest is to get from there to something more direct and deeper.

Convenience is our way of life. I mean that literally, as a "meeting together." The computer I am typing on now is the meeting together of innumerable people with specialized tasks assembling materials from far-flung places. All of those materials are the hidden means that give me the commodity. So too with a package of hamburger at the grocery store. The cool air from my air conditioner. We convene around us a life of enjoyable goodies, detached from the places that made them possible. Geronimo did not live this kind of life. He lived a life in a place. That is, until his people were dispossessed. It little matters where we live – what "land" we happen to pave over and drive along for our daily commute – because our sustenance is convened from all around. Everywhere is, then, turned into the same nowhere. Except for special places like the National Parks that retain their unique character against this homogenized background.

The upshot is that we take for granted a radical separation of ends and means. This tends to deaden our sense of place. And it kills practices and traditions that are simultaneously means and ends. The hunting of deer. The preparation of the meal from scratch. The making of shelter, jewelry, and clothing. The ritual dance. Compare that to grocery shopping, the frozen dinner or fast-food meal, renting an apartment, buying a necklace online, and "clubbing." For cultures not beholden to convenience, life may be more difficult in some ways, but the sacred and the ceremonial are never far away.

For us, though, this is a problem. Where is deeper significance to be found? Neither in the foreground of commodities – the new toys from Target or the new car in the driveway – nor in the background of factories and mines from which our conveniences ultimately stem. We are awash in profanity: disposable diapers, strip malls, parking lagoons, internet porn. So here we are. Cynical. Comfortable. Untethered to tradition, practice, community, and place. Tethered to more desires that become more needs. Distracted and flitting episodically across an infinity of information. Worried. Amused. Uncertain. Spiritually starved. It is little wonder millions of us turn our minivans in the direction of these magical places. But aren't they designed to fail us and for us to fail them? We will think of them as objects of consumption. And they will think of us as an army of invading foreigners. We are aliens. Everything we do there is suboptimal simply by virtue of our presence. Leave no trace is the ethic of ghosts.

Further, if this is our malaise, that life has gotten all too light and easy, then the solution wouldn't be to visit places where the ideal behavior is to be as light and as invisible as possible. Thoreau's line about making contact was in contrast to the polite shuffling through exhibits in a museum, with their signs reminding us not to touch anything. Maybe the fourth rule points us to a return to the land in a more material and economic sense. This is why Abbey killed the rabbit: to be no longer estranged from the land, to become bloodily and vitally woven in with the fate of its living things. It's why Muir scaled the sequoia. But this is hardly conduct becoming of a modern-day recreational tourist. There are too many of us.

And the National Parks are so far away. I was privileged to visit many of them, but I am back now with the rest of us in the workaday world. It almost seems like a dream now, to have had the leisure time to strike out on a hike for points unknown with just a sack lunch. I realize that "roughing it" has become a privilege of the leisured class. Thoreau already spotted this dynamic: we set a trap to catch comfort and freedom, but we got our own legs caught in it. I was able to escape the trap for a while, and I think I got what I needed. A recharge. A reorientation. A reminder.

By their very nature, of course, the National Parks are not places to stay and make a living. The contact made there will be fleeting. Treat it like the spark that quickly vanishes but can, if carefully tended, ignite a more enduring fire. The trick about the fourth rule is to carve out sacred places in one's day-to-day existence back home. The pilgrim returns and seeks to cultivate what Albert Borgmann (1984) calls focal practices. The idea is to mend the rift between means and ends caused by a life of convenience, even if only in small ways. Pause in the driveway and look at whatever stars make it through the city lights, then close your eyes and picture yourself on a ball wheeling through the cosmos. You and the raven, together wheeling.

Max, Lulu, and I have chosen the garden. I am not skilled with my hands, but together we learned how to build two raised garden beds with simple materials, stocked with good soil and some compost from an eccentric neighbor. We carved them out of the Bermuda grass on the front lawn. Tending a garden requires mindfulness and discipline. It must become a habit. It is, in other words, a focal practice that is both a means to an end (how wonderful homegrown tomatoes taste!) and an end in itself (how good it is to watch the small fingers of your children pluck the beans they planted months ago). Go plant a garden. Go take a sewing class. Go on a jog. Build a simple and intelligible world invested with your own meaning, a world where you are no longer a stranger.



13.7  Conclusion

These have been some preliminary thoughts about an unprofessional code of ethics for the National Parks and their pilgrims (that is, tourists). I am not sure we can or even should codify such thoughts. Nonetheless, I do think we need to embrace and enact an ethic of receiving a trace. So, I conclude with some remarks about how this kind of ethic might be implemented, if not codified.

There are two main parties involved in an ethic of receiving a trace: the pilgrims and the National Park Service. For pilgrims like me and my children, the primary imperative is mindfulness. I've tried to show how this could take shape in a variety of ways, especially regarding what technologies one does and does not use in the National Parks. From cameras to campers, the pilgrims need to consider not just safety, education, and leaving no trace. They also need to consider whether or how a given technology might help them tune into the universal current of Being. I suggest that here a kind of minimalism is probably a good rule of thumb.

It is a little silly to say that "be open to spiritual awakening" should be written into a code of ethics, like an item on a checklist. I agree, which is why I am not sure about codifying this unprofessional ethic. Yet isn't that the main reason to visit the National Parks? We flock there because we are looking for something to shake us up and out of our day-to-day distracted lives. We are seeking some reminder of deep time and vastness. So, if it takes an item on a checklist to remember this, well, why not? In my experience, the National Park Service, in its various communications with the public, hardly breathes a word about their deeper spiritual purposes. You can gorge yourself on knowledge about cougars or canyons, but there is little guidance on what it all means or on how to make contact like Thoreau.

Shortly after our adventures in the National Parks, I met some social scientists working with Park Rangers on a study. They gave park visitors a notebook and a camera and asked them to write down their impressions and snap photos. Then they collected the notebooks and cameras and wrote up the results: how many times words like "sublime" or "peaceful" were used, and what kinds of pictures were taken. I asked them why they wouldn't also try to help people get better at expressing and understanding their feelings of beauty, wonder, and connection. I said, "Isn't your study a little like a wine tasting without the sommelier to help describe the flavors and educate the palate?" Can't we get better at appreciating nature, like we can with wine? The thought had never occurred to them.

This points to ways of implementing the ethic of transformation. As my colleague Robert Frodeman (2007) has argued, philosophers, artists, and other humanists can play a vital role as these "sommeliers" in the National Parks. There is already an artist-in-residence program within the National Park Service. This is a good foundation on which to build a much more robust humanist-in-residence and even theologian-in-residence program. The task for these Park Rangers would be to help cross the synapses between scientific information and the human spirit. They could offer formal talks on, say, the aesthetic dimensions of geology (e.g., Frodeman 2003), and they could wander around offering informal conversations with pilgrims, acting as guides to help them deepen and enrich their experiences. This would be an example of what we have been calling "field philosophy" (Frodeman and Briggle 2016).

These humanists could also help to articulate and integrate the ethic of receiving a trace at a policy level. I am thinking, for example, about the overcrowding of the National Parks. This is a devilish problem that brings values of openness, access, and inclusion into conflict with the values central to the ethic I have been articulating: quietude, solace, slowness, and the space between the listening ear and the whoop of a raven's wing. The National Park Service already places various limits on visitation, such as limited camping spaces and backcountry permits. Determining whether more limits are needed is a task that requires thinking about the ethic of making contact. This is where humanists and theologians can help.

I chose to tell stories because this is ultimately about people weaving experiences into their life narratives. The spiritual transformations I have tried to narrate are too often left unspoken as inarticulate urges, and as a result, they don't get woven tightly into the fabric of our sense of self. The moment comes, and we aren't ready to receive it, or it fades all too quickly and is forgotten. It doesn't leave a trace. We all know those transcendent moments are why we preserve and visit these special places. Yet we don't have the language or structures to understand, express, and discuss these deeper meanings. It would be better if we brought them out into a reasoned and explicit conversation about what the National Parks mean and how they might make us better people and a better nation.

References

Abbey, Edward. 1968. Desert Solitaire. New York: McGraw-Hill.
Borgmann, Albert. 1984. Technology and the Character of Contemporary Life. Chicago: University of Chicago Press.
Carson, Rachel. 1962. Silent Spring. New York: Houghton Mifflin.
Duncan, Dayton, and Ken Burns. 2009. The National Parks, America's Best Idea: An Illustrated History. New York: Alfred A. Knopf.
Emerson, Ralph Waldo. 2017 (1836). Nature. Los Angeles: Enhanced Media Publishing.
Frodeman, Robert. 2003. Geo-Logic: Breaking Ground Between Philosophy and the Earth Sciences. Albany: State University of New York Press.
———. 2007. The Future of Environmental Philosophy. Ethics & the Environment 12 (2): 120–122.
Frodeman, Robert, and Adam Briggle. 2016. Socrates Tenured: The Institutions of 21st Century Philosophy. New York: Rowman & Littlefield.
Gawande, Atul. 2011. The Checklist Manifesto. New York: Macmillan.
Geronimo. 2005 (1906). My Life. Mineola: Dover Publications.
Jonsen, Albert, and Stephen Toulmin. 1990. The Abuse of Casuistry: A History of Moral Reasoning. Berkeley: University of California Press.
Leopold, Aldo. 1949. A Sand County Almanac. Oxford: Oxford University Press.
Mitchell, Stephen, ed. and trans. 1989. The Selected Poetry of Rainer Maria Rilke. New York: Vintage Books.
Muir, John. 2017 (1912). The Yosemite. New York: The Century Co.
National Park Service. 2019. National Park Service Visitor Use Statistics. https://irma.nps.gov/STATS/. Accessed Nov 2019.
Powell, John Wesley. 2013 (1875). The Exploration of the Colorado River and Its Canyons. New York: Simon and Brown.
Pyne, Stephen. 1999. How the Canyon Became Grand: A Short History. New York: Penguin.
Spence, Mark David. 1999. Dispossessing the Wilderness: Indian Removal and the Making of the National Parks. New York: Oxford University Press.
Thoreau, Henry David. 2013 (1864). The Writings of Henry David Thoreau: The Maine Woods. New York: Houghton Mifflin.

Adam Briggle  Associate Professor and Director of Graduate Studies, Department of Philosophy and Religion, University of North Texas, USA; [email protected]. Adam Briggle holds a PhD in Environmental Studies and works as a field philosopher on issues of environmental ethics and policy. He is particularly interested in topics pertaining to energy and climate.

Chapter 14

The Responsibility of Researchers and Engineers: Codes of Ethics for Emerging Technologies

Armin Grunwald

Abstract  In recent decades, we have witnessed a boom of words and notions such as ethics, values, ethics commissions, and scientific integrity, vaulted by the postulate of responsibility. In particular, responsibility has become an issue in research policy and in computer science and engineering, with funding agencies expecting sound ethical conduct and with growing public awareness of ethical issues, often followed by public claims for the involvement of stakeholders and citizens. The field of codes of ethics has also flourished. Many professional associations, but also institutions such as universities and hospitals, have developed and adopted codes of ethics. This development gives rise to reflection on the intentions, motivations, purposes, and impact of codes of ethics. The first intention of this chapter is to embed codes of ethics in the broader fields of professional ethics and of the governance of technology at large. Taking care of responsibility for developing and using emerging technologies implies, at this level, asking for prudent and proper approaches in order not to overburden engineers and researchers with unfulfillable responsibility. Therefore, the chapter's second intention is to look for the specific responsibilities of engineers in a complex field of technology governance with shared responsibilities. Codes of ethics will then be conceptualized as major elements of making this responsibility operable at the level of professions such as engineering and research.

Keywords  Engineering ethics · Technology governance · Uncertainty · Responsibility · Common good

A. Grunwald, Karlsruhe Institute of Technology, Karlsruhe, Germany; e-mail: [email protected]




14.1  Codes of Ethics: Solution to What Problem?

At the end of this volume, after all the rich conceptual analyses and case studies, a step back shall be taken in this final chapter. The question is the place of codes of ethics in the professions of engineering, computer science, and health care. To this end, I will focus on responsibility as a frame of reference, because dealing responsibly with emerging technologies is a challenge common to the different applications covered in this volume. Codes of ethics are an eminently important approach to making the concept of responsibility work for scientists, researchers, and engineers.

One way of taking a step back is looking at historical developments. Until the 1990s, the dominant narrative regarded technology as value-neutral: just a set of instruments and tools that could be used for morally good or bad purposes. On this view, codes of ethics for engineers and scientists aimed at strengthening their moral capability would not make sense, except for ensuring scientific integrity. These groups would be free from any responsibility for society beyond doing their job well, while the users of technology would have to bear the full responsibility. A code of ethics for the users of technology could have been regarded as sensible, but not for its makers.

This story might sound like a weak and strange echo of a time long ago. In the meantime, numerous case studies have uncovered the normative and value background of the design and development of technology, making it a subject of ethical reflection (e.g., van de Poel 2009; Radder 2010). Consequently, engineers and researchers have been pulled into moral reasoning in many fields of technology, e.g., in connection with Artificial Intelligence (AI), genetic editing, or care robots. A similar story holds for science. While science was previously regarded as value-neutral in positivistic interpretations, many scientific developments obviously incorporate heavy moral questions. Examples are the atomic bomb, with the subsequent debate on the responsibility of physicists; the genetic modification of organisms, followed by bioethical discourses and public debates; the cloned sheep Dolly in 1997; and the birth of twins in China after an intervention into their germline in 2018.

In recent decades, we have witnessed a boom of words and notions such as ethics, values, ethics commissions, and scientific integrity, vaulted by the postulate of responsibility. Compared to the dominance of the value-neutrality narrative of science and technology in earlier decades, today the opposite story seems to be dominant. Ethics and values are seen everywhere in new science and technology and their consequences. Convictions that ethics and responsibility are crucial for steering the development of research and innovation towards a good and sustainable future are widespread. The approach of Responsible Research and Innovation (RRI), sometimes shortened to Responsible Innovation (RI), expresses this emphasis on ethics and responsibility (von Schomberg and Hankins 2019), but also the optimism that shaping technology by involving ethics will be possible. Approaches such as Value Sensitive Design (Friedman et al. 2006) and Design for Value (van den Hoven et al. 2015) are explored, aiming to make RRI operable by fostering the cooperation of ethicists, engineers, and computer scientists.



The first wave of the ethics of responsibility on science and technology (e.g., Jonas 1979; Unger 1994) was more or less a philosophical endeavor of creating awareness. Currently, responsibility has become an issue in research policy and in computer science and engineering, with funding agencies expecting sound ethical conduct and with growing public awareness of ethical issues, often followed by public claims for the involvement of stakeholders and citizens. This development was prepared by the huge United States and international research programs: the human genome program in the 1990s, the human brain program a decade later, and the National Nanotechnology Initiative (cp. Grunwald 2014a). In Europe, the first approaches to the already mentioned RI and RRI movements emerged about 15 years ago. An ethical code of conduct was among their first results: the code of conduct for responsible nanosciences and nanotechnologies research approved by the European Commission.

The field of codes of ethics also flourished beyond this widely perceived event. Many professional associations, but also institutions such as universities and hospitals, developed and adopted codes of ethics. This development gives rise to questions about the intentions, motivations, purposes, and impact of codes of ethics, such as:

• What can be said about the specific responsibility of professions such as engineers, computer scientists, medical professionals, and care personnel in the context of emerging technologies?
• What roles and places do codes of ethics have in professional ethics, e.g., of engineering? How do they relate to object-oriented fields of ethical reflection such as the ethics of technology or the ethics of care?
• Why are they needed, what problem can they solve, and which challenges can they meet?
• How can codes of ethics "make a difference"? Do they make a difference, and, if yes, what difference? Can this difference be measured?

All these questions remind us that codes of ethics in the fields of engineering, computer science, care, or research are always a means to an end. They are expected to contribute to problem-solving and to "make a difference"; otherwise, they would not be needed (cp. Chap. 12 in this volume). Therefore, it must be reflected upon and monitored whether they support problem-solving in real-world practices, not only by intention but also in fact.

Many of these questions have been dealt with in the previous chapters of this volume through various case studies. In this final chapter, a more general perspective shall be taken. Its first and overall intention is to embed codes of ethics in the broader fields of professional ethics and of the governance of technology at large. Taking care of responsibility for developing and using emerging technologies implies, at this level, asking for expectations and their justification. Amid the currently high expectations regarding the ethical conduct and responsibility of, e.g., engineers and researchers for preparing a good future, there is a risk of over-expectation and of over-burdening researchers and engineers with responsibility. Therefore, a second intention of this chapter is to look for the specific responsibilities of engineers in a complex field of technology governance with shared responsibilities.



After a short consideration of the concept of responsibility and its various dimensions (Sect. 14.2), the distribution of accountabilities and responsibilities among actor groups, including engineers, in the field of technology governance will be described (Sect. 14.3), allowing us to embed the responsibility of engineers and researchers in this broader field and to ask for their specific responsibilities (Sect. 14.4). Codes of ethics will then be conceptualized as major elements of making this responsibility operable at the level of professions (Sect. 14.5). Finally, codes of ethics will be put briefly in context: with regulation addressing common and binding rules (Sect. 14.6.1) and with some perspectives for developing global codes of ethics (Sect. 14.6.2).

14.2  Responsibility in the Governance of Technology

Making responsibility an issue in everyday life, as well as in politics, in public debate, and in science, needs a motivation. Usually, discussions about responsibility are occasioned by a diagnosis that something went wrong in the past or could go wrong in the future. More positively speaking, the motivation is to arrange accountabilities and responsibilities in a certain field in a manner promising a good and desired development in the future. Hence, making responsibility an issue needs a pragmatic background of some concerns, uncertainties, or conflicts, and a normative motivation to arrange things in a good or at least better way. Motivating concerns occur, for example, if new opportunities are made available by emerging technologies for which there are still no rules or criteria for the attribution of responsibility at hand, or if the latter are a matter of controversy. The purpose of making responsibility an issue is to overcome these uncertainties and achieve consensus on how to organize and distribute responsibility in the affected field, e.g., making use of advanced Ambient Assisted Living (AAL) technologies or care robotics in care homes. The rationale behind debates around responsibility addressing future issues is taking care of an ethically sound and desired development in the field under consideration: "Responsibility ascriptions are normally meant to have practical consequences" (Stahl et al. 2013, 200).

The significant role of the concept of responsibility in shaping scientific and technological advances and dealing with their consequences in an ethically sound manner is obvious and has often been mentioned (e.g., Jonas 1979; Lenk 1992). Providing orientation by clear and transparent assignment of responsibility, however, needs a common understanding of responsibility (Fahlquist 2017). Responsibility is the result of social processes, namely of assignment acts, whether actors take responsibility themselves or the assignment of responsibility is made by others. The assignment of responsibility follows social rules based on ethical, cultural, and legal considerations and customs (Jonas 1979, 173). These acts take place in particular social and political configurations involving and affecting specific actors in specific roles. Accordingly, a five-place reconstruction for discussing issues of responsibility in scientific and technical progress was proposed (Grunwald 2014b, based on Lenk 1992): someone (an actor) is made responsible for something (results of actions or decisions) before an instance (a social entity) with respect to rules and criteria (e.g., ethical rules), and relative to the knowledge available. The first three places indicate the fundamental social context of assigning responsibility and constitute the empirical dimension of responsibility. The fourth place opens up its ethical dimension, while the fifth place refers to the knowledge available about the object of responsibility, which forms the epistemic dimension of responsibility. The resulting "EEE model of responsibility" (Grunwald 2014b, 2019a; formulation following Grunwald 2021) comprises:

• The empirical dimension of responsibility takes seriously that the assignment of responsibility is an act by specific actors, which affects others, and mirrors the basic social configuration of assignment. Attributions of responsibility consider the ability of actors to influence actions and decisions in the relevant field, regarding issues of accountability, power, and legitimation. Relevant questions for responsibility analysis are: How are the capabilities, influence, and power to act and decide distributed in the field considered (Sect. 14.4)? Which social groups, including scientists, engineers, managers, citizens, and stakeholders, are affected and could or should help deliberate and decide about the distribution of responsibility? Should taking accountability and responsibility seriously be ensured by regulation, or can it be delegated to particular groups such as engineers (Sect. 14.6)? What consequences would a particular distribution of responsibility have for the governance of the respective field, and would it be in favor of the desired developments?

• The ethical dimension of responsibility (cp. also Grinbaum and Groves 2013; Gianni 2016; Ruggiu 2018) is reached when the question is posed regarding the criteria and rules for judging the actions and decisions under consideration as responsible, irresponsible, or more or less responsible. Criteria are also needed to discover how actions and decisions could be designed to be (more) responsible. Relevant questions for responsibility reflection are: What criteria distinguish between responsible and irresponsible actions and decisions? Is there consensus or controversy on these criteria among the relevant actors? Which meaning of normative issues such as "dignity of life" or "animal welfare" is presupposed, and by whom? Can the actions and decisions in question (e.g., about the scientific agenda or containment measures to prevent risks) be regarded as responsible with respect to the rules and criteria?

• The epistemic dimension asks about the knowledge of the subject of responsibility and its epistemological status and quality. This is particularly relevant in debates on responsibilities in the field of emerging technologies, because statements about the impacts and consequences of science and new technology frequently show a high degree of uncertainty. The comment that nothing else comes from "mere possibility arguments" (Hansson 2006) is an indication that, in debates over responsibility, it is essential that the status of the available knowledge about the futures to be accounted for is determined and critically reflected from an epistemological point of view (Nordmann 2007; Grunwald 2017, 2019a). Relevant questions in this respect are: What is known about prospective objects of responsibility? What could be known in the case of more research, and which uncertainties are pertinent? How can different uncertainties be qualified and compared to each other? What is at stake if worse comes to worst?

Debates over responsibility in technology and science often focus on the ethical dimension, while social issues of assignment processes and epistemic constraints are regarded as secondary. Responsibility debates then cover only part of the picture. To apply a comprehensive view, however, all three of these dimensions must be considered together in prospective debates involving emerging technology.

14.3  The Empirical Dimension of Responsibility in Engineering

According to many overwhelmingly convergent diagnoses, there is an obvious need for enhanced responsibility in shaping, promoting, or regulating new science and technology and in the many application fields using emerging technologies. Any "trial and error" approach to emerging technology would fall far short of any reasonable expectation with respect to good governance, responsibility, and ethics, independent of what these mean in detail (already Rip 1986). The assumption that unintended and unexpected effects, in case of their occurrence, could be repaired ex post has been proven false in many cases (e.g., Harremoes et al. 2002). Naïve hopes that accelerated technological advance could solve all the current problems are no longer tenable in the Anthropocene (Grunwald 2018).

Responsibility assignments in decision-making processes involving emerging technologies must have specific addressees with the capability to intervene at the level of action, according to the above-mentioned empirical dimension of responsibility: who is in charge of, and responsible for, what? However, assigning responsibility is a challenge in the face of the complexity of technology governance, which involves contributions and influence from many heterogeneous actors and groups with different opportunities for intervention and diverse interests and values. In the following, the foreground question is which groups can impact the development and use of emerging technologies, and of the products and services based upon them, with which types of intervention. The target is to distinguish different fields of responsibility according to different actor groups and different opportunities to intervene. In the ethics of technology, the following types of decisions, related to different actor groups, have been identified (Grunwald 2021):

Political Decisions (1) primarily address public programs promoting research and technology, e.g., in materials science, on regenerative sources of energy, or in stem-cell research, and (2) determine boundary conditions such as environmental and safety standards or laws stipulating recycling in the form of binding regulation. Ethics of technology and responsibility considerations here manifest themselves as policy advice to policymakers, e.g., provided by National Ethics Councils or Offices of Technology Assessment (Grunwald 2019b, Ch. 3.1).

Entrepreneurial Decisions concern the development of technology in the economy at market conditions. The shaping of technology in enterprises is operationalized by means of requirement specifications, project plans, and strategic entrepreneurial decisions aiming at success in the market under competition. Ethics of technology and responsibility reflections here manifest themselves as part of business and entrepreneurial ethics, often putting the measures taken for granting economic success, and their consequences, into a broader space of social values.

Engineering is closely related to the development, production, utilization, and disposal of technology. Engineers are closest to technology, from its design through the "making of" to the stages of use, maintenance, and recycling. Reflection on the moral foundations of engineering is a major part of the ethics of technology, which here manifests itself as engineering ethics (Sect. 14.4), with codes of ethics (Sect. 14.5) as an essential part.

Consumer and User Behavior influences the success of technology and innovation as well as their development over time. Consumers and users, including institutional users such as authorities, companies, or hospitals, have preferences shaped by moral backgrounds, institutional missions, and social as well as individual values. For example, in purchasing an automobile, criteria such as sportiness, cost, status, and environmental compatibility play a role that varies from person to person. Ethics of technology and responsibility explorations here manifest themselves as strengthening consumers' awareness of ethical issues and their willingness to consume responsibly.

This is only a very rough picture of the actor groups intervening in the development and use of emerging technologies at different stages of technology governance, with different interests and different opportunities for intervention. Its advantage is that some more general ideas on responsibility can already be generated at this level. Recalling the five-place reconstruction of responsibility (Sect. 14.2), it demonstrates that responsibility is shared among these different actors, according to the subject of responsibility. Its objects, the consequences of the respective decisions made by those actors, also differ with respect to range of influence over time, over markets, over product areas, etc. Consequently, there is a difference in the depth of intervention into overall technology governance. Differentiation can also be seen with regard to the respective authorities to which responsibility is directed. These could be democratic institutions or legal entities in public affairs, or shareholders and customers in the economy. Determining the instance for the responsibility of engineers and researchers is complicated: it could be the community, the entire society, or something in between (Sect. 14.4). For users and consumers, this instance is often opaque, depending on the context.

These considerations demonstrate, first, that engineers and researchers are not alone in bearing responsibility for technological advancement and, in particular, for emerging technologies (Chap. 3 in this volume). This observation provides an argument against over-burdening them in ethical respects. Second, it allows conceptualizing codes of ethics as part of the ethics of the professions involved, e.g., engineers and researchers but also regulators, funding agencies, entrepreneurs, managers, and customers. The responsibility of engineers and researchers for advancing emerging technologies, for their dissemination, and for the resulting consequences is crucial, but it is only part of the game.

14.4  The Specific Responsibility of Engineers and Researchers

Some engineers and philosophers assume that technology development and technological advancement are by far, or even completely, dominated by engineers and researchers. This perception implies ignoring the possible contributions of the other societal groups mentioned above, such as businesspeople and policymakers. In this strand of argumentation, authors conclude that engineers are the most important professional group for influencing not only technology but the entire future of society and even humankind, because they regard science and technology as the major driving forces (Alpern 1983). Engineering ethics and responsibility are then seen as the best, and perhaps only, instruments to keep technology and, consequently, humankind on a good track and avoid negative impacts. In a radical form, a few authors postulate that engineers should even be moral heroes (Alpern 1983). In this perception, engineers are burdened with heavy responsibility, on the assumption that, if each engineer were to grasp all the consequences of their own actions, assess them responsibly, and act accordingly, any negative and unintentional consequences of technology could be largely or completely avoided. Consequently, engineers should always analyze the possible consequences of their actions and decisions, perform a comprehensive technology assessment (Grunwald 2019b), and engage in ethical reflection in parallel to their engineering work.

This approach, however, misses the complexity of technology governance suggested above, which involves several societal groups and systems (e.g., Bijker et al. 1987). The preceding consideration shows that the entire responsibility for technological advance and the use of its outcomes is shared in a practical concert of engineers and researchers together with other powerful groups. This observation motivates the question of the more specific role of engineers and researchers in developing (van Gorp and Grunwald 2009) and using emerging technologies, e.g., in the computer and data sciences (Chap. 2 in this volume). This question addresses (1) properties and opportunities for intervention specific to these groups, and (2) possible limitations to their responsibility, according to the EEE model of responsibility (Sect. 14.2).

1. A practical and realistic approach to the specific responsibility of engineers and researchers should find its point of departure in the specific and essential knowledge and skills of these groups, according to their roles in the entire processes of science, technology, and innovation governance. Engineers and researchers are the groups closest to science, technology, and development through all the stages of the genesis of emerging technologies. In particular, they have the best knowledge about technological issues and potentials at the stage of design (van de Poel 2009) and in the visionary phases prior to design and development. Engineers and researchers also have the best knowledge in later stages, i.e., on performance indicators, emission characteristics, possible technical risks involved, data on the materials and energy needed for using a technology, and the opportunities and obstacles for recycling parts after disposal. This specific knowledge cannot be replaced by any of the other groups involved in the governance and assessment of technology. Exactly this gives rise to assigning specific responsibilities to them in at least two directions. First, their responsibility is to address scientific integrity, deliver high quality, take care of risk issues, act according to the state of the art, etc. – in short, to do a good job at their place in the development and governance of technology. Second, this point of departure also gives rise to postulating responsibilities beyond the workplace. This type of responsibility more often leads to controversy and debate. It includes taking care of common-good issues, observing potentials for misuse or abuse, observing possible cases of misconduct, informing ombudspersons or even the public in cases of specific concern, raising their voice in public debates on emerging technologies based on their specific knowledge, engaging in interdisciplinary cooperation, etc. In short, both types of responsibility – doing a good job while also looking beyond – are mirrored in many codes of ethics in engineering and scientific communities (e.g., computer scientists) and institutions (e.g., universities).

2. The specific place of engineers and researchers in the innovation system and in the development of technology also involves limitations and restrictions on taking responsibility, which must be considered to avoid over-burdening them. Even a quick look reveals some evident limitations on the roles and opportunities of engineers and researchers to take full responsibility (Grunwald 2001):

• Lack of knowledge about consequences: Consequences of technology are not only direct consequences of technology. Instead, they emerge as a combination of technical issues (e.g., the emission behavior of a device) and the use of that technology, its dissemination, and its impact on social dynamics, lifestyles, values, etc. Engineers and researchers are experts on engineering and on specific fields of research, but not on innovation dynamics, social dynamics, or other social issues of the co-evolution of technology and society. They can raise suspicions about unintended side effects but cannot scrutinize or approve them. Instead, such effects should be investigated by researchers from fields dedicated to anticipating socio-technical consequences, e.g., technology assessment (Grunwald 2019b).

• Lack of ethical expertise: Engineers and researchers usually have little knowledge of ethical patterns of argumentation. Thus, the expectation that they could perform a comprehensive ethical and responsibility assessment would be unfulfillable in many cases, given complex trade-offs or incommensurable argumentations pro and con under uncertainty (Hansson 2006). They can build some hypotheses but should then cooperate with professional ethicists.

• Lack of mandate and legitimization: Engineers and researchers per se are not mandated to decide on societally relevant paths of technology, e.g., for care robots, assistance technologies, or in the Energiewende. This is the task of the democratically legitimated bodies in society. Delegating far-ranging and value-laden decisions to engineers and researchers would open the door to a specific kind of technocracy, with engineers possibly determining genuinely political issues. Here, instead, cooperation with political bodies, the public, or regulators would be helpful.

Combining issues (1) and (2) from above, some generic types of responsibility of engineers and researchers can easily be determined:

• In an "early warning" function (Grunwald 2001), engineers and researchers can and should act as detectors of potential problems in specific technological developments, based on their early and in-depth knowledge of emerging technologies. Due to the restrictions mentioned above, they will not be able to solve possible problems on their own, nor are they mandated to do so.

• As a precondition for problem-solving, engineers and researchers should inform other actors, for example ethics institutes, in cases of concern. This would open up opportunities for initiating processes of ethical reflection or technology assessment.

• For closer investigation of concerns, researchers and engineers should cooperate with experts or institutes from other disciplines and provide their specific knowledge and skills for these investigations.

• The same holds for the search for solutions if serious concern is identified by the investigation.

• They should engage in public debates around the issue under consideration, based on their specific knowledge.

Assigning responsibility to engineers and researchers in this manner motivates questions about the conditions that must be fulfilled for them to bear this responsibility and act accordingly; otherwise, the assignment would remain merely appellative, without "making a difference" (Sect. 14.1) in practice. Going in this direction allows deriving indirect responsibilities for engineers and researchers: responsibilities to build the capacities, skills, and competences needed to exercise the direct responsibilities mentioned above.

First, researchers and engineers need the competence to realize the early-warning expectations. This requires sensitivity and knowledge to be able to identify possible concerns, i.e., awareness of possibly adverse ethical issues. In order to make this awareness tangible, a distinction between standard and non-standard situations in ethical respects was proposed (Grunwald 2012, based on Grunwald 2001), which offered some criteria for making this distinction operable (cp. van Gorp 2005 for an application). Realizing this indirect responsibility hence requires a minimum of ethical education.

The second area of required competence and skills results from the necessity of informing other actors in cases of concern and, possibly, cooperating with them in interdisciplinary investigation. To this end, engineers and researchers need appropriate communication skills for interdisciplinary exchange. Building capacity in this respect has consequences, e.g., for the university education of engineers and researchers. Furthermore, more indirect responsibilities can be determined: responsibility for engagement in their respective communities, willingness for capacity building, transfer of knowledge and of experience with taking responsibility to their communities, representing them to other communities, and so forth.

14.5  Codes of Ethics

Codes of ethics are a means to make the responsibility of engineers and researchers mentioned above operable (Davis 2019; cp. Sect. 3.2 in this volume). They go into much more depth and make responsibilities more specific than the general considerations given in the preceding section (cp. Chap. 6 in this volume, presenting a case study on clinical situations). They usually distinguish between several types of activities in their communities and professions, with differentiated responsibilities, and take the specifics of the respective community, institution, or profession into account. This holds for the empirical dimension of responsibility, but also for the ethical and epistemic dimensions, depending on the specific configuration. The chapters included in this volume provide several cases showing how this can be done.

Codes of ethics are needed as support and assistance, but also as orientation to empower engineers, scientists, physicians, etc. with respect to responsibility, in order to enable them to act in a responsible manner while doing their job in their profession and community. They are not strict and binding regulations determining what has to be done or what must not be done (Chap. 3 in this volume). Instead, codes of ethics aim to take their addressees seriously as responsible persons (homo responsibilis). They offer orientation in cases of moral conflict, notions and terms for expressing concerns, and capabilities for approaching professional ethicists, stakeholders, citizens, patients, the public, etc. This orientation does not determine specific actions in specific cases but should help engineers and researchers to make their decisions in an ethically better-reflected manner.

While thereby burdening their addressees with expectations, codes of ethics can also relieve engineers and researchers of too-high expectations and unbearable responsibility. Therefore, they must not include unfulfillable norms and principles, e.g., close to the "moral hero" issue mentioned above. Instead, they should regard the profession and community addressed in context and in concert with other cooperating communities and professions, e.g., by pointing to shared responsibilities and to the need for cooperation and exchange in certain cases.

According to the observations presented in the preceding section, internal and external targets with respect to the community under consideration can be distinguished (cp. Chap. 5 in this volume for criticisms of a too-narrow understanding of scientific integrity). Internal targets are related to safeguarding the quality standards of the respective profession, ensuring scientific integrity by sticking to standards of good scientific practice, postulating support for the development of the respective community, and so forth. These parts of codes of ethics take a good development of the community addressed as their primary goal. External targets consider more general issues, such as contributing to the common good, observing and supporting social values, taking care of democratic and inclusive ideals, engaging for sustainable development, and so forth. They regard engineers and researchers not as isolated or autonomous groups or professions optimizing themselves but as an integral part of society, with resulting responsibilities. However, these parts of codes of ethics are often less specific, more appellative, and more difficult to trace with respect to "making a difference".

The Association of German Engineers (VDI, Verein Deutscher Ingenieure) approved an early code of conduct emphasizing the external dimension with its "Guide to Technology Assessment According to Individual and Social Ethical Aspects" (guideline no. 3780, VDI 1991). According to this guide, engineers should perform their daily business by ensuring everything belonging to good engineering practice (the internal target) and observing societally acknowledged values (the external target). The VDI identified eight central values composing internal and external elements: functional reliability, economic efficiency, prosperity, safety, health, environmental quality, personality development, and social quality. By observing these values, engineers should, based on their knowledge and abilities, influence the development of technology in the morally right direction. If this exceeds their mandate or competence, they should cooperate with other actors.

14.6  Codes of Ethics in Context

Codes of ethics do not stand alone but are part of larger environments of orientations and regulations, customs and tacit rules, laws, and guidelines, which form systems of distributed responsibilities. In this final section, two contexts shall be mentioned briefly: regulation (Sect. 14.6.1) and globalization (Sect. 14.6.2).

14.6.1  Codes of Ethics Versus Regulation

In many theories of democracy, laws and regulation are major means for implementing facets of the "common good", which have to be decided upon by legitimated democratic institutions. Codes of ethics give orientation below, and assign responsibility beyond, the level of binding regulation (Chap. 3 in this volume). The space left open by regulation, e.g., to be filled by codes of ethics, differs across governance traditions (Davis 2019). While governmental regulation has been given a strong role in the (continental) European tradition, the US tradition tends to keep the status of binding regulation rather low, depending in detail on political positions, and to maximize individual freedom.

A famous example is the Precautionary Principle included in the environmental legislation of the European Union (cp. Harremoes et al. 2002; Weckert and Moor 2007; Grunwald 2008). It is an attempt to make taking responsibility possible by implementing precautionary measures even before valid knowledge about possible adverse effects is available, in the sense of the epistemic dimension of the EEE approach (see above). A "reasonable concern" (von Schomberg 2005) suffices. However, the controversial discussion, in particular on the meaning of "reasonable concern", demonstrates the need for further clarification. As long as this clarification is not reached – if it can be reached at all – concerns are often expressed, particularly in the US, that the freedom of science and of companies could be restricted too strongly, in the worst case based on mere suspicion. The critics prefer giving more responsibility, e.g., via codes of ethics, to engineers, managers, and scientists.

During the 1980s and 1990s, the neoliberal wave in politics and the economy, following so-called Reaganomics and Thatcherism, changed the relationship between state-driven regulation and economic freedom in favor of the latter almost worldwide. The consequence, an at least perceived decrease in the influence of democratic institutions and regulation, created a "legitimacy vacuum" (Pellizzoni 2004) with respect to how to determine facets of the common good and realize a fair, just, and sustainable society. Codes of ethics could act as gap-fillers, replacing democratic decision-making followed by regulation. The vacuum would be filled by non-binding responsibility considerations and the subsequent actions of engineers and researchers. To put it simply: the weaker and leaner the state, the larger the possible role of codes of ethics.

However, this approach could create a serious problem if taken too far: replacing democratic reasoning with decentralized reflections on values and responsibility would not be able to fill the "legitimacy vacuum" fully and adequately. Questions arise, first, about the relation between codes of ethics and the range of influence given to engineers and researchers by this type of "soft regulation". In contrast to "strong regulation", following codes of ethics depends on the willingness of the individuals regulated (Chap. 3 in this volume). Second, the issue of technocracy mentioned above must be observed in situations of weakened democratic procedures (cp. Chap. 9 in this volume for a case from China). Either way, careful consideration is needed to find a good balance between state regulation and individual freedom softly oriented by codes of ethics. Where this balance lies may depend on cultural background and tradition.



14.6.2  Codes of Ethics in Global Context

The proposal to demarcate the current era as the Anthropocene (Crutzen and Stoermer 2000) impressively illustrates the huge responsibility of humankind at the global level (e.g., Jonas 1979; Lenk 2007). A strong driving force of the Anthropocene is technology. We see the consequences of technology spreading at the global level and often causing damage in regions that could not benefit from the technologies at the origin of the problems. Climate change and waste (e.g., micro-plastics) are two well-known issues. Other global effects of technology also matter, e.g., with respect to new divisions of labor enabled by digitization, scarce natural resources, international value-added chains, global logistics, cloud-working across the planet, ethics dumping (Schroeder et al. 2018), loss of biodiversity, global issues of security concerning the use of new technologies, and so forth.

Assessing technology at the global level is necessary for taking global responsibility (Hahn and Ladikas 2019). Yet there is no global body observing scientific and technological advancements in general and drawing conclusions for action. Currently, no global body is mandated to intervene in technological advances and the use of their outcomes. This holds for science, research, and development as well as for politics and the economy. Hence, a global study, for example on digitization issues, would be confronted not only with immense complexity but also with the absence of any powerful and mandated advisee (Grunwald 2019b).

Codes of ethics at the global level, in and for increasingly global communities, are a promising approach to improving responsibility at the global level and fostering intercultural exchange. It would be good to see ongoing effort in this direction, e.g., on the occasion of environmental issues (Chap. 3 in this volume; cp. also Chap. 7 for global ethics documents on AI). Efforts towards more global codes of ethics would, however, be confronted with several challenges. Diverging interests between states and world regions, strong global competition in the economy, the absence of common standards, and various differences in the intercultural perception of emerging technologies could prevent consensus among global communities. The way ahead could be long and stony. But it is worth taking because of the need to contribute to a good Anthropocene.

References

Alpern, Kenneth D. 1983. Engineers as Moral Heroes. In Beyond Whistleblowing: Defining Engineers’ Responsibilities, ed. Vivian Weil, 40–51. Chicago: Center for the Study of Ethics in the Professions.
Bijker, Wiebe, Thomas Hughes, and Trevor Pinch, eds. 1987. The Social Construction of Technological Systems: New Directions in the Sociology and History of Technological Systems. Cambridge, MA: MIT Press.
Crutzen, Paul, and Eugene Stoermer. 2000. The Anthropocene. Global Change Newsletter 41: 17–18.
Davis, Michael. 2019. Professional Codes of Ethics. In International Handbook of Philosophy of Engineering, ed. Diane Michelfelder and Neelke Doorn, chap. 43. Abingdon: Routledge.
Fahlquist, Jessica. 2017. Responsibility Analysis. In The Ethics of Technology: Methods and Approaches, ed. Sven Ove Hansson, 129–143. London: Rowman & Littlefield.
Friedman, Batya, Peter Kahn, and Alan Borning. 2006. Value Sensitive Design and Information Systems. In Human-Computer Interaction in Management Information Systems: Foundations, ed. P. Zhang and D. Galletta, 348–372. New York/London: M.E. Sharpe.
Gianni, Robert. 2016. Responsibility and Freedom: The Ethical Realm of RRI. London: Wiley.
Grinbaum, Alexei, and Chris Groves. 2013. What Is “Responsible” About Responsible Innovation? Understanding the Ethical Issues. In Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society, ed. Richard Owen, John R. Bessant, and Maggy Heintz, 119–142. London: Wiley.
Grunwald, Armin. 2001. The Application of Ethics to Engineering and the Engineer’s Moral Responsibility: Perspectives for a Research Agenda. Science and Engineering Ethics 7: 415–428.
———. 2008. Nanoparticles: Risk Management and the Precautionary Principle. In Emerging Conceptual, Ethical and Policy Issues in Bionanotechnology, ed. Fabrice Jotterand, 85–102. Berlin: Springer.
———. 2012. Responsible Nanobiotechnology: Philosophy and Ethics. Singapore: Pan Stanford Publishing.
———. 2014a. Responsible Research and Innovation: An Emerging Issue in Research Policy Rooted in the Debate on Nanotechnology. In Responsibility in Nanotechnology Development, ed. Simone Arnaldi, Arianna Ferrari, Peter Magaudda, and Florence Marin, 191–205. Dordrecht: Springer.
———. 2014b. Synthetic Biology as Technoscience and the EEE Concept of Responsibility. In Synthetic Biology: Character and Impact, ed. Bernd Giese, Christian Pade, Hans Wigger, and Arnim von Gleich, 249–265. Cham: Springer.
———. 2017. Assigning Meaning to NEST by Technology Futures: Extended Responsibility of Technology Assessment in RRI. Journal of Responsible Innovation 4: 100–117. https://doi.org/10.1080/23299460.2017.1360719.
———. 2018. Diverging Pathways to Overcoming the Environmental Crisis: A Critique of Eco-modernism from a Technology Assessment Perspective. Journal of Cleaner Production 197: 1854–1862. https://doi.org/10.1016/j.jclepro.2016.07.212.
———. 2019a. Responsibility Beyond Consequentialism: The EEE Approach to Responsibility in the Face of Epistemic Constraints. In Responsible Research and Innovation: From Concepts to Practices, ed. Robert Gianni, Jill Pearson, and Bernard Reber, 35–49. Milton Park: Routledge.
———. 2019b. Technology Assessment in Practice and Theory. Abingdon: Routledge.
———. 2021. Living Technology: Philosophy and Ethics at the Crossroads Between Life and Technology. Singapore: Jenny Stanford.
Hahn, Julia, and Miltos Ladikas, eds. 2019. Constructing a Global Technology Assessment: Insights from Australia, China, Europe, Germany, India and Russia. Karlsruhe: KIT Scientific Publishing. https://doi.org/10.5445/KSP/1000085280.
Hansson, Sven Ove. 2006. Great Uncertainty About Small Things. In Nanotechnology Challenges: Implications for Philosophy, Ethics and Society, ed. Joachim Schummer and Davis Baird, 315–325. Singapore: Springer.
Harremoes, Poul, David Gee, Malcolm MacGarvin, Andy Stirling, Jane Keys, Brian Wynne, and Sofia Vaz, eds. 2002. The Precautionary Principle in the 20th Century: Late Lessons from Early Warnings. London: Sage.
Jonas, Hans. 1979. Das Prinzip Verantwortung. Frankfurt: Suhrkamp. English translation: The Imperative of Responsibility. Chicago, 1984.
Lenk, Hans. 1992. Zwischen Wissenschaft und Ethik. Frankfurt: Suhrkamp.
———, ed. 2007. Global TechnoScience and Responsibility. Berlin: LIT.
Nordmann, Alfred. 2007. If and Then: A Critique of Speculative NanoEthics. NanoEthics 1: 31–46.
Pellizzoni, Luigi. 2004. Responsibility and Environmental Governance. Environmental Politics 13: 541–565.
Radder, Hans. 2010. Why Technologies Are Inherently Normative. In Philosophy of Technology and Engineering Sciences, ed. Anthonie Meijers, vol. 9, 887–922. Amsterdam: Elsevier.
Rip, Arie. 1986. Controversies as Informal Technology Assessment. Knowledge: Creation, Diffusion, Utilization 8: 349–371.
Ruggiu, Daniele. 2018. Human Rights and Emerging Technologies: Analysis and Perspectives in Europe. Singapore: Pan Stanford Publishing.
Schroeder, Doris, Julie Cook, Francois Hirsch, Solveig Fenet, and Vasantha Muthuswamy, eds. 2018. Ethics Dumping: Case Studies from North-South Research Collaborations. Berlin: Springer.
Stahl, Bernd, Grace Eden, and Marina Jirotka. 2013. Responsible Research and Innovation in Information and Communication Technology: Identifying and Engaging with the Ethical Implications of ICTs. In Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society, ed. Richard Owen, John Bessant, and Maggy Heintz, 199–218. Chichester: Wiley.
Unger, Stephen H. 1994. Controlling Technology: Ethics and the Responsible Engineer. New York: Wiley.
van de Poel, Ibo. 2009. Values in Engineering Design. In Philosophy of Technology and Engineering Sciences, ed. Anthonie Meijers, vol. 9, 973–1006. Amsterdam: Elsevier.
van den Hoven, Jeroen, Pieter Vermaas, and Ibo van de Poel, eds. 2015. Handbook of Ethics, Values, and Technological Design: Sources, Theory, Values and Application Domains. Dordrecht: Springer.
van Gorp, Anke. 2005. Ethical Issues in Engineering Design: Safety and Sustainability. Simon Stevin Series in the Philosophy of Technology. Delft: TU Delft.
van Gorp, Anke, and Armin Grunwald. 2009. Ethical Responsibilities of Engineers in Design Processes: Risks, Regulative Frameworks and Societal Division of Labour. In The Ethics of Technological Risk, ed. Lotte Asveld and Sabine Roeser, 252–268. London: Earthscan.
VDI – Verein Deutscher Ingenieure. 1991. Richtlinie 3780: Technikbewertung, Begriffe und Grundlagen (Technology Assessment: Concepts and Foundation). Düsseldorf: VDI.
von Schomberg, Rene. 2005. The Precautionary Principle and Its Normative Challenges. In The Precautionary Principle and Public Policy Decision Making, ed. Erik Fisher, Jim Jones, and Rene von Schomberg, 141–165. Cheltenham: Edward Elgar.
von Schomberg, Rene, and John Hankins, eds. 2019. International Handbook on Responsible Innovation: A Global Resource. Cheltenham: Edward Elgar. https://doi.org/10.4337/9781784718862.00031.
Weckert, John, and James Moor. 2007. The Precautionary Principle in Nanotechnology. In Nanoethics: The Ethical and Social Implications of Nanotechnology, ed. Fritz Allhoff, Patrick Lin, James Moor, and John Weckert, 133–146. New Jersey: Wiley.

Armin Grunwald is Full Professor of Philosophy and Ethics of Technology at Karlsruhe Institute of Technology (KIT), Director of the Institute for Technology Assessment and Systems Analysis at KIT, and Director of the Office of Technology Assessment at the German Bundestag, Germany: [email protected]. His research interests include the theory of technology assessment, the ethics of science and technology, the theory of sustainable development, and the epistemology of inter- and transdisciplinary research.