Artificial Intelligence: Background, Risks and Policies (ISBN 979-8-89113-493-5)


English · 280 pages · 2024


Table of Contents:
Contents
Preface
Chapter 1
Artificial Intelligence: Background, Selected Issues, and Policy Considerations*
Summary
Introduction
What Is AI?
AI Terminology
Algorithms and AI
Historical Context of AI
Waves of AI
Recent Growth in the Field of AI
AI Research and Development
Private and Public Funding
Selected Research and Focus Areas
Explainable AI
Data Access
AI Training with Small and Alternative Datasets
AI Hardware
Federal Activity in AI
Executive Branch
Executive Orders on AI
National Science and Technology Council Committees
Select AI Reports and Documents
Federal Agency Activities
Congress
Legislation
Hearings
Selected Issues for Congressional Consideration
Implications for the U.S. Workforce
Job Displacement and Skill Shifts
AI Expert Workforce
International Competition and Federal Investment in AI R&D
Standards Development
Ethics, Bias, Fairness, and Transparency
Types of Bias
Chapter 2
Trustworthy AI: Managing the Risks of Artificial Intelligence *
U.S. House of Representatives, Committee on Science, Space, and Technology, Subcommittee on Research and Technology, Hearing Charter, Trustworthy AI: Managing the Risks of Artificial Intelligence
Purpose
Witnesses
Overarching Questions
Background
AI Risks
Harmful Bias
Explainability and Interpretability
Safety
Cybersecurity and Privacy
Computational Costs
Government Action
OSTP
National Institute of Standards and Technology
National Science Foundation
International
Private Sector Action
Testimony of Ms. Elham Tabassi, Chief of Staff, Information Technology Laboratory, National Institute of Standards and Technology
Testimony of Elham Tabassi, Chief of Staff, Information Technology Laboratory, National Institute of Standards and Technology, United States Department of Commerce, before the United States House of Representatives, Committee on Science, Space, and Te...
NIST’s Role in Artificial Intelligence
NIST AI Risk Management Framework
NIST’s Research on AI Trustworthiness Characteristics
AI Trustworthiness Characteristics – Fair and Bias Is Managed
AI Trustworthiness Characteristics – Explainable and Interpretable
AI Trustworthiness Characteristics – Secure and Resilient
AI Trustworthiness Characteristics – Privacy-enhanced
Research on Applications of AI
AI Measurement and Evaluation
AI Standards
Interagency Coordination
Conclusion
Elham Tabassi (Fed), Chief of Staff, Information Technology Laboratory
Testimony of Dr. Charles Isbell, Dean and John P. Imlay, Jr. Chair of the College of Computing, Georgia Institute of Technology
Testimony of Mr. Jordan Crenshaw, Vice President of the Chamber Technology Engagement Center, U.S. Chamber of Commerce
Before the U.S. House Research And Technology Subcommittee, Hearing on “Trustworthy AI: Managing the Risks of Artificial Intelligence,” Testimony of Jordan Crenshaw, Vice President, C_TEC, U.S. Chamber of Commerce, September 29, 2022
Opportunities for the Federal Government and Industry to Work Together to Develop Trustworthy AI
Congress Needs to Pass a Preemptive National Data Privacy Law
Support for Alternative Regulatory Pathways Such as Voluntary Consensus Standards
Stakeholder Driven Engagement
Awareness of the Benefits of Artificial Intelligence
How Are Different Sectors Adopting Governance Models and Other Strategies to Mitigate Risks that Arise from AI Systems?
How Should the United States Encourage More Organizations to Think Critically about Risks that Arise from AI Systems, Including by Prioritizing Trustworthy AI from the Earliest Stages of Development of New Systems?
What Recommendations Do You Have for how the Federal Government can Strengthen its Role for the Development and Responsible Deployment of Trustworthy AI Systems?
Conclusion
Testimony of Ms. Navrina Singh, Founder and Chief Executive Officer, Credo AI
Prepared Testimony of Navrina Singh, Founder and CEO, Credo AI, before the House Committee on Science, Space and Technology, Subcommittee on Research and Technology
Introduction
What Is Responsible AI?
How to Create an Environment that Fosters RAI
Companies Are Seeking Guidance
Key Challenges to Overcome in the Development and Use of Responsible AI
Context Is Critical: Metrics for Each Tenet of RAI Vary
Addressing Risk Now Ensures Leadership in the Long Run
Conclusion
Appendix I: Answers to Post-Hearing Questions
Appendix II: Additional Material for the Record
Engineered Intelligence: Creating a Successor Species, Congressman Brad Sherman, Statement for the Committee on Science, Space, & Technology, May 17, 2019
Chapter 3
Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, October 2022*
Foreword
About This Framework
Listening to the American Public
Blueprint for an AI Bill of Rights
Safe and Effective Systems
You Should Be Protected from Unsafe or Ineffective Systems
Algorithmic Discrimination Protections
You Should Not Face Discrimination by Algorithms and Systems Should Be Used and Designed in an Equitable Way
Data Privacy
You Should Be Protected from Abusive Data Practices via Built-In Protections and You Should Have Agency over How Data About You Is Used
Notice and Explanation
You Should Know That an Automated System Is Being Used and Understand How and Why It Contributes to Outcomes That Impact You
Human Alternatives, Consideration, and Fallback
You Should Be Able to Opt out, Where Appropriate, and Have Access to a Person Who Can Quickly Consider and Remedy Problems You Encounter
Applying the Blueprint for an AI Bill of Rights
Rights, Opportunities, or Access
Relationship to Existing Law and Policy
Applying the Blueprint for an AI Bill of Rights
Relationship to Existing Law and Policy
Definitions
Algorithmic Discrimination
Automated System
Communities
Equity
Rights, Opportunities, or Access
Sensitive Data
Sensitive Domains
Surveillance Technology
Underserved Communities
From Principles to Practice: A Technical Companion to the Blueprint for an AI Bill of Rights
Using This Technical Companion
Safe and Effective Systems
You Should Be Protected from Unsafe or Ineffective Systems
Why This Principle Is Important
What Should Be Expected of Automated Systems
Protect the Public from Harm in a Proactive and Ongoing Manner
Consultation
Testing
Risk Identification and Mitigation
Ongoing Monitoring
Clear Organizational Oversight
Avoid Inappropriate, Low-Quality, or Irrelevant Data Use and the Compounded Harm of Its Reuse
Relevant and High-Quality Data
Derived Data Sources Tracked and Reviewed Carefully
Data Reuse Limits in Sensitive Domains
Demonstrate the Safety and Effectiveness of the System
Independent Evaluation
Reporting
How These Principles Can Move into Practice
Executive Order 13960 on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government Requires That Certain Federal Agencies Adhere to Nine Principles When Designing, Developing, Acquiring, or Using AI for Purposes Other Than Nat...
The Law and Policy Landscape for Motor Vehicles Shows That Strong Safety Regulations—and Measures to Address Harms When They Occur—Can Enhance Innovation in the Context of Complex Technologies
From Large Companies to Start-Ups, Industry Is Providing Innovative Solutions That Allow Organizations to Mitigate Risks to the Safety and Efficacy of AI Systems, Both before Deployment and through Monitoring over Time
The Office of Management and Budget (OMB) Has Called for an Expansion of Opportunities for Meaningful Stakeholder Engagement in the Design of Programs and Services
The National Institute of Standards and Technology (NIST) Is Developing a Risk Management Framework to Better Manage Risks Posed to Individuals, Organizations, and Society by AI
Some U.S. Government Agencies Have Developed Specific Frameworks for Ethical Use of AI Systems
The National Science Foundation (NSF) Funds Extensive Research to Help Foster the Development of Automated Systems That Adhere to and Advance Their Safety, Security and Effectiveness
Some State Legislatures Have Placed Strong Transparency and Validity Requirements on the Use of Pretrial Risk Assessments
Algorithmic Discrimination Protections
You Should Not Face Discrimination by Algorithms and Systems Should Be Used and Designed in an Equitable Way
Why This Principle Is Important
What Should Be Expected of Automated Systems
Protect the Public from Algorithmic Discrimination in a Proactive and Ongoing Manner
Proactive Assessment of Equity in Design
Representative and Robust Data
Guarding against Proxies
Ensuring Accessibility during Design, Development, and Deployment
Disparity Assessment
Disparity Mitigation
Ongoing Monitoring and Mitigation
Demonstrate That the System Protects against Algorithmic Discrimination
Independent Evaluation
Reporting
How These Principles Can Move into Practice
The Federal Government Is Working to Combat Discrimination in Mortgage Lending
The Equal Employment Opportunity Commission and the Department of Justice Have Clearly Laid out How Employers’ Use of AI and Other Automated Systems Can Result in Discrimination against Job Applicants and Employees with disabilities
Disparity Assessments Identified Harms to Black Patients' Healthcare Access
Large Employers Have Developed Best Practices to Scrutinize the Data and Models Used for Hiring
Standards Organizations Have Developed Guidelines to Incorporate Accessibility Criteria into Technology Design Processes
NIST Has Released Special Publication 1270, towards a Standard for Identifying and Managing Bias in Artificial Intelligence
Data Privacy
You Should Be Protected from Abusive Data Practices via Built-in Protections and You Should Have Agency over How Data About You Is Used
Why This Principle Is Important
What Should Be Expected of Automated Systems
Protect Privacy by Design and by Default
Privacy by Design and by Default
Data Collection and Use-Case Scope Limits
Risk Identification and Mitigation
Privacy-Preserving Security
Protect the Public from Unchecked Surveillance
Heightened Oversight of Surveillance
Limited and Proportionate Surveillance
Scope Limits on Surveillance to Protect Rights and Democratic Values
Provide the Public with Mechanisms for Appropriate and Meaningful Consent, Access, and Control over Their Data
Use-Specific Consent
Brief and Direct Consent Requests
Data Access and Correction
Consent Withdrawal and Data Deletion
Automated System Support
Demonstrate That Data Privacy and User Control Are Protected
Independent Evaluation
Reporting
Extra Protections for Data Related to Sensitive Domains
What Should Be Expected of Automated Systems
Provide Enhanced Protections for Data Related to Sensitive Domains
Necessary Functions Only
Ethical Review and Use Prohibitions
Data Quality
Limit Access to Sensitive Data and Derived Data
Reporting
How These Principles Can Move into Practice
The Privacy Act of 1974 Requires Privacy Protections for Personal Information in Federal Records Systems, Including Limits on Data Retention, and Also Provides Individuals a General Right to Access and Correct Their Data
NIST’s Privacy Framework Provides a Comprehensive, Detailed and Actionable Approach for Organizations to Manage Privacy Risks
A School Board’s Attempt to Surveil Public School Students—Undertaken without Adequate Community Input—Sparked a State-Wide Biometrics Moratorium
Federal Law Requires Employers, and Any Consultants They May Retain, to Report the Costs of Surveilling Employees in the Context of a Labor Dispute, Providing a Transparency Mechanism to Help Protect Worker Organizing
Privacy Choices on Smartphones Show That When Technologies Are Well Designed, Privacy and Data Agency Can Be Meaningful and Not Overwhelming
Notice and Explanation
You Should Know That an Automated System Is Being Used, and Understand How and Why It Contributes to Outcomes That Impact You
Why This Principle Is Important
What Should Be Expected of Automated Systems
Provide Clear, Timely, Understandable, and Accessible Notice of Use and Explanations
Generally Accessible Plain Language Documentation
Accountable
Timely and up-to-Date
Brief and Clear
Provide Explanations as to How and Why a Decision Was Made or an Action Was Taken by an Automated System
Tailored to the Purpose
Tailored to the Target of the Explanation
Tailored to the Level of Risk
Valid
Demonstrate Protections for Notice and Explanation
Reporting
How These Principles Can Move into Practice
Real-Life Examples of How These Principles Can Become Reality, Through Laws, Policies, and Practical Technical and Sociotechnical Approaches to Protecting Rights, Opportunities, and Access
People in Illinois Are Given Written Notice by the Private Sector if Their Biometric Information Is Used
Major Technology Companies Are Piloting New Ways to Communicate with the Public About Their Automated Technologies
Lenders Are Required by Federal Law to Notify Consumers About Certain Decisions Made About Them
A California Law Requires That Warehouse Employees Are Provided with Notice and Explanation About Quotas, Potentially Facilitated by Automated Systems, That Apply to Them
Across the Federal Government, Agencies Are Conducting and Supporting Research on Explainable AI Systems
Human Alternatives, Consideration, and Fallback
You Should Be Able to Opt out, Where Appropriate, and Have Access to a Person Who Can Quickly Consider and Remedy Problems You Encounter
Why This Principle Is Important
What Should Be Expected of Automated Systems
Provide a Mechanism to Conveniently Opt out from Automated Systems in Favor of a Human Alternative, Where Appropriate
Brief, Clear, Accessible Notice and Instructions
Human Alternatives Provided When Appropriate
Timely and Not Burdensome Human Alternative
Provide Timely Human Consideration and Remedy by a Fallback and Escalation System in the Event That an Automated System Fails, Produces Error, or You Would Like to Appeal or Contest Its Impacts on You
Proportionate
Accessible
Convenient
Equitable
Timely
Effective
Maintained
Institute Training, Assessment, and Oversight to Combat Automation Bias and Ensure any Human-Based Components of a System Are Effective
Training and Assessment
Oversight
Implement Additional Human Oversight and Safeguards for Automated Systems Related to Sensitive Domains
Narrowly Scoped Data and Inferences
Tailored to the Situation
Human Consideration before Any High-Risk Decision
Meaningful Access to Examine the System
Demonstrate Access to Human Alternatives, Consideration, and Fallback
Reporting
How These Principles Can Move into Practice
Healthcare “Navigators” Help People Find Their Way through Online Signup Forms to Choose and Obtain Healthcare
The Customer Service Industry Has Successfully Integrated Automated Services Such as Chat-Bots and AI-Driven Call Response Systems with Escalation to a Human Support Team
Ballot Curing Laws in at Least 24 States Require a Fallback System That Allows Voters to Correct Their Ballot and Have It Counted in the Case That a Voter Signature Matching Algorithm Incorrectly Flags Their Ballot as Invalid or There Is Another Issue...
Appendix
Examples of Automated Systems
Listening to the American People
Panel Discussions to Inform the Blueprint for an AI Bill of Rights
Summaries of Panel Discussions
Panel 1: Consumer Rights and Protections
Welcome
Moderator
Panelists
Panel 2: The Criminal Justice System
Welcome
Moderator
Panelists
Panel 3: Equal Opportunities and Civil Justice
Welcome
Moderator
Panelists
Panel 4: Artificial Intelligence and Democratic Values
Welcome
Moderator
Panelists
Panel 5: Social Welfare and Development
Welcome
Moderator
Panelists
Panel 6: The Healthcare System
Welcome
Moderator
Panelists
Index


Computer Science, Technology and Applications

No part of this digital document may be reproduced, stored in a retrieval system or transmitted in any form or by any means. The publisher has taken reasonable care in the preparation of this digital document, but makes no expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of information contained herein. This digital document is sold with the clear understanding that the publisher is not engaged in rendering legal, medical or any other professional services.

Computer Science, Technology and Applications

Information and Knowledge Systems
Manaswini Pradhan, PhD (Editor), Satchidananda Dehuri (Editor)
2023. ISBN: 979-8-89113-303-7 (Softcover); 979-8-89113-390-7 (eBook)

Emerging Applications of Blockchain Technology
Vinod Kumar Shukla, PhD (Editor), Sonali Vyas, PhD (Editor), Shaurya Gupta, PhD (Editor), Suchi Dubey, PhD (Editor)
2023. ISBN: 979-8-89113-101-9 (Hardcover); 979-8-89113-185-9 (eBook)

Digital Transformation – Modernization and Optimization of Wireless Networks
Ram Krishan, PhD (Editor), Manpreet Kaur, PhD (Editor), Jagtar Singh, PhD (Editor), Shilpa Mehta, PhD (Editor), Vikas Goyal (Editor)
2023. ISBN: 979-8-89113-042-5 (Softcover); 979-8-89113-116-3 (eBook)

Digital Twins: The Industry 4.0 Use Cases: The Technologies, Tools, Platforms and Application
Kavita Saini, PhD (Editor), Pethuru Raj Chelliah, PhD (Editor)
2023. ISBN: 979-8-89113-057-9 (eBook)

More information about this series can be found at https://novapublishers.com/product-category/series/computer-sciencetechnology-and-applications/

Gary Dalton Editor

Artificial Intelligence: Background, Risks and Policies

Copyright © 2024 by Nova Science Publishers, Inc. All rights reserved. No part of this book may be reproduced, stored in a retrieval system or transmitted in any form or by any means: electronic, electrostatic, magnetic, tape, mechanical photocopying, recording or otherwise without the written permission of the Publisher. We have partnered with Copyright Clearance Center to make it easy for you to obtain permissions to reuse content from this publication. Please visit copyright.com and search by Title, ISBN, or ISSN. For further questions about using the service on copyright.com, please contact:

Phone: +1-(978) 750-8400

Copyright Clearance Center Fax: +1-(978) 750-4470

E-mail: [email protected]

NOTICE TO THE READER The Publisher has taken reasonable care in the preparation of this book but makes no expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of information contained in this book. The Publisher shall not be liable for any special, consequential, or exemplary damages resulting, in whole or in part, from the readers’ use of, or reliance upon, this material. Any parts of this book based on government reports are so indicated and copyright is claimed for those parts to the extent applicable to compilations of such works. Independent verification should be sought for any data, advice or recommendations contained in this book. In addition, no responsibility is assumed by the Publisher for any injury and/or damage to persons or property arising from any methods, products, instructions, ideas or otherwise contained in this publication. This publication is designed to provide accurate and authoritative information with regards to the subject matter covered herein. It is sold with the clear understanding that the Publisher is not engaged in rendering legal or any other professional services. If legal or any other expert assistance is required, the services of a competent person should be sought. FROM A DECLARATION OF PARTICIPANTS JOINTLY ADOPTED BY A COMMITTEE OF THE AMERICAN BAR ASSOCIATION AND A COMMITTEE OF PUBLISHERS.

Library of Congress Cataloging-in-Publication Data
ISBN: 979-8-89113-493-5

Published by Nova Science Publishers, Inc. † New York

Contents

Preface .......................................................................................... vii

Chapter 1. Artificial Intelligence: Background, Selected Issues, and Policy Considerations
Laurie A. Harris .............................................................................. 1

Chapter 2. Trustworthy AI: Managing the Risks of Artificial Intelligence
Committee on Science, Space, and Technology ............................ 65

Chapter 3. Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, October 2022
White House Office of Science and Technology Policy (OSTP) ...... 173

Index ............................................................................................ 265

Preface

The field of artificial intelligence (AI) has gone through multiple waves of advancement over the decades. Today, AI can broadly be thought of as computerized systems that work and react in ways commonly thought to require intelligence, such as the ability to learn, solve problems, and achieve goals under uncertain and varying conditions. The field encompasses a range of methodologies and application areas, including machine learning (ML), natural language processing, and robotics. AI holds potential benefits and opportunities, but also challenges and pitfalls. For example, AI technologies can accelerate and provide insights into data processing; augment human decision-making; optimize performance for complex tasks and systems; and improve safety for people in dangerous occupations. On the other hand, AI systems may perpetuate or amplify bias, may not yet be fully able to explain their decision-making, and often depend on vast datasets that are not widely accessible to facilitate research and development (R&D). Further, stakeholders have questioned the adequacy of human capital in both the public and private sectors to develop and work with AI, as well as the adequacy of current laws and regulations for dealing with societal and ethical issues that may arise. Together, such challenges can lead to an inability to fully assess and understand the operations and outputs of AI systems.

Chapter 1

Artificial Intelligence: Background, Selected Issues, and Policy Considerations Laurie A. Harris Summary The field of artificial intelligence (AI)—a term first used in the 1950s— has gone through multiple waves of advancement over the subsequent decades. Today, AI can broadly be thought of as computerized systems that work and react in ways commonly thought to require intelligence, such as the ability to learn, solve problems, and achieve goals under uncertain and varying conditions. The field encompasses a range of methodologies and application areas, including machine learning (ML), natural language processing, and robotics. In the past decade or so, increased computing power, the accumulation of big data, and advances in AI techniques have led to rapid growth in AI research and applications. Given these developments and the increasing application of AI technologies across economic sectors, stakeholders from academia, industry, and civil society have called for the federal government to become more knowledgeable about AI technologies and more proactive in considering public policies around their use. Federal activity addressing AI accelerated during the 115th and 116th Congresses. President Donald Trump issued two executive orders, establishing the American AI Initiative (E.O. 13859) and promoting the use of trustworthy AI in the federal government (E.O. 13960). Federal committees, working groups, and other entities have been formed to 

This is an edited, reformatted and augmented version of Congressional Research Service Publication No. R46795, dated May 19, 2021.

In: Artificial Intelligence Editor: Gary Dalton ISBN: 979-8-89113-493-5 © 2024 Nova Science Publishers, Inc.


Federal committees, working groups, and other entities have been formed to coordinate agency activities, help set priorities, and produce national strategic plans and reports, including an updated National AI Research and Development Strategic Plan and a Plan for Federal Engagement in Developing Technical Standards and Related Tools in AI. In Congress, committees held numerous hearings, and Members introduced a wide variety of legislation to address federal AI investments and their coordination; AI-related issues such as algorithmic bias and workforce impacts; and AI technologies such as facial recognition and deepfakes. At least four laws enacted in the 116th Congress focused on AI or included AI-focused provisions.







• The National Defense Authorization Act for FY2021 (P.L. 116-283) included provisions addressing various defense- and security-related AI activities, as well as the expansive National Artificial Intelligence Initiative Act of 2020 (Division E).
• The Consolidated Appropriations Act, 2021 (P.L. 116-260) included the AI in Government Act of 2020 (Division U, Title I), which directed the General Services Administration to create an AI Center of Excellence to facilitate the adoption of AI technologies in the federal government.
• The Identifying Outputs of Generative Adversarial Networks (IOGAN) Act (P.L. 116-258) supported research on Generative Adversarial Networks (GANs), the primary technology used to create deepfakes.
• P.L. 116-94 established a financial program related to exports in AI, among other areas.

AI holds potential benefits and opportunities, but also challenges and pitfalls. For example, AI technologies can accelerate and provide insights into data processing; augment human decisionmaking; optimize performance for complex tasks and systems; and improve safety for people in dangerous occupations. On the other hand, AI systems may perpetuate or amplify bias, may not yet be fully able to explain their decisionmaking, and often depend on vast datasets that are not widely accessible to facilitate research and development (R&D). Further, stakeholders have questioned the adequacy of human capital in both the public and private sectors to develop and work with AI, as well as the adequacy of current laws and regulations for dealing with societal and ethical issues that may arise. Together, such challenges can lead to an inability to fully assess and understand the operations and outputs of AI systems—sometimes referred to as the “black box” problem. Because of these questions and concerns, some stakeholders have advocated for slowing the pace of AI development and use until more research, policymaking, and preparation can occur. Others have countered that AI will make lives safer, healthier, and more productive, so the federal government should not attempt to slow it, but rather should give broad support to AI technologies and increase federal AI funding. In response to this debate, Congress has begun discussing issues such as the trustworthiness, potential bias, and ethical uses of AI; possible disruptive impacts to the U.S. workforce; the adequacy of U.S. expertise and training in AI; domestic and international efforts to set technological standards and testing benchmarks; and the level of U.S. federal investments in AI research and development and how that impacts U.S. international competitiveness. Congress is likely to continue grappling with these issues and working to craft policies that protect American citizens while maximizing U.S. innovation and leadership.

Introduction

Artificial intelligence (AI)—a term first used in the 1950s—can broadly be thought of as computerized systems that work and react in ways commonly thought to require intelligence, such as the ability to learn, solve problems, and achieve goals under uncertain and varying conditions.1 In the past decade, increases in computing power, the availability of large-scale datasets (i.e., big data), and advances in the methodologies underlying AI have led to rapid growth in the field. AI technologies currently show promise for improving the safety, quality, and efficiency of work and for promoting innovation and economic growth. At the same time, the application of AI to complex problem solving in real-world situations raises concerns about trustworthiness, bias, and ethics, and potential disruptive effects on the U.S. workforce. In addition, numerous policy questions are at issue, including those concerning the appropriate U.S. approach to international competition in AI research and development (R&D), technological standard setting, and the development of testing benchmarks. Given the increasing use of AI technologies across economic sectors, stakeholders from academia, industry, and civil society have called for the federal government to become more knowledgeable about AI technologies and more proactive in considering public policies around their use. To assist Congress in its work on AI, this report provides an overview of AI technologies and their development, recent trends in AI, federal AI activity, and selected issues and policy considerations.

Adapted from Office of Science and Technology Policy, Preparing for the Future of Artificial Intelligence, October 2016, p. 6.



This report does not attempt to address all applications of AI. Information on the application of AI technologies in transportation, national security, and education can be found in separate CRS products.2

What Is AI?

While there is no single, commonly agreed upon definition of AI, the National Institute of Standards and Technology (NIST) has described AI technologies and systems as comprising “software and/or hardware that can learn to solve complex problems, make predictions or undertake tasks that require humanlike sensing (such as vision, speech, and touch), perception, cognition, planning, learning, communication, or physical action.”3 Definitions may vary according to the discipline in which AI is being discussed.4 AI is often described as a field that encompasses a range of methodologies and application areas, such as machine learning (ML), natural language processing (NLP), and robotics. Defining AI is not merely an academic exercise, particularly when drafting legislation. AI research and applications are evolving rapidly. Thus, congressional consideration of whether to include a definition for AI in a bill, and if so how to define the term or related terms, necessarily includes attention to the scope of the legislation and the current and future applicability of the definition. Considerations in crafting a definition for use in legislation include whether it is expansive enough not to hinder the future applicability of a law as AI develops and evolves, while being narrow enough to provide clarity on the entities the law affects. Some stakeholders, recognizing the many challenges of defining AI, have attempted to define principles that might help guide policymakers.



See CRS Report R44940, Issues in Autonomous Vehicle Deployment, by Bill Canis; CRS In Focus IF10737, Autonomous and Semi-autonomous Trucks, by John Frittelli; CRS Report R45178, Artificial Intelligence and National Security, by Kelley M. Sayler; and CRS In Focus IF10937, Artificial Intelligence (AI) and Education, by Joyce J. Lu and Laurie A. Harris. National Institute of Standards and Technology, U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools, August 9, 2019, pp. 78. See, for example, AI definitions in the categories of ordinary language, computational disciplines, engineering, economics and social sciences, ethics and philosophy, and international law and policy, in Sara Mattingly-Jordan et al., Ethically Aligned Design: First Edition Glossary, Institute of Electrical and Electronics Engineers (IEEE), January 2019, p. 8, at https://standards.ieee.org/content/dam/ieee-standards/standards/web/docu ments/other/ead1e_glossary.pdf.



Research suggests that differences in definitions used to identify AI-related research may contribute to significantly different analyses and outcomes regarding AI competition, investments, technology transfer, and application forecasts.5 The John S. McCain National Defense Authorization Act for Fiscal Year 2019 (P.L. 115-232) included the first definition of AI in federal statute.6 Like those in other previously introduced bills, the definition incorporated a commonly cited framework of four possible goals that AI systems may pursue: systems that think like humans (e.g., neural networks), act like humans (e.g., natural language processing), think rationally (e.g., logic solvers), or act rationally (e.g., intelligent software agents embodied in robots).7 However, AI research and applications do not necessarily fall solely within any one of these four categories. In December 2020, the National Artificial Intelligence Initiative Act of 2020, enacted as part of the William M. (Mac) Thornberry National Defense Authorization Act (NDAA) for Fiscal Year 2021 (P.L. 116-283), included the following definition:

The term “artificial intelligence” means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to—
(A) perceive real and virtual environments;
(B) abstract such perceptions into models through analysis in an automated manner; and
(C) use model inference to formulate options for information or action.8
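The three-part structure of that statutory definition—perceive, abstract perceptions into models, and use model inference to act—can be illustrated with a deliberately simple sketch. The Python below is purely illustrative (the class, data, and threshold are hypothetical and are not drawn from the statute or any agency guidance):

```python
# Illustrative only: a toy "AI system" organized around the three elements of the
# statutory definition quoted above -- (A) perceive an environment, (B) abstract
# perceptions into a model, and (C) use model inference to formulate an option.
# All names and numbers are invented.

class ToyThermostatAI:
    """Keeps a running temperature model and recommends an action."""

    def __init__(self):
        self.readings = []      # perceived state of the (virtual) environment
        self.average = None     # the "model": a running average of readings

    def perceive(self, temperature_f):
        # (A) perceive the environment via a sensor reading
        self.readings.append(temperature_f)

    def update_model(self):
        # (B) abstract perceptions into a model in an automated manner
        self.average = sum(self.readings) / len(self.readings)

    def recommend(self, setpoint_f=68.0):
        # (C) use model inference to formulate an option for action
        if self.average is None:
            return "no recommendation (no model yet)"
        return "heat" if self.average < setpoint_f else "idle"


if __name__ == "__main__":
    ai = ToyThermostatAI()
    for reading in [63.5, 64.0, 62.8]:   # hypothetical sensor data
        ai.perceive(reading)
    ai.update_model()
    print(ai.recommend())                # -> "heat"
```

Real systems differ enormously in scale and method, but most can still be described in terms of these same three elements.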

Current AI systems are considered to be narrow AI, meaning that they are tailored to particular, narrowly defined tasks. Example applications of AI in everyday life include email spam filtering, voice assistance (e.g., Siri, Alexa, Cortana), financial lending decisions, and search engine results. AI technologies are being integrated in a range of sectors, including transportation, health care, education, agriculture, manufacturing, and defense.

Dewey Murdick, James Dunham, and Jennifer Melot, AI Definitions Affect Policymaking, Center for Security and Emerging Technology, June 2020, at https://cset.georgetown.edu/ wp-content/uploads/CSET-AI-Definitions-Affect- Policymaking.pdf. 6 P.L. 115-232, Section 238; 10 U.S.C. §2358 note. 7 Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 3rd ed. (Upper Saddle River, NJ: Prentice Hall, 2010), pp. 1-5. 8 P.L. 116-283 (hereinafter referred to as the FY2021 NDAA); H.R. 6395, Division E, Section 5002(3).



Some AI experts use the terms augmented intelligence or human-centered AI to capture the various AI applications in physical and connected systems, such as robotics and the Internet of Things,9 and to emphasize the use of AI technologies to enhance human activities rather than to replace them. Most analysts believe that general AI, meaning systems that demonstrate intelligent behavior across a range of cognitive tasks, is unlikely to occur for a decade or longer. Some AI researchers believe that general AI can be achieved through incremental development and refining of current AI and machine learning tools, while others believe it will require the discovery and development of a new breakthrough technique. Just as there is debate over the definition of AI, there is debate over which technologies should be classified as AI. For example, robotic process automation (RPA) has been defined as “the use of software to automate highly repetitive, routine tasks normally performed by knowledge workers.”10 Because it automates activities performed by humans, it is often described as an AI technology. However, some argue that RPA is not AI because it does not include a learning component. Others discuss RPA as a basic tool that can be combined with AI to create complex process automation (CPA) or intelligent process automation (IPA), along an “intelligent automation continuum.”11
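To make that distinction concrete, the following minimal sketch (plain Python; the record format, field names, and 30-day rule are invented) shows the kind of fixed, hand-written automation often labeled RPA—note that nothing in it is learned from data:

```python
# Illustrative RPA-style automation: copy values from one record format to another
# using fixed, hand-written rules. Nothing here is learned from data, which is why
# some observers would not classify a routine like this as AI.

def transfer_invoice(record):
    """Map a hypothetical 'legacy' invoice record into a hypothetical new format."""
    return {
        "vendor_name": record["vendor"].strip().title(),
        "amount_usd": round(float(record["amt"]), 2),
        "overdue": record["days_outstanding"] > 30,   # fixed rule, never updated
    }

if __name__ == "__main__":
    legacy = {"vendor": "  acme supply co ", "amt": "149.5", "days_outstanding": 42}
    print(transfer_invoice(legacy))
    # -> {'vendor_name': 'Acme Supply Co', 'amount_usd': 149.5, 'overdue': True}
```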

AI Terminology

Some stakeholders, including industry, advocacy groups, and policymakers, have raised questions about whether specific AI technologies and techniques require tailored legislation. For example, legislation enacted in the 116th Congress focused on generative adversarial networks (GANs), described below, which are the main underlying AI technique used in generating deepfakes,12 which are most commonly described as realistic audio, video, and other forgeries created using AI techniques.13


For more information on the Internet of Things, see CRS In Focus IF11239, The Internet of Things (IoT): An Overview, by Patricia Moloney Figliola; and to identify additional CRS experts who work on IoT and related topics, see CRS Report R44225, The Internet of Things: CRS Experts, coordinated by Patricia Moloney Figliola. 10 See IBM, “Automate Repetitive Tasks,” at https://www.ibm.com/automation/rpa. 11 IBM Global Business Services, “Using Artificial Intelligence to Optimize the Value of Robotic Process Automation,” September 2017, at https://www.ibm.com/downloads/cas/KDK AAK29. 12 The Identifying Outputs of Generative Adversarial Networks (IOGAN) Act (P.L. 116-258).



This section is meant to provide a broad understanding of a subset of common terms used in the field of AI and how they relate to one another. These include the subfield of machine learning (ML); ML techniques such as deep learning, neural networks, and GANs; and training methods such as supervised, unsupervised, and reinforcement learning. However, just as there are variations in how AI is defined, researchers and practitioners describe various AI-related terms in slightly different ways. Further, the following terms and techniques are not mutually exclusive; AI systems may employ more than one. For example, AlphaGo—the first AI program to beat a human master at the ancient Chinese game of Go—combined deep neural networks, supervised learning, and reinforcement learning.14
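As a concrete anchor for the terms described below, the following minimal sketch (plain Python, with made-up data) walks through the basic supervised-learning pattern: fit a very simple model to labeled examples, then use it to classify an input it has not seen before.

```python
# Minimal supervised-learning sketch: a nearest-centroid "classifier" trained on
# labeled points, then applied to an unseen point. Data are invented for illustration.

def train(examples):
    """examples: list of ((x, y), label). Returns one centroid per label."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Assign the label of the closest centroid (squared Euclidean distance)."""
    px, py = point
    return min(centroids,
               key=lambda lbl: (centroids[lbl][0] - px) ** 2 + (centroids[lbl][1] - py) ** 2)

if __name__ == "__main__":
    labeled = [((1.0, 1.2), "cat"), ((0.8, 1.0), "cat"),
               ((3.0, 2.9), "dog"), ((3.2, 3.1), "dog")]
    model = train(labeled)
    print(predict(model, (0.9, 1.1)))   # -> "cat": an input not in the training set
```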




Machine learning (ML), often referred to as a subfield of AI, examines how to build computer programs that automatically improve their performance at some task through experience without relying on explicit rules-based programming to do so.15 One of the goals of ML is to teach algorithms to successfully interpret data that have not previously been encountered. ML is one of the most common AI techniques in use today, and most ML tasks are narrowly specified to optimize specific functions using particular datasets. Deep learning, neural networks, and GANs represent a few of the ML techniques frequently used today. Deep learning (DL) systems learn from large amounts of data to subsequently recognize and classify related, but previously unobserved, data. For example, neural networks, often described as being loosely modeled after the human brain, consist of thousands or millions of processing nodes generally organized into layers. The strength of the connections among nodes and layers are repeatedly

For additional information on deepfakes, see CRS In Focus IF11333, Deep Fakes and National Security, by Kelley M. Sayler and Laurie A. Harris. 14 Richard S. Sutton and Andrew G. Barto, Reinforcement Learning: An Introduction, 2nd ed. (Cambridge, MA: MIT Press, 2018), pp. 441-442. 15 Adapted from Erik Brynjolfsson, Tom Mitchell, and Daniel Rock, “What Can Machines Learn, and What Does It Mean for Occupations and the Economy?,” AEA Papers and Proceedings, vol. 108 (May 1, 2018), pp. 43-47, at http://www-cgi.cs.cmu.edu/ ~tom/pubs/AEA2018-WhatCanMachinesLearn.pdf. ML is defined in P.L. 116-293 to mean “an application of artificial intelligence that is characterized by providing systems the ability to automatically learn and improve on the basis of data or experience, without being explicitly programmed.









tuned—based on characteristics of the training data—to correspond to the correct output. Advances in hardware, such as the development of graphical processing units (GPUs), have allowed these networks to have many layers, which is what puts the “deep” in deep learning. DL approaches have been used in systems across many areas of AI research, from autonomous vehicles to voice recognition technologies.16 Generative adversarial networks (GANs) consist of two competing neural networks—a generator network that tries to create fake outputs (such as pictures), and a discriminator network that tries to determine whether the outputs are real or fake. A major advantage of this structure is that GANs can learn from less data than other deep learning algorithms.17 Adversarial ML systems can be used in other ways, as well; for example, they can try to improve fairness in financial service decisionmaking by having a second model try to guess the protected class of applicants based on models built by another model.18 Supervised learning algorithms learn from a training set of data that is labeled with the correct description (e.g., the correct label for this picture is “cat”); the system subsequently learns which components of the data are useful for classifying it correctly and uses that information to correctly classify data it has never encountered before. In contrast, unsupervised learning algorithms search for underlying structures in unlabeled data. Reinforcement learning (RL) refers to giving computer programs the ability to learn from experience, providing them with minimal inputs and human interventions.19 RL algorithms learn by trial and error, being rewarded for reaching specified objectives—both for immediate actions and long-term goals. The emphasis on simulated motivation and learning from direct interaction with the environment,

Larry Hardesty, “Explained: Neural Networks,” Massachusetts Institute of Technology (MIT) News, April 14, 2017, at http://news.mit.edu/2017/explained-neural-networks-deeplearning-0414. 17 Jamie Beckett, “What’s a Generative Adversarial Network? Leading Researcher Explains,” NVIDIA, May 17, 2017, at https://blogs.nvidia.com/blog/2017/05/17/generativeadversarial-network/. 18 Sally Ward-Foxton, “Reducing Bias in AI Models for Credit and Loan Decision,” EE Times, April 30, 2019, at https://www.eetimes.com/reducing-bias-in-ai-models-for-credit-and-loandecisions/#. 19 Sean Garrish, How Smart Machines Think (Cambridge, MA: MIT Press, 2018), p. 91. 16



without requiring explicit examples and models, are among the characteristics that set RL apart from other ML approaches.20

Algorithms and AI

As interest in AI continues to grow, some analysts assert that general data analytics and specialized algorithms are increasingly being referred to, erroneously, as AI. It can be challenging to make such distinctions clearly, given the variability in definitions of AI and related terms and because these distinctions have arguably shifted over time. For example, an algorithm is basically a procedure or set of instructions designed to perform a specific task or solve a mathematical problem. Some early products of AI research, such as rule-based expert systems, are algorithms encoded with expert knowledge but lacking a learning component. Some feel that rule-based systems are a simple form of AI because they simulate intelligence, while others think that without a learning component a system should not be considered AI.21 Generally, however, the goals of AI—automating or replicating intelligent behavior—have remained consistent.22
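The rule-based-versus-learning distinction drawn above can be shown side by side in a short sketch. In the illustrative Python below (thresholds and data points are invented), the first function encodes fixed expert knowledge, while the second derives its decision threshold from labeled historical examples—the learning component at issue in the debate.

```python
# Rule-based vs. learned decision logic, illustrating the distinction drawn in the text.
# All thresholds and data points are hypothetical.

def rule_based_flag(transaction_usd):
    """Expert-system style: a hand-written rule, fixed until a human edits it."""
    return transaction_usd > 10_000          # encoded expert knowledge

def learn_threshold(flagged, unflagged):
    """'Learned' style: place the cutoff midway between the two groups' means."""
    mean_flagged = sum(flagged) / len(flagged)
    mean_unflagged = sum(unflagged) / len(unflagged)
    return (mean_flagged + mean_unflagged) / 2

if __name__ == "__main__":
    # Historical, labeled examples (invented).
    past_flagged = [14_000.0, 18_500.0, 12_250.0]
    past_unflagged = [120.0, 980.0, 4_300.0]
    cutoff = learn_threshold(past_flagged, past_unflagged)

    print(rule_based_flag(9_000.0))          # False under the fixed rule
    print(9_000.0 > cutoff)                  # True under the cutoff learned from data
```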

Historical Context of AI

The ideas underlying AI and its conceptual framework have been researched since at least the 1940s and initially formalized in the 1950s. Ideas about intelligent machines were discussed and popularized by scientists and authors such as Alan Turing and Isaac Asimov,23 and the term “artificial intelligence” was coined at the Dartmouth Summer Research Project on Artificial Intelligence, proposed in 1955 and held the following year.24


Adapted from Richard S. Sutton and Andrew G. Barto, Reinforcement Learning: An Introduction, 2nd ed. (Cambridge, MA: MIT Press, 2018). 21 For a brief discussion see, for example, Tricentis, “AI Approaches Compared: Rule-Based Testing vs. Learning,” at https://www.tricentis.com/artificial-intelligence-softwaretesting/ai-approaches-rule-based-testing-vs-learning/. 22 Office of Science and Technology Policy, Preparing for the Future of Artificial Intelligence, October 2016, pp. 5-6. 23 Alan M. Turing, “Computing Machinery and Intelligence,” Mind, vol. 49 (1950), pp. 433-460, at https://www.csee.umbc.edu/courses/471/papers/turing.pdf; and Isaac Asimov, I, Robot (Garden City, NY: Doubleday, 1950).



Since that time, the field of AI has gone through what have been termed by some as summers and winters—periods of much research and advancement, followed by lulls in activity and progress. The reasons for the AI winters have included a focus on theory over practical applications, research problems being more difficult than anticipated, and limitations of the technologies of the time. Much of the current progress and research in AI, which began around 2010, has been attributed to the availability of big data, improved ML approaches and algorithms, and more powerful computers.25

Waves of AI

The Defense Advanced Research Projects Agency (DARPA), which has funded AI R&D since the 1960s, has described the development of AI technologies in terms of three waves.26 These waves are described by the varying abilities of technologies in each to perceive rich, complex, and subtle information; to learn within an environment; to abstract to create new meanings; and to reason in order to plan and reach decisions.27

First wave: handcrafted knowledge. The first wave of AI technologies have abilities primarily to perceive and reason but no learning capability and poor handling of uncertainty. For such technologies, researchers and engineers create sets of rules to represent knowledge in well-defined domains for narrowly defined problems. The TurboTax software, an expert system, is one example.

24 See J. McCarthy et al., “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence,” August 31, 1955, at http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html. 25 Executive Office of the President, National Science and Technology Council, Committee on Technology, Preparing for the Future of Artificial Intelligence, October 2016, pp. 5-6; for additional information on these factors and a short history of AI, see also the appendix of Peter Stone et al., “Artificial Intelligence and Life in 2030,” One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel, Stanford University, Stanford, CA, September 2016, at http://ai100.stanford.edu/2016-report. 26 See “DARPA Announces $2 Billion Campaign to Develop Next Wave of AI Technologies,” September 7, 2018, at https://www.darpa.mil/news-events/2018-09-07. 27 Arati Prabhakar, former Director of DARPA, “Powerful but Limited: A DARPA Perspective on AI,” presentation at National Academies of Sciences, Engineering, and Medicine workshop, Robotics and Artificial Intelligence: Policy Implications for the Next Decade, December 12, 2016, at https://www.nationalacademies.org/event/12-12-2016/roboticsand-artificial-intelligence-policy-implications-for-the-next-decade (hereinafter “Prabhakar, 2016”).



Rules are built into the application, which then turns input information into tax form outputs, but it has only a rudimentary ability to perceive and no ability to learn (e.g., about a new tax law) or to abstract beyond what it is programmed to know.

Second wave: statistical learning. Starting in the 1990s, a second wave of AI technologies was developed with more nuanced abilities to perceive and learn, with some ability to abstract, minimal reasoning ability, but no contextual ability. For these systems, engineers create statistical models for specific problem domains and train them on big data. Generally, while such systems are statistically powerful, they can be individually unreliable, especially in the presence of skewed training data (e.g., a face recognition system trained on a limited range of skin tones can be powerful for similar faces, but highly unreliable for individuals outside of the training spectrum). As noted by DARPA, these technologies are “dependent on large amounts of high quality training data, do not adapt to changing conditions, offer limited performance guarantees, and are unable to provide users with explanations of their results.”28 Additional examples of second wave AI technologies include voice recognition and text analysis.

Third wave: contextual adaptation. The third wave of AI technologies is oriented toward making it possible for machines to adapt to changing situations (i.e., contextual adaptation). Engineers create systems that construct explanatory models of real-world phenomena, and “AI systems learn and reason as they encounter new tasks and situations.”29 Examples of third wave technologies would include explainable AI (XAI), as described below.
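One compact way to see the difference between the first two waves is to place handcrafted rules next to a statistically fitted model. The sketch below is illustrative only (plain Python; the brackets, rates, and data are invented and are not actual tax rules):

```python
# First wave vs. second wave, as characterized above. All figures are invented.

# First wave ("handcrafted knowledge"): rules written by an expert. The program
# cannot learn a new bracket; a human must edit the code.
def handcrafted_tax(income):
    if income <= 10_000:
        return income * 0.10
    return 1_000 + (income - 10_000) * 0.20

# Second wave ("statistical learning"): fit a simple line tax = a*income + b to
# observed (income, tax) pairs via least squares. It adapts to whatever training
# data it is given, but is only as reliable as that data.
def fit_line(pairs):
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    a = sum((x - mx) * (y - my) for x, y in pairs) / sum((x - mx) ** 2 for x, _ in pairs)
    return a, my - a * mx

if __name__ == "__main__":
    training = [(20_000, 3_000), (40_000, 7_000), (60_000, 11_000)]  # invented data
    a, b = fit_line(training)
    print(handcrafted_tax(30_000))     # 5000.0 from the hand-written rules
    print(a * 30_000 + b)              # estimate from the fitted line
```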

Recent Growth in the Field of AI

There are many potential indicators of growth in the AI field. This section presents indicators of growth based on R&D activities and public and private investments in areas of frequent congressional interest. It also provides a brief discussion of AI hype versus the reality of what AI technologies are capable of today. It can be challenging to obtain comprehensive and directly comparable data for the indicators discussed in this section, particularly for AI investments.

28 See “DARPA Announces $2 Billion Campaign to Develop Next Wave of AI Technologies,” September 7, 2018, at https://www.darpa.mil/news-events/2018-09-07. 29 Prabhakar, 2016.



Therefore, such data should be evaluated carefully and treated as only indicative of trends.

AI Research and Development

One way to assess the growth in AI R&D is based on the publication of peer-reviewed papers, including both conference papers and journal articles. According to the AI Index group, between 2000 and 2019, the total number of peer-reviewed AI publications in Elsevier’s Scopus database—the world’s largest abstract and citation database—grew nearly 12-fold.30 Authors based in the European Union (EU) published the most peer-reviewed AI publications as a percentage of the world total from 2000 to 2007 and again from 2012 to 2016, while authors based in China published the most from 2008 to 2011 and 2017 to 2019.31 In 2020, the papers published by authors in China surpassed those of authors in the United States in the share of AI journal citations in the world for the first time. However, over the past decade, authors in the United States have consistently had more cited AI conference papers than authors based in China.32 Further, the number of publications a researcher or country produces does not necessarily equate to scientific impact or research quality. As one researcher at the University of Oxford, UK, reportedly stated, “Just pumping out raw numbers of papers that don’t have a lasting impact isn’t really useful. It’s more important to keep up with the technology frontier.”33 Such evaluations, however, do not discuss the finer points of which studies included teams of researchers from more than one country, raising the question of how to neatly attribute papers to regions, organizations, or funding sources. In addition to published papers, many AI researchers in recent years have published preprint papers (submitted before peer review) to an online repository called arXiv (pronounced “archive”). As reported by the AI Index group, between 2015 and 2020, the total number of AI papers on arXiv increased over six-fold, with more growth in certain subcategories, providing a rough indication of areas of research activity across a range of AI subfields.34


30 AI Index Steering Committee, The AI Index 2021 Annual Report, Human-Centered AI Institute, Stanford University, Stanford, CA, March 2021, p. 18 (hereinafter, “AI Index 2021”). The AI Index 2021 report authors provided this information only for the United States, China, and the European Union (EU), not individual countries within the EU. 31 Ibid., p. 20. 32 Ibid., p. 17. 33 Neil Savage, “The Race to the Top Among the World’s Leaders in Artificial Intelligence,” Nature Index, December 9, 2020, at https://www.nature.com/articles/d41586-020-03409-8.



As of 2020, the most common subcategories of preprint papers were ML and computer vision (Figure 1).35

Figure 1. Total Number of AI-Related Publications on arXiv, by Field of Study, 2015-2020. Source: AI Index Steering Committee, The AI Index 2021 Annual Report, Human-Centered AI Institute, Stanford University, Stanford, CA, December 2021, p. 34. Notes: The arXiv is an online repository for pre-publication papers, which generally means they have not undergone prior peer review. The papers on arXiv listed here are grouped by field of study, including cs.CV (computer vision), cs.LG (machine learning in computer science), cs.CL (computation and language), cs.RO (robotics), cs.AI (artificial intelligence), stat.ML (machine learning in statistics), and cs.NE (neural and evolutionary computing).

Groups like the AI Index have also attempted to measure progress in AI and its fields of study, though critics have categorized such efforts as reporting “trends in data that are related to AI” rather than tracking progress.36 Further, recent research has raised concerns about the accuracy of reported improvements. By some measures, such as training time and cost, areas such as image classification have improved substantially.37 By other measures, researchers assert that progress has come from tweaks, rather than core innovations, and some purported progress might not have taken place.

Ibid., p. 32. Ibid., p. 34. 36 Jeffrey Funk and Gary Smith, “Stanford’s AI Index Report: How Much Is BS?,” Mind Matters News, March 3, 2020, at https://mindmatters.ai/2020/03/stanfords-ai-index-report-howmuch-is-bs/. 37 AI Index 2021, pp. 48-49. Image classification broadly refers to the assigning of identification labels to images. 35



For example, some researchers using meta-analyses of algorithms in various fields and applications—such as pruning algorithms used to make neural networks more efficient and information retrieval programs used in search engines—have found no clear evidence of performance improvements over the 10-year period from 2010 to 2019.38

Private and Public Funding

Since around 2015, private funding for AI has been increasing, both in the United States and globally. For example, according to the AI Index 2021 report, global corporate investment in AI—including private investment, public offerings, mergers and acquisitions, and minority stakes—increased from $12.8 billion raised in 2015 to over $67.8 billion in 2020.39 Global AI startup funding also increased steadily from 2015 to 2020, though the number of companies funded has decreased for each year from 2017 through 2020.40 The United States continues to lead the world in private AI investments, with $23.6 billion in funding in 2020, followed by China ($9.9 billion) and the European Union ($2.0 billion). The top area of private investment in AI in 2020 was “Drugs, Cancer, Molecular, Drug Discovery” with more than $13.8 billion, 4.5 times higher than in 2019.41 The increased funding in this area in 2020 may have been in large part a response to the Coronavirus Disease 2019 (COVID-19) pandemic; among the additional areas that also saw substantial increases in funding from 2019 to 2020 were “Students, Courses, Edtech, English language” and “Speech Recognition, Computer interaction, Dialogue, and Machine translation.”42 According to a McKinsey 2020 survey of over 1,000 company respondents, over half reported no change in AI investments amid the coronavirus pandemic, and 25% increased their investment in AI.43

Matthew Hutson, “Core Progress in AI Has Stalled in Some Fields,” Science, vol. 368, no. 6494 (May 29, 2020), p. 927, at https://science.sciencemag.org/content/368/6494/927. 39 AI Index 2021, p. 93. 40 Ibid., p. 94. 41 Ibid., p. 11. 42 Ibid., p. 97. 43 Tara Balakrishnan et al., The State of AI in 2020, McKinsey & Company, November 17, 2020, at https://www.mckinsey.com/Business-Functions/McKinsey-Analytics/Our-Insights/ Global-survey-The-state-of-AI-in- 2020. The survey and interviews with executives were 38



In FY2020, U.S. public funding for AI R&D was reported for the first time across non-defense federal agencies in a supplemental report to the President’s FY2020 budget, submitted by the Networking and Information Technology Research and Development (NITRD) Program. The annual NITRD supplemental report includes funding information across Program Component Areas (PCAs), which are major subject areas of federal IT R&D and may change each year. For FY2021, AI is included as a stand-alone PCA, and the report includes FY2019 actual investments, FY2020 enacted investments, and FY2021 requested funding amounts. While AI is a standalone PCA, some other PCAs have AI as a component.44 Total FY2021 requested funding for non-defense agency AI R&D under the AI PCA was $912 million (an increase from the FY2020 enacted and supplemental total amount of $660 million); for AI-related efforts reported in other PCAs, the request was $590 million (an increase from the FY2020 enacted and supplemental total amount of $466 million). Thus, the total requested federal FY2021 non- defense budget for AI across PCAs was $1.5 billion (an increase from the FY2020 enacted and supplemental total amount of $1.1 billion).45 By agency, the largest proportions of the FY2021 non-defense AI PCA request were from the National Science Foundation (NSF, $457 million), the U.S. Department of Agriculture (USDA, $128 million), and the Department of Energy (DOE, $84 million).46 Although defense agencies did not report AI funding numbers as part of the NITRD supplemental report, Bloomberg Government reported that the Department of Defense (DOD) FY2020 enacted budget for AI R&D was $5.0 billion, equal to the estimated FY2021 request.47 The FY2021 request estimate included $568 million at DARPA, $250 million for the Algorithmic Cross Functional Team (also known as “Project Maven”), and $132 million for the Joint Artificial Intelligence Center (JAIC).48 conducted from May to August, 2020, and included 1,151 respondents from organizations that had adopted AI in at least one function out of a total of 2,395 participants. 44 Examples of activities under the AI PCA include R&D that is primarily ML, and R&D focused on cybersecurity challenges unique to AI, and on computing architectures or chips optimized for neural networks. Examples of AI activities captured under other PCAs include R&D on robots that employ machine vision, R&D on the broad problem of humanmachine interaction, and general research on neuromorphic computing. Ibid., pp. 11-12. 45 Ibid., p. 11. 46 Ibid., pp. 8-9; DOE NNSA is listed separately from DOE. 47 As reported in AI Index 2021, p. 168. 48 Ibid. Project Maven was launched in April 2017 and charged with rapidly incorporating AI into existing DOD systems to demonstrate the technology’s potential; Robert Work, Former Deputy Secretary of Defense, Memorandum, “Establishment of an Algorithmic Warfare



Another measure of public investment in AI comes from data on government spending on AI contracts. According to analysis by Bloomberg Government using the Federal Procurement Data System (FPDS), U.S. federal agencies spent a total of $1.8 billion on unclassified AI-related contracts in FY2020, more than six times the approximately $300 million spent in FY2015.49 DOD accounts for the vast majority of FY2020 AI-related contract spending ($1.4 billion); after DOD, the National Aeronautics and Space Administration (NASA), the Department of Homeland Security (DHS), and the Department of Health and Human Services (HHS) have accounted for the largest shares of spending on AI contracts among federal agencies since 2010.50 FPDS data may be helpful in identifying broad trends and producing rough estimates, but as other analysts have noted, these data may not be reliable, and decisionmakers should understand their limitations and be cautious in using the data to develop policy or draw conclusions.51 Important considerations in evaluating any of these numbers, and especially in attempting to compare them to funding amounts reported by other countries, are the various potential discrepancies in the numbers by year, investment type, and reporting entity. The AI Index group has previously asserted that there is no consensus on standard labeling for AI-related investment activities, no existing measurement and accounting standards for public investment or expenditures in AI, and no consistently available data comparing public investments across countries.52

AI hype and reality. The recent growth and advances in the field of AI have been impressive, and notable researchers have highlighted both the far-reaching potential benefits and the constraints and potential pitfalls of current and future AI technologies. Sergey Brin, co-founder of Google, has called the period of advancements over the past decade or so a "new spring in artificial intelligence," stating that we are in a "technology renaissance" with monthly advances and "applications across nearly every segment of modern society," while also highlighting potential concerns that accompany these advances (e.g., effects on employment, fairness, transparency, and safety).53

49 As reported in AI Index 2021, p. 169.
50 Ibid.
51 For additional discussions of FPDS data and how the FPDS system operates, see CRS Report R44010, Defense Acquisitions: How and Where DOD Spends Its Contracting Dollars, by John F. Sargent Jr. and Christopher T. Mann.
52 AI Index Steering Committee, The AI Index 2019 Annual Report, Human-Centered AI Institute, Stanford University, Stanford, CA, December 2019, p. 98.



AI systems currently remain constrained to narrowly defined tasks and can fail when small modifications are made to inputs. For example, deep learning systems that have excelled at recognizing facial images can be deceived by the introduction of simple image distortions, or "noise," in the data.54 The introduction of imperceptible or seemingly irrelevant changes to inputs, such as images, text, or sound waves, by malevolent actors has raised concerns about unforeseen vulnerabilities of AI, particularly in applications in autonomous vehicles, medical technologies, and defense systems. One expert noted, "While some people are worried about 'superintelligent' A.I., the most dangerous aspect of A.I. systems is that we will trust them too much and give them too much autonomy while not being fully aware of their limitations."55 Many researchers agree that continued progress in AI requires the development and refinement of new techniques, in addition to increased availability of data and improvements in computing capacity.56
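To make the fragility described above more concrete, the following deliberately simplified sketch uses a toy linear classifier in place of the deep networks discussed in this report. Everything in it is hypothetical (the weights, the image size, and the perturbation budget), but it illustrates the basic mechanism behind gradient-based adversarial perturbations: a change that is tiny for any individual pixel can still flip a model's output.

```python
# Hypothetical illustration only: a toy linear "image classifier" and a small
# adversarial perturbation in the spirit of gradient-sign attacks. The weights,
# the input, and epsilon are invented; real attacks target deep networks.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are trained weights for a two-class classifier over a
# 64x64 grayscale image, flattened to 4,096 values.
weights = rng.normal(size=64 * 64)

def predict(image):
    """Return class 1 if the linear score is positive, otherwise class 0."""
    return int(image @ weights > 0)

# A hypothetical input that the model labels as class 1.
image = rng.normal(size=64 * 64)
if predict(image) == 0:
    image = -image  # flip the sign so the starting prediction is class 1

# For a linear model, the gradient of the score with respect to the input is
# simply `weights`, so stepping against its sign lowers the score the fastest
# for a fixed per-pixel budget.
epsilon = 0.05  # maximum change allowed per pixel (hypothetical)
perturbed = image - epsilon * np.sign(weights)

print("original prediction: ", predict(image))      # 1
print("perturbed prediction:", predict(perturbed))  # typically flips to 0
print("largest per-pixel change:", np.abs(perturbed - image).max())  # equals epsilon
```

The attacks on deep face recognition systems cited above apply the same idea using the gradients of the network itself, rather than of a simple linear model.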

Selected Research and Focus Areas

AI research currently spans a broad range of techniques and application areas. This section describes a selection of areas that have received attention in recent years and may be of particular interest to Congress, including an example of AI in healthcare; it is not meant to portray any area as more or less valuable than another to the overall progress of AI research. Some of these areas include explainable AI, data access and models that can learn from reduced amounts of data, and hardware to improve the speed of, and reduce the computing power required to run, AI algorithms.

53 Sergey Brin, "2017 Founders' Letter," at https://abc.xyz/investor/founders-letters/2017/index.html.
54 Gaurav Goswami et al., "Unravelling Robustness of Deep Learning Based Face Recognition Against Adversarial Attacks," Association for the Advancement of Artificial Intelligence, February 22, 2018, at https://arxiv.org/abs/1803.00401.
55 Melanie Mitchell, "Artificial Intelligence Hits the Barrier of Meaning," New York Times, November 5, 2018, at https://www.nytimes.com/2018/11/05/opinion/artificial-intelligence-machine-learning.html.
56 Tom Simonite, "Your Instagram #Dogs and #Cats Are Training Facebook's AI," Wired, May 2, 2018, at https://www.wired.com/story/your-instagram-dogs-and-cats-are-training-facebooks-ai/.



Explainable AI

As mentioned above in the discussion of third wave AI technologies, explainable AI (XAI) has been an active area of research in recent years. As described by experts at DARPA, XAI research aims to create AI applications that can explain their actions and decisions to human users to improve trust and collaboration between humans and AI systems (Figure 2). Such explanations could help people identify and correct errors that AI systems make when generalizing from training data. This is of particular concern in high-stakes applications, such as classifying disease in medical images and classifying combatants and civilians in military surveillance images.57 Federal agencies and the White House have been working to define and guide federal development and use of understandable and explainable AI systems. In August 2020, NIST released a draft publication for public comment on "Four Principles of Explainable Artificial Intelligence" that presents principles, categories, and theories of XAI.58 In December 2020, Executive Order 13960 included, as a principle guiding the use of AI in the federal government, that AI should be understandable, specifically that agencies shall "ensure that the operations and outcomes of their AI applications are sufficiently understandable by subject matter experts, users, and others."59

Figure 2. Examples of Non-Explainable and Explainable AI Systems. Source: David Gunning, DARPA, "Explainable Artificial Intelligence (XAI) Program Update," November 2017, at https://web.archive.org/web/20200501004458/https://www.darpa.mil/attachments/XAIProgramUpdate.pdf.

57 For a deeper discussion of XAI, see also Alejandro Barredo Arrieta et al., "Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges Toward Responsible AI," Information Fusion, vol. 58 (June 2020), pp. 82-115, at https://www.sciencedirect.com/science/article/pii/S1566253519308103.
58 National Institute of Standards and Technology, Four Principles of Explainable Artificial Intelligence, Draft NISTIR 8312, August 2020, at https://www.nist.gov/document/four-principles-explainable-artificial-intelligence-nistir-8312.
59 Executive Order 13960, "Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government," 85 Federal Register 78939, December 3, 2020, at https://www.federalregister.gov/documents/2020/12/08/2020-27065/promoting-the-use-of-trustworthy-artificial-intelligence-in-the-federal-government.

Data Access

The availability of big data to train AI models enabled major advances in the field over the last decade. For example, the ImageNet project, which contains over 14 million publicly available labeled images, held competitions from 2010 through 2017 that led to improvements in AI visual recognition performance.60 However, those developing AI technologies face barriers to using currently available datasets. In addition to the sheer amount of data available, researchers have noted the importance of using specific types of data of requisite quality for various applications of AI technologies, which can be expensive and time-consuming to generate (e.g., data that have been digitally stored, cleaned, transformed, labeled, and optimized to be deployed in AI algorithms).61 Associated data management infrastructure requirements can be extensive, including cloud technology, edge computing (computing done closer to the source of the data), and labeling and annotation capacity (human capital).62

60 See "ImageNet Large Scale Visual Recognition Challenge," at http://www.image-net.org/challenges/LSVRC/.
61 Husanjot Chahal, Ryan Fedasiuk, and Carrick Flynn, Messier Than Oil: Assessing Data Advantage in Military AI, Center for Security and Emerging Technology, July 2020.
62 Ibid.

While big data sets continue to be instrumental in various AI advances, some have raised concerns that such datasets are increasingly held by private companies and have argued for more publicly available datasets and incentives for technology companies to share proprietary datasets. One study asserted that "As long as large firms have both the computational resources and the access to proprietary datasets to combine with open data, they are likely to maintain a competitive advantage."63



Concerns about private-sector competition and innovation constraints have been noted particularly for AI researchers and developers with limited access to data and to testing and training resources, such as academic researchers, small businesses, and startups. In response, the Select Committee on Artificial Intelligence of the National Science and Technology Council (NSTC) included "develop shared public datasets and environments for AI training and testing" as a priority area in its AI R&D Strategic Plan in 2016 and the 2019 update.64 Additionally, the February 2019 Executive Order on Maintaining American Leadership in Artificial Intelligence directed the heads of all federal agencies to review their federal data and models to increase access and use by the greater non-federal AI research community in a manner that benefits that community, while protecting safety, security, privacy, and confidentiality. Specifically, agencies shall improve data and model inventory documentation to enable discovery and usability, and shall prioritize improvements to access and quality of AI data and models based on the AI research community's feedback.65

Since the national AI R&D strategic plan was first announced in 2016, numerous federal agencies have made varying degrees of progress toward collecting and sharing data. However, challenges remain, such as labeling and curating datasets so that they are useful for AI research, working with AI stakeholders to ensure that datasets and models are fit for use and are maintained as standards and norms evolve, and developing tools to verify data provenance and oversee proper use policies. The strategic plan notes that "data alone are of little use without the ability to bring computational resources to bear on large-scale public datasets."66 Demonstrating the intensive training needed for some systems, Facebook has described an AI experiment using billions of Instagram photos that required hundreds of graphics chips across 42 servers for almost a month.67 An analysis by the nonprofit OpenAI found that the amount of computing power used for training certain AI systems is now rising seven times faster than it did before about 2012 (doubling approximately every 3.4 months post-2012, versus approximately every 2 years pre-2012).68 The OpenAI group recommended that policymakers consider increasing funding for academic research, as some types of AI research are becoming more computationally intensive and expensive.69

Building on federal strategic planning and agency efforts to provide greater access to computational resources and high-quality data to support AI research, Congress directed the Director of the National Science Foundation, in coordination with the Office of Science and Technology Policy, to establish a National AI Research Resource Task Force through the National Artificial Intelligence Initiative Act of 2020.70 The task force is to include four federal members, four members from academic institutions, and four private sector members. The task force is meant to investigate and report on the feasibility and advisability of establishing and sustaining a National Artificial Intelligence Research Resource, defined as "a system that provides researchers and students across scientific fields and disciplines with access to compute resources, co-located with publicly-available, artificial intelligence-ready government and non-government data sets and a research environment with appropriate educational tools and user support."71

63 T. Davies et al. (Eds.), "Algorithms and AI," in State of Open Data: Histories and Horizons, 2019, at https://www.stateofopendata.od4d.net/chapters/issues/artificial-intelligence.html.
64 Select Committee on Artificial Intelligence, National Science and Technology Council, The National Artificial Intelligence Research and Development Strategic Plan: 2019 Update, June 2019, pp. 27-31 (hereinafter, "NSTC Select Committee on Artificial Intelligence 2019 AI R&D Strategic Plan"). See also below under "Federal Activity in AI."
65 Executive Order 13859, "Maintaining American Leadership in Artificial Intelligence," 84 Federal Register 3967, February 11, 2019, at https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-artificial-intelligence.
66 NSTC Select Committee on Artificial Intelligence 2019 AI R&D Strategic Plan, p. 28.
67 Tom Simonite, "Your Instagram #Dogs and #Cats Are Training Facebook's AI," Wired, May 2, 2018, at https://www.wired.com/story/your-instagram-dogs-and-cats-are-training-facebooks-ai/.
68 As reported by Karen Hao, "The Computing Power Needed to Train AI Is Now Rising Seven Times Faster than Ever Before," MIT Technology Review, November 11, 2019, at https://www.technologyreview.com/2019/11/11/132004/the-computing-power-needed-to-train-ai-is-now-rising-seven-times-faster-than-ever-before/.
69 OpenAI, "AI and Compute: Addendum," OpenAI Blog, May 16, 2018, at https://openai.com/blog/ai-and-compute/#addendum.
70 P.L. 116-283, Division E, Section 5106.
71 P.L. 116-283, Division E, Section 5106(g). According to information on AI.gov, information about members and meetings of the task force will be announced and posted once it is established; see https://www.ai.gov/nairrtf/#MEMBERS.
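The growth figures in the OpenAI analysis cited above follow directly from compound doubling. The short calculation below is a generic illustration rather than a reproduction of OpenAI's methodology; only the two reported doubling times are taken from that source, and the rest is simple compound-growth arithmetic showing why they correspond to roughly a sevenfold difference in growth rate.

```python
# Compound-growth arithmetic behind the compute-scaling estimate cited above.
# The two doubling times (about 3.4 months after 2012, about 2 years before)
# are the figures reported in the OpenAI analysis; the math itself is generic.
post_2012_doubling_months = 3.4
pre_2012_doubling_months = 24.0

def annual_growth_factor(doubling_time_months):
    """How many times training compute multiplies in one year, given a doubling time."""
    return 2 ** (12.0 / doubling_time_months)

print(f"pre-2012 growth per year:  ~{annual_growth_factor(pre_2012_doubling_months):.2f}x")
print(f"post-2012 growth per year: ~{annual_growth_factor(post_2012_doubling_months):.1f}x")

# The ratio of the two doubling times is roughly the "seven times faster" figure.
print(f"ratio of doubling times:   ~{pre_2012_doubling_months / post_2012_doubling_months:.1f}")
```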

AI Training with Small and Alternative Datasets

Some researchers have responded to the concern over limited access to big datasets for training by focusing on alternative ways to obtain or use data to reduce costs and computing power requirements. One method that has been explored is creating techniques and models that can learn from reduced amounts of data or fewer training iterations. For example, researchers at Google DeepMind created AI software that initially needs to analyze several hundred categories of images, but afterwards can learn to recognize new objects from just one picture—called "one-shot learning."72 Additional approaches include using alternative datasets and techniques. Some startups have reportedly created synthetic data to generate a large enough dataset for training AI models.73 Others have demonstrated the promise of relatively unknown or novel AI techniques. For example, in recent years, some AI technologies developed by smaller AI groups have outperformed technologies from large companies such as Google and Intel in certain benchmark measures at Stanford University's DAWNBench challenge.74 One report on this competition states that "these metrics [such as cost and algorithm speed] help us judge whether small teams can take on the tech giants. The results don't give a straightforward answer, but they suggest that raw computing power isn't the be-all and end-all for AI success. Ingenuity in how you design your algorithms counts for at least as much. While big tech companies like Google and Intel had predictably strong showings in a number of tasks, smaller teams (and even individuals) ranked highly by using unusual and little-known techniques."75
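As a deliberately simplified illustration of the synthetic-data approach mentioned above, the sketch below generates labeled training examples from an invented statistical process and fits a small classifier to them. No real dataset is involved; the distributions, sample sizes, and nearest-centroid model are all hypothetical stand-ins for the far richer generative techniques startups reportedly use.

```python
# Hypothetical sketch of training on synthetic data: examples are generated
# from an invented two-class Gaussian process rather than collected, then a
# tiny nearest-centroid classifier is fit and evaluated on fresh samples.
import numpy as np

rng = np.random.default_rng(1)

def make_synthetic(n_samples):
    """Generate n_samples labeled points from two made-up Gaussian classes."""
    labels = rng.integers(0, 2, size=n_samples)
    centers = np.where(labels[:, None] == 1, 2.0, -2.0)  # class means at +/- 2
    features = centers + rng.normal(size=(n_samples, 2))
    return features, labels

# "Train": compute one centroid per class from purely synthetic examples.
train_x, train_y = make_synthetic(5_000)
centroids = np.stack([train_x[train_y == c].mean(axis=0) for c in (0, 1)])

def predict(points):
    """Assign each point to the class whose centroid is nearest."""
    distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    return distances.argmin(axis=1)

# Evaluate on a fresh batch drawn from the same synthetic process.
test_x, test_y = make_synthetic(1_000)
accuracy = float((predict(test_x) == test_y).mean())
print(f"accuracy on held-out synthetic data: {accuracy:.3f}")
```

In practice the generate-train-validate loop is the same, but the synthetic examples come from simulators or generative models designed to mimic real data that a developer cannot obtain at sufficient scale.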

72 Will Knight, "Machines Can Now Recognize Something After Seeing It Once," MIT Technology Review, November 3, 2016, at https://www.technologyreview.com/2016/11/03/6485/machines-can-now-recognize-something-after-seeing-it-once/.
73 Tom Simonite, "Some Startups Use Fake Data to Train AI," Wired, April 25, 2018, at https://www.wired.com/story/some-startups-use-fake-data-to-train-ai/.
74 The DAWNBench challenge is an AI engineering competition in which teams and individuals from universities, governments, and industry compete to design the best algorithms, with Stanford's researchers acting as adjudicators. Each entry must meet basic accuracy standards (for example, recognizing 93% of dogs in a given dataset) and is judged on the metrics of training time and cost. See James Vincent, "An AI Speed Test Shows Clever Coders Can Still Beat Tech Giants Like Google and Intel," The Verge, May 7, 2018, at https://www.theverge.com/2018/5/7/17316010/fast-ai-speed-test-stanford-dawnbench-google-intel.
75 James Vincent, "An AI Speed Test Shows Clever Coders Can Still Beat Tech Giants Like Google and Intel," The Verge, May 7, 2018, at https://www.theverge.com/2018/5/7/17316010/fast-ai-speed-test-stanford-dawnbench-google-intel.



AI Hardware

Hardware advances have played another key role in AI progress over the past decade, and hardware development—including AI chips and high performance computing (HPC) for AI applications—is an active research area. According to data from CB Insights, global equity funding for AI chip startups rose from just over $200 million from 13 deals in 2016 to approximately $700 million from 30 deals in 2018.76 Companies including Nvidia, Google, Microsoft, and Facebook have been working on AI chip R&D, including developing chips designed for specialized tasks and designed to optimize energy efficiency for particular AI applications.77 One of the largest recent efforts in the United States to use HPC for AI applications comes from a partnership between the DOE's Oak Ridge National Laboratory (ORNL) and IBM to create the Summit supercomputer. Summit contains "AI-optimized" graphics processing units (GPUs) and has been described as "a supercomputer suited for AI."78 The type and large number of chips allow it to run intensive ML techniques, such as DL.79

76 Data from CB Insights as reported in Richard Waters, "Facebook Joins Amazon and Google in AI Chip Race," Financial Times, February 18, 2019, at https://www.ft.com/content/1c2aab18-3337-11e9-bd3a-8b2a211d90d5.
77 Ibid.
78 Department of Energy, Oak Ridge National Laboratory, "Summit," at https://www.olcf.ornl.gov/summit/, and "ORNL Launches Summit Supercomputer," news release, June 8, 2018, at https://www.ornl.gov/news/ornl-launches-summit-supercomputer.
79 Tom Simonite, "The US Again Has the World's Most Powerful Supercomputer," Wired, June 8, 2018, at https://www.wired.com/story/the-us-again-has-worlds-most-powerful-supercomputer.

Sector Example: AI in Healthcare

Numerous companies and researchers have been developing and testing AI technologies for use in healthcare, for example, to detect diabetic retinopathy (an eye condition that can cause blindness in diabetic patients) and skin cancer, and to mine large quantities of medical data to derive insights.80 Some hospitals are also experimenting with using voice recognition, and associated ML and NLP technology, to assist doctors and patients.81 Growth in AI and its potential healthcare applications has led to the development of various partnerships among public and private sector groups. In 2019, for example, established pharmaceutical companies partnered with startups and researchers working on AI use for drug discovery and development.82 Federal agencies have also begun assessing the potential for AI in certain settings, such as drug discovery and clinical trials,83 and working with the private sector to evaluate the use of AI systems. For example, a partnership between the Department of Veterans Affairs and DeepMind has worked to identify risk factors for patient deterioration during hospitalization in an effort to develop early interventions and improve care.84 Further, the Food and Drug Administration has been developing a framework for regulating AI- and ML-based software as a medical device and addressing subsequent modifications to such software.85

While there are many encouraging developments for using AI technologies in healthcare, stakeholders have remarked on the slow progress in using AI broadly within healthcare settings, and various challenges and questions remain. Researchers and clinicians have raised questions about the accuracy, security, and privacy of these technologies; the availability of sufficient health data on which to train systems; medical liability in the event of adverse outcomes; patient access and receptivity; and the adequacy of current user consent processes.86 A 2019 literature review and meta-analysis of the performance of DL systems compared to medical professionals in detecting disease from medical imaging concluded that few of the 82 identified studies presented externally validated results and that "poor reporting is prevalent in deep learning studies, which limits reliable interpretation of the reported diagnostic accuracy"; the authors suggested that new reporting standards could improve future studies.87

80 Google, "Seeing Potential: How a Team at Google Is Using AI to Help Doctors Prevent Blindness in Diabetics," at https://www.google.com/about/stories/seeingpotential/; Melanie Evans and Laura Stevens, "Big Tech Expands Footprint in Health," November 27, 2018, at https://www.wsj.com/articles/amazon-starts-selling-software-to-mine-patient-health-records-1543352136; and H.A. Haenssle et al., "Man Against Machine: Diagnostic Performance of a Deep Learning Convolutional Neural Network for Dermoscopic Melanoma Recognition in Comparison to 58 Dermatologists," Annals of Oncology, vol. 29, no. 8 (August 1, 2018), pp. 1836-1842.
81 Ruth Hailu, "5 Burning Questions About Deploying Voice Recognition Technology in Health Care," STAT News, July 10, 2019, at https://www.statnews.com/2019/07/10/5-questions-voice-recognition-technology/.
82 Robert Langreth, "AI Drug Hunters Could Give Big Pharma a Run for Its Money," Bloomberg, July 15, 2019, at https://www.bloomberg.com/news/features/2019-07-15/google-ai-could-challenge-big-pharma-in-drug-discovery.
83 Government Accountability Office, Artificial Intelligence in Health Care: Benefits and Challenges of Machine Learning in Drug Development, GAO-20-215SP, January 21, 2020, at https://www.gao.gov/products/GAO-20-215SP.
84 Department of Veterans Affairs, Office of Public and Intergovernmental Affairs, "VA Partners With DeepMind to Build Machine Learning Tools to Identify Health Risks for Veterans," February 21, 2018, at https://www.va.gov/opa/pressrel/pressrelease.cfm?id=4013.
85 See U.S. Food and Drug Administration, "Artificial Intelligence and Machine Learning in Software as a Medical Device," at https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device.
86 Ruth Hailu, "5 Burning Questions About Deploying Voice Recognition Technology in Health Care," STAT News, July 10, 2019, at https://www.statnews.com/2019/07/10/5-questions-voice-recognition-technology/; and Lauren Joseph, "5 Burning Questions About Using Artificial Intelligence to Prevent Blindness," STAT News, July 17, 2019, at https://www.statnews.com/2019/07/17/artificial-intelligence-to-prevent-blindness/.

Federal Activity in AI

In recent years, the federal government—including the White House, federal agencies, and Congress—has increasingly supported and conducted AI R&D, invested in AI technologies, and worked to address issues with AI development and use. AI has been of interest to Congress since at least the 1980s, and congressional AI activities, including legislation and oversight hearings, increased in the 115th and 116th Congresses.88 This section of the report focuses on selected federal activities during the Administrations of Donald J. Trump and Barack Obama and in the 115th and 116th Congresses.

Executive Branch

The Trump and Obama Administrations took a variety of actions related to AI by establishing initiatives through executive order, forming committees, and releasing reports. Further, in accordance with the National Artificial Intelligence Initiative Act of 2020 (P.L. 116-283, Division E), the Office of Science and Technology Policy (OSTP) launched the National AI Initiative Office (NAIIO) on January 12, 2021, to coordinate and support the National AI Initiative (the act is further described in the "Legislation" section below).89

87 Xiaoxuan Liu et al., "A Comparison of Deep Learning Performance Against Health-Care Professionals in Detecting Diseases from Medical Imaging: A Systematic Review and Meta-Analysis," The Lancet Digital Health, vol. 1, no. 6 (October 1, 2019), pp. E271-E297.
88 For example, see U.S. Congress, Subcommittee on Investigations and Oversight, Committee on Science and Technology, U.S. House of Representatives, Robotics, 97th Congress, 2nd sess., June 2 and 23, 1982 (Washington, DC: GPO, 1983).
89 See information provided by the National Artificial Intelligence Initiative Office, "NAIIO—National Artificial Intelligence Initiative Office," at https://www.ai.gov/about/#NAIIO_National_Artificial_Intelligence_Initiative_Office. The "AI.gov" website was originally launched by the Trump Administration; a new version of the website was launched by the Biden Administration on May 5, 2021.



Executive Orders on AI

In February 2019, President Trump released an executive order establishing the American AI Initiative (E.O. 13859).90 In addition to promoting AI R&D investment and coordination, objectives of the E.O. include making federal data, models, and computing resources available for AI development, reducing barriers to the use of AI technologies, developing technical and international standards around AI innovation, preparing an action plan around AI and national security concerns, and training the workforce to develop and use AI.

In December 2020, President Trump released an executive order promoting the use of trustworthy AI in the federal government (E.O. 13960).91 The E.O. establishes a common set of principles for the design, development, acquisition, and use of AI in the federal government to foster public trust and confidence, and directs the Office of Management and Budget (OMB) to develop policy guidance for implementing the principles across agencies. The E.O. further includes direction to federal agencies (1) to provide annual, publicly available inventories of nonclassified, nonsensitive use cases of AI, and (2) to undertake activities to expand the number of AI experts at federal agencies, including through creating an AI track within the Presidential Innovation Fellows program and by assessing potential expansion of federal rotational programs.

National Science and Technology Council Committees

The National Science and Technology Council (NSTC) convenes federal science and technology leaders as a primary means within the executive branch to coordinate science and technology policies across federal agencies.92 The Trump Administration established a new committee and expanded on committees and working groups established by the Obama Administration, with the following NSTC bodies coordinating cross-agency efforts in AI and ML:

 The Select Committee on Artificial Intelligence was established in May 2018 and rechartered on January 5, 2021, "in accordance with the National Artificial Intelligence Act of 2020 … with a broader scope and membership." The Committee is composed of heads of agencies and advises the White House on interagency AI R&D priorities; provides a formal mechanism for interagency policy coordination and the development of federal AI activities; and addresses national and international AI policy matters.93
 The ML and AI (MLAI) Subcommittee is the operations and implementation arm of the Select Committee on Artificial Intelligence and includes federal employees with budgetary decisionmaking responsibilities to help focus priorities for AI investments through agency programs.
 The AI Interagency Working Group is a community of practice,94 taking on tasks that require deep expert knowledge and producing products such as the AI R&D Strategic Plan and its updates.95

90 Executive Order 13859, "Maintaining American Leadership in Artificial Intelligence," 84 Federal Register 3967, February 11, 2019.
91 Executive Order 13960, "Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government," 85 Federal Register 78939, December 8, 2020, at https://www.federalregister.gov/d/2020-27065.
92 For additional information on the NSTC, see CRS Report R43935, Office of Science and Technology Policy (OSTP): History and Overview, by John F. Sargent Jr. and Dana A. Shea.
93 For additional information on the NSTC Select Committee on Artificial Intelligence, see the January 5, 2021, charter, at https://trumpwhitehouse.archives.gov/wp-content/uploads/2021/01/Charter-Select-Committee-on-AI-Jan-2021-posted.pdf.
94 A community of practice is generally a group of professionals who are active in, or interested in, a particular craft or profession. For example, the General Services Administration (GSA) also leads an AI community of practice to "bring together federal employees who are active in, or interested in, AI policy technology, standards, and programs to facilitate the sharing of best practices, use cases, and lessons learned; and [to] advance and share tools, playbooks success stories with a community of interested professionals." Steven Babitch, "GSA Launches Artificial Intelligence Community of Practice," GSA Blog, November 5, 2019, at https://www.gsa.gov/blog/2019/11/05/gsa-launches-artificial-intelligence-community-of-practice.
95 Overviews of the activities of each body include descriptions provided during a telephone conversation between CRS and Dr. Lynne Parker, Deputy Chief Technology Officer of the United States, March 2019.

Select AI Reports and Documents

As federal government interest and engagement in AI has grown, the executive branch has included a focus on AI in a variety of strategic plans, reports, and memoranda, including the following.

 The NSTC first released the National AI Research and Development Strategic Plan in 2016 with seven strategic priorities.96 In September 2018, NITRD's National Coordination Office requested input from the public on whether and how the plan should be revised and improved.97 In response, various industry groups requested more detail on federal priorities in AI R&D—including on specific challenges, applications, ways to incorporate private sector participation, and goals for investments from both technical and social impact perspectives. Some groups also asserted a need to align federal plans for enabling technologies such as 5G and quantum computing with the AI strategy.98 In June 2019, NSTC released an updated plan with eight strategic priorities, the last of which was new: (1) make long-term investments in AI research; (2) develop effective methods for human-AI collaboration; (3) understand and address ethical, legal, and societal implications of AI; (4) ensure the safety and security of AI systems; (5) develop shared public datasets and environments for AI training and testing; (6) measure and evaluate AI technologies through standards and benchmarks; (7) better understand the national AI R&D workforce needs; and (8) expand public-private partnerships to accelerate advances in AI.99
 In August 2019, in response to E.O. 13859, NIST released the Plan for Federal Engagement in Developing Technical Standards and Related Tools in AI. NIST noted that the plan was prepared with broad public and private sector input. It includes recommendations for federal government activities to engage in deep, long-term AI standards development "to speed the pace of reliable, robust, and trustworthy AI technology development."100
 In August 2020, OMB and OSTP provided their annual memorandum to the heads of federal R&D agencies laying out the Administration's R&D budget priorities for FY2022. The memorandum stated that industries of the future—including AI—remained a top R&D priority for the Administration, as in prior years.101
 In November 2020, OMB released a memorandum to the heads of federal agencies providing guidance for the regulation of AI. The purpose of the memo was to guide regulatory and nonregulatory oversight of AI applications developed and deployed outside of the federal government. It lays out 10 principles for the stewardship of AI applications, including topics such as risk assessment, fairness and nondiscrimination, disclosure and transparency, and interagency coordination. It further touches on reducing barriers to the deployment and use of AI, including increasing access to government data, communicating benefits and risks to the public, engaging in the development and use of voluntary consensus standards, and engaging in international regulatory cooperation efforts. Agency plans to conform to the memorandum are due on May 17, 2021, and must include any statutory authorities governing agency regulation of AI applications, information collections on AI from regulated entities, regulatory barriers to AI applications, and any planned or considered regulatory actions on AI.102

96 National Science and Technology Council, Networking and Information Technology Research and Development Subcommittee, The National Artificial Intelligence Research and Development Strategic Plan, October 2016.
97 NITRD National Coordination Office, "Request for Information on Update to the 2016 National Artificial Intelligence Research and Development Strategic Plan," 83 Federal Register 48655, September 26, 2018.
98 MeriTalk, "Industry Wants More Detail on AI R&D Plan," December 21, 2018, at https://www.meritalk.com/articles/industry-wants-more-detail-on-ai-rd-plan/.
99 NSTC Select Committee on Artificial Intelligence 2019 AI R&D Strategic Plan.
100 National Institute of Standards and Technology, U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools, August 9, 2019, pp. 36.
101 Office of Management and Budget and Office of Science and Technology Policy, "Memorandum for the Heads of Executive Departments and Agencies: Fiscal Year (FY) 2022 Administration Research and Development Budget Priorities and Cross-cutting Actions," August 14, 2020, at https://www.whitehouse.gov/wp-content/uploads/2020/08/M-20-29.pdf.
102 Russell Vought, Director of the Office of Management and Budget, "Guidance for Regulation of Artificial Intelligence Applications," Memorandum for the Heads of Executive Departments and Agencies, November 17, 2020, at https://www.whitehouse.gov/wp-content/uploads/2020/11/M-21-06.pdf.

In addition to the initial National AI R&D Strategic Plan, two other background documents on AI were also prepared in 2016 by the NSTC and other offices in the Executive Office of the President. These reports were Preparing for the Future of Artificial Intelligence and Artificial Intelligence, Automation, and the Economy.103

Federal Agency Activities

Engagement on AI varies across agencies and may include examining and adopting AI technologies for internal agency use, holding hearings to examine issues surrounding the development and use of AI,104 conducting AI R&D in-house (intramural R&D), and funding AI R&D by outside groups (extramural R&D), including at institutions of higher education (IHEs), nonprofits, and industry. E.O. 13859 directed federal R&D agencies to "promote sustained investment in AI R&D in collaboration with industry, academia, international partners and allies, and other non-Federal entities" and the heads of those agencies to consider AI as an R&D priority when preparing their budget requests to Congress. E.O. 13960 highlighted a range of ways that federal agencies are already employing AI, including identifying information security threats, facilitating review of large datasets, streamlining processes for grant applications, modeling weather patterns, and facilitating predictive maintenance.

Although there are numerous examples of federal agencies using AI in-house, there is currently no comprehensive database of AI projects within agencies, though some recent efforts have attempted to better compile such information.105 The General Services Administration (GSA) has reportedly been working to catalogue some use cases of AI across the federal government.106 Additionally, the Administrative Conference of the United States (ACUS) commissioned a study, completed in February 2020, "to map how federal agencies are currently using AI to make and support decisions."107 Among 142 federal agencies, the study authors identified use cases—defined as "instance[s] in which an agency had considered using or had already deployed AI/ML technology to carry out a core function"—in 64 (45%) agencies, based on searches of publicly available information.108 Of the 157 use cases, the authors noted that 84 (53%) were built in-house, rather than being procured through private contracting or noncommercial collaboration (e.g., with an academic laboratory or through a public-facing competition).109 Building on this initial study, E.O. 13960 requires federal agencies to create publicly available inventories of use cases of AI, based on common criteria, format, and inventory mechanisms created by the Federal Chief Information Officers Council. Some examples of federal agencies using AI in-house include the following:

 The Department of Health and Human Services used AI and NLP technologies to identify incorrect citations and outdated regulations in the Code of Federal Regulations as part of a "department-wide regulatory clean-up initiative."110
 NASA launched robotic process automation (RPA) pilot projects in accounts payable and receivable, IT spending, and human resources. The projects appeared to work well—in the human resources application, for example, 86% of transactions were completed without human intervention—and are being rolled out across the organization. NASA reportedly moved forward with implementing more RPA bots, some with higher levels of intelligence.111
 The National Oceanic and Atmospheric Administration (NOAA) has developed an AI strategy to "expand the application of [AI] in every NOAA mission area by improving the efficiency, effectiveness, and coordination of AI development and usage across the agency."112
 The Social Security Administration has used AI/ML in its adjudication work to address challenges from high caseloads and in ensuring accuracy and consistency of decisionmaking, which have reportedly persisted through decades of quality improvement efforts.113

103 Available at https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf; and https://obamawhitehouse.archives.gov/sites/whitehouse.gov/files/documents/Artificial-Intelligence-Automation-Economy.PDF.
104 For example, the Federal Trade Commission (FTC) held a hearing in November 2018 focused on consumer welfare implications associated with the use of algorithmic decision tools, AI, and predictive analytics; see https://www.ftc.gov/news-events/events-calendar/ftc-hearing-7-competition-consumer-protection-21st-century.
105 CRS communications with the Office of Science and Technology Policy, February 2020; and David Freeman Engstrom et al., Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies, Report delivered to the Administrative Conference of the United States, February 2020.
106 CRS communications with the Office of Science and Technology Policy, February 2020.
107 See Administrative Conference of the United States, Office of the Chairman Projects, "Artificial Intelligence in Federal Agencies," February 2020, at https://www.acus.gov/research-projects/artificial-intelligence-federal-agencies (hereinafter, "ACUS report 2020").
108 Ibid., pp. 15-18. The authors limited the included agencies to those with over 400 employees and excluded active military and intelligence-related organizations.
109 Ibid., p. 18.
110 Department of Health and Human Services, "HHS Launches First-of-Its-Kind Regulatory Clean-Up Initiative Utilizing AI," November 17, 2020, at https://www.hhs.gov/about/news/2020/11/17/hhs-launches-first-its-kind-regulatory-clean-up-initiative-utilizing-ai.html.
111 Thomas H. Davenport and Rajeev Ronanki, "Artificial Intelligence for the Real World," Harvard Business Review, January-February 2018, pp. 108-116, at https://hbr.org/2018/01/artificial-intelligence-for-the-real-world.
112 National Oceanic and Atmospheric Administration, NOAA Artificial Intelligence Strategy: Analytics for Next-Generation Earth Science, February 2020, at https://nrc.noaa.gov/LinkClick.aspx?fileticket=0I2p2-Gu3rA%3d&tabid=91&portalid=0.
113 Administrative Conference of the United States, "Artificial Intelligence in Federal Agencies," February 2020, pp. 38-39.

Considerations for agency adoption of AI mirror private sector considerations—namely, how can AI be used as a tool to advance process



automation, provide insight into data analyses, and improve services (i.e., improve timeliness and enhance citizen interactions with federal agencies, such as through the use of chatbots). Technology leaders in federal agencies, industry, and academia have argued that the initial implementation of AI technologies should be evaluated in terms of challenges and opportunities associated with an agency's current data collection, management, and analysis processes, rather than the capabilities of AI systems themselves.114 Additional considerations include how to evaluate and acquire AI systems. To further guide agencies, E.O. 13960 provides broad principles for federal design, development, acquisition, and use of AI, including that AI systems should be (1) lawful and respectful of the nation's values; (2) purposeful and performance-driven; (3) accurate, reliable, and effective; (4) safe, secure, and resilient; (5) understandable; (6) responsible and traceable; (7) regularly monitored; (8) transparent; and (9) accountable. Given that OMB is tasked with developing, by June 2021, a roadmap for policy guidance to better support federal government use of AI, more concrete plans and actions may be specified across agencies.

The National Science Foundation (NSF) has been a primary nondefense source of federal extramural support for AI R&D for decades and currently "supports fundamental research, education and workforce development, and advanced, scalable computing resources that collectively enhance fundamental research in AI."115 Fundamental AI research areas include how computer systems represent knowledge, learn, process spoken and written language, and solve problems, as well as the impacts of AI on continuing education and adult retraining.116 Additional federal agency activities in AI R&D include

 NIST engaged in national and international AI standards development activities;
 DARPA launched the AI Next campaign, focused on "improving the robustness and reliability of AI systems; enhancing the security and resiliency of machine learning and AI technologies; reducing power, data, and performance inefficiencies; and pioneering the next generation of AI algorithms and applications, such as 'explainability' and common sense reasoning";117
 DOE established the Artificial Intelligence and Technology Office to "accelerate the delivery of AI-enabled capabilities, scale the department-wide development and impact of AI, and synchronize AI activities to advance the agency's core missions, expand partnerships, and support American AI Leadership";118
 The Department of Veterans Affairs (VA) established a National Artificial Intelligence Institute (NAII) to develop AI R&D capabilities in the VA;119 and
 The National Institute of Justice—the research wing of the Department of Justice—supported research on "crime-fighting AI" which "it believes could be used to fight human trafficking, illegal border crossings, drug trafficking, and child pornography" by helping investigators sort through data.120

114 See remarks by Stephen Dennis, Director of the Data Analytics Engine, Science and Technology Directorate, Department of Homeland Security, at the FCW Workshop, "Artificial Intelligence: Moving from Vision to Implementation," March 13, 2018; and Davenport and Ronanki, 2018.
115 Information about NSF support for AI research and workforce programs and interagency work can be found at "Artificial Intelligence at NSF," at https://www.nsf.gov/cise/ai.jsp.
116 NSF, "Statement on Artificial Intelligence for American Industry," press statement 18-005, May 10, 2018, at https://www.nsf.gov/news/news_summ.jsp?cntn_id=245418.
117 DARPA, "DARPA Announces $2 Billion Campaign to Develop Next Wave of AI Technologies," September 7, 2018, at https://www.darpa.mil/news-events/2018-09-07.
118 See U.S. Department of Energy, Artificial Intelligence and Technology Office, at https://www.energy.gov/science-innovation/artificial-intelligence-and-technology-office.
119 See U.S. Department of Veterans Affairs, Office of Research and Development, "National Artificial Intelligence Institute (NAII)," at https://www.research.va.gov/naii/.
120 Kate Conger, "Justice Department Drops $2 Million to Research Crime-Fighting AI," Gizmodo, February 27, 2018; and DOJ's solicitation for the program can be found at https://nij.gov/funding/Documents/solicitations/NIJ-2018-14000.pdf.

Congress

The 115th and 116th Congresses focused on AI more frequently and explicitly than previous Congresses, in terms of enacted and introduced legislation and hearings. Additionally, bipartisan AI caucuses were launched in the House and the Senate.121 The AI Index group used data from McKinsey & Company to assess mentions of AI in Congress based on the Congressional Record. The analysis found that, after a maximum of 9 mentions in any year from 2011 through 2016, mentions increased each year throughout the 115th and 116th Congresses, with 129 mentions reported in 2020 (Figure 3).122 This section of the report provides a brief summary of legislative activities in the 116th and 117th Congresses, including descriptions of laws and selected bills that focused on, or included specific provisions focused on, AI and ML, as well as hearings from the 115th-117th Congresses (as of the date of this report).123

Figure 3. Mentions of Artificial Intelligence and Machine Learning in the Congressional Record, 2011-2020. Source: AI Index Steering Committee, The AI Index 2021 Annual Report, Human-Centered AI Institute, Stanford University, Stanford, CA, March 2021, pp. 171-172; data from the McKinsey Global Institute, 2020. Notes: Per the AI Index 2021 Annual Report, each count indicates that AI or ML was mentioned during a particular event contained in the Congressional Record, including the reading of a bill. If a speaker or member mentioned AI or ML multiple times within remarks, or multiple speakers mentioned AI or ML within the same event, it appears only once as a result. Counts for AI and ML are separate, as they were conducted in separate searches. Mentions of the abbreviations "AI" or "ML" are not included. Additional information about the search methodology is included in the AI Index 2021 Annual Report appendix, p. 216.

121 The House Congressional AI Caucus was originally launched in 2015; see https://artificialintelligencecaucus-olson.house.gov/. The Senate AI Caucus was announced on March 13, 2019; see announcements from the caucus co-chairs at https://www.portman.senate.gov/public/index.cfm/2019/3/portman-heinrich-launch-bipartisan-artificial-intelligence-caucus, and https://www.heinrich.senate.gov/press-releases/heinrich-portman-launch-bipartisan-artificial-intelligence-caucus.
122 AI Index 2021 Annual Report, p. 172.
123 Additional bills mentioned AI or ML without including specific provisions related to the technologies. For example, the Developing Innovation and Growing the Internet of Things Act (S. 1611, 116th Congress) stated in the findings that "the Internet of things will … play a key role in developing artificial intelligence and advanced computing capabilities," but AI was not included anywhere else in the bill. Such bills are not discussed in this section.



Legislation

As of the date of this report, multiple bills introduced in the 117th Congress have included language about AI, either as a focus of the bill or in a specific provision, though no legislation has been enacted. Some bills have included AI as one of multiple key technology areas important for U.S. competitiveness.124 Other bills have focused on federal AI expertise; addressed potential bias in automated decision systems that may use AI; or included AI as a technology with potential applications in healthcare.125

At least four laws enacted in the 116th Congress focused on AI or included AI-focused provisions. The FY2021 NDAA included multiple sections related to Department of Defense (DOD) AI activities in R&D, acquisitions, and workforce expansion and training. These sections built on prior direction in the FY2020 NDAA, which included provisions related to recruiting expertise at the DOD Joint Artificial Intelligence Center (JAIC); establishing DOD processes to update policies on emerging technologies, including AI; extending authorization for the National Security Commission on Artificial Intelligence; and requiring an analysis of major initiatives of the intelligence community in AI and ML. Further, the FY2021 NDAA incorporated the expansive National Artificial Intelligence Initiative Act of 2020 (Division E), which included sections related to

 codifying the establishment of an American AI Initiative (Section 5101);
 establishing the National AI Initiative Office to support federal AI activities, including technical, programmatic, and administrative support for activities of the AI Initiative, as specified (Section 5102);
 establishing an Interagency Committee at OSTP to coordinate federal programs and activities in support of the AI Initiative, including developing periodic strategic plans for AI (Section 5103);126
 establishing a National AI Advisory Committee with representatives from academic institutions, companies, nonprofit and civil society entities, and federal laboratories to provide to the President and the AI Initiative Office "advice and information on science and technology research, development, ethics, standards, education, technology transfer, commercial application, security, and economic competitiveness" related to AI (Section 5104(a));
 establishing as part of the National AI Advisory Committee a Subcommittee on AI and Law Enforcement to provide advice on bias, data security, adoptability, and legal standards (Section 5104(e));
 directing NSF to contract with the National Academies of Sciences, Engineering, and Medicine to conduct a study on the current and future impact of AI on the U.S. workforce across sectors (Section 5105);
 establishing a task force to investigate the feasibility of, and plan for, a National AI Research Resource, defined as "a system that provides researchers and students across scientific fields and disciplines with access to compute resources, co-located with publicly-available, AI-ready government and non-government data sets and a research environment with appropriate educational tools and user support" (Section 5106);
 directing NSF to establish a program to support a network of National AI Research Institutes, which shall be public-private partnerships that focus on a particular economic or social sector and associated ethical, societal, safety, and security implications, or a cross-cutting challenge for AI systems, with the potential to create or enhance innovation ecosystems and support interdisciplinary R&D, education, and workforce development in AI (Section 5201);127
 directing NIST to support AI standards development, develop a risk management framework for trustworthy AI systems, and develop best practices for documenting and sharing data sets used to train AI systems (Section 5301);
 directing NOAA to establish a Center for AI (Section 5303);
 directing NSF to fund research and education activities in AI and related fields (Section 5401); and
 directing DOE to carry out a cross-cutting R&D program to advance AI tools, systems, capabilities, and workforce needs and to improve the reliability of AI methods and solutions relevant to DOE's mission (Section 5501).

124 For the 117th Congress, see, for example, the Endless Frontier Act (S. 1260), the Strategic Competition Act of 2021 (S. 1169), the STRATEGIC Act (S. 687), and the Democracy Technology Partnership Act (S. 604).
125 For the 117th Congress, see, for example, "A bill to establish a Federal artificial intelligence scholarship-for-service program" (S. 1257); the Unemployment Insurance Technology Modernization Act of 2021 (S. 490); the Black Maternal Health Momnibus Act of 2021 (S. 346 and H.R. 959); and the Tech to Save Moms Act (H.R. 937).
126 This section effectively expanded on and codified the NSTC Select Committee on Artificial Intelligence that was established in the Trump Administration.
127 NSF began funding National AI Research Institutes in FY2020 in a joint effort with the U.S. Department of Agriculture National Institute of Food and Agriculture, the Department of Homeland Security Science and Technology Directorate, and the Department of Transportation Federal Highway Administration; see NSF's program description page at https://www.nsf.gov/funding/pgm_summ.jsp?pims_id=505686.

The Consolidated Appropriations Act, 2021 (P.L. 116-260) included the AI in Government Act of 2020 (Division U, Title I), which created within the General Services Administration (GSA) an AI Center of Excellence (CoE) to facilitate the adoption of AI technologies in the federal government.128 The AI CoE is further required, among other activities, to collect, aggregate, and publish on a publicly available website information regarding federal programs, pilots, and other initiatives; and to advise federal agencies on the acquisition and use of AI through technical insight and expertise. The act required OMB to issue a memorandum to federal agencies regarding the development of AI policies, approaches for removing barriers to using AI technologies, and best practices for identifying, assessing, and mitigating any discriminatory impact or bias and any unintended consequences of using AI. Additionally, the act required the Office of Personnel Management to establish or update an occupational job series to include positions with primary duties in AI and to estimate current and future numbers of federal employment positions related to AI at each agency.

The Further Consolidated Appropriations Act, 2020 (P.L. 116-94) included a provision amending the Export-Import Bank Act of 1945 to establish a Program on China and Transformational Exports (Section 402). This program is directed to support the extension of loans, guarantees, and insurance that aim to "advance the comparative leadership of the United States with respect to the People's Republic of China, or support United States innovation, employment, and technological standards, through direct exports in" artificial intelligence, among other areas.

The Identifying Outputs of Generative Adversarial Networks Act (P.L. 116-258) directed NSF and NIST to support research on generative adversarial networks, including research on manipulated or synthesized content and information authenticity and the development of measurements and standards necessary to accelerate the development of technical tools to examine the function and outputs of GANs.

128 The act codified the GSA AI Center of Excellence that was launched in 2019; see https://www.ai.gov/legislation-and-executive-orders/.


Multiple additional bills introduced in the 116th Congress address AI applications, such as facial recognition and deepfakes,129 and areas in which AI is deployed, including law enforcement and criminal justice, healthcare, energy efficiency, natural resources, and defense and national security.130 Some of these bills are focused on AI, while others include AI-specific provisions as part of a broader focus.

Hearings

Various committees in both the House of Representatives and the Senate held hearings focused on issues in AI and ML during the 115th, 116th, and 117th Congresses. Given its many, wide-ranging applications, the topic of AI has arisen as a consideration during numerous hearings. Hearing subjects with an explicit focus on AI and ML have ranged from broad considerations of AI and ML technologies and policies, including societal and ethical issues,131 international research and competition,132 and national security,133 to more focused topics, such as use by the federal government,134 potential impact to

129

For the 116th Congress, see, for example, the Ethical Use of Facial Recognition Act (S. 3284); the Facial Recognition Technology Warrant Act of 2019 (S. 2878); the Facial, Analysis, Comparison, and Evaluation (FACE) Protection Act of 2019 (H.R. 4021); the Commercial Facial Recognition Privacy Act of 2019 (S. 847); the Deepfakes Report Act (H.R. 3600 and S. 2065); and the Deep Fake Detection Prize Competition Act (H.R. 5532). 130 For the 116th Congress, see, for example, the Advancing Innovation to Assist Law Enforcement Act (H.R. 2613); the Black Maternal Health Momnibus Act of 2020 (S. 3424, H.R. 6142); the Department of Energy Veterans’ Health Initiative Act (S. 143 and H.R. 617); the Securing American Leadership in Science and Technology Act of 2020 (H.R. 5685); and the BLUE GLOBE Act (H.R. 3548), in addition to the aforementioned provisions in the National Defense Authorization Acts in FY2019 (P.L. 115-232) and FY2020 (P.L. 116-92). 131 U.S. Congress, House Committee on Science, Space, and Technology, Artificial Intelligence: Societal and Ethical Implications, 116th Cong., 1st sess., June 26, 2019. 132 U.S. Congress, Joint U.S.-China Economic and Security Review Commission, Hearing on Technology, Trade, and Military-Civil Fusion: China’s Pursuit of Artificial Intelligence, New Materials, and New Energy, 116th Cong., 1st sess., June 7, 2019. 133 For example, U.S. Congress, Senate Committee on Armed Services, Emerging Technologies and Their Impact on National Security, 117th Cong., 1st sess., February 23, 2021, at https://www.armed-services.senate.gov/hearings/21-02-23-emerging-technologies-andtheir-impact-on-national-security. 134 For example, U.S. Congress, House Committee on Science, Space, and Technology, Subcommittee on Research and Technology and Subcommittee on Energy, Artificial Intelligence: With Great Power Comes Great Responsibility, 115th Cong., 2nd sess., June 26, 2018; and U.S. Congress, Senate Committee on Armed Services, Subcommittee on Emerging Threats and Capabilities, Artificial Intelligence Initiatives Within the Department of Defense, 116th Cong., 1st sess., March 12, 2019.


the U.S. workforce,135 and consequences for human rights.136 Hearings have also focused on specific AI applications, such as facial recognition and deepfakes,137 and contact tracing for COVID-19 cases,138 as well as use areas, such as financial services139 and counterterrorism.140 Additionally, in the 115th Congress, the House Committee on Oversight and Government Reform held a series of three hearings focusing on AI: “Game Changers: Artificial Intelligence Part 1” on February 14, 2018; “Game Changers: Artificial Intelligence Part II, Artificial Intelligence and the Federal Government” on March 7, 2018; and “Game Changers: Artificial Intelligence and Public Policy” on April 18, 2018. Subsequently, the chairman and ranking member of the Subcommittee on Information Technology released a white paper summarizing lessons learned from the hearings and related oversight activities, as well as recommendations for the federal government in moving forward on AI. Broadly, the recommendations included increased engagement on AI by Congress and the Administration, including increased federal R&D funding; increased stakeholder engagement in developing strategies to improve worker education, training, and reskilling; agency reviews of federal privacy laws and regulatory frameworks; and assurance that AI systems are

135 U.S. Congress, House Committee on the Budget, Machines, Artificial Intelligence, and the Workforce: Recovering and Readying Our Economy for the Future, 116th Cong., 2nd sess., September 10, 2020; and U.S. Congress, House Committee on Science, Space, and Technology, Subcommittee on Research and Technology, Artificial Intelligence and the Future of Work, 116th Cong., 1st sess., September 24, 2019.
136 U.S. Congress, House of Representatives, Tom Lantos Human Rights Commission, Artificial Intelligence: The Consequences for Human Rights, 115th Cong., 2nd sess., May 22, 2018.
137 U.S. Congress, House Permanent Select Committee on Intelligence, National Security Challenges of Artificial Intelligence, Manipulated Media, and “Deepfakes,” 116th Cong., 1st sess., June 13, 2019.
138 U.S. Congress, House Committee on Financial Services, Task Force on Artificial Intelligence, Virtual Hearing—Exposure Notification and Contact Tracing: How AI Helps Localities Reopen Safely and Researchers Find a Cure, 116th Cong., 2nd sess., July 8, 2020, at https://financialservices.house.gov/calendar/eventsingle.aspx?EventID=406731.
139 The House Committee on Financial Services established a Task Force on AI in May 2019, to examine issues including AI in financial services regulation, risk management, digital identification and combatting fraud, and reducing AI bias; see, for example, U.S. Congress, House Committee on Financial Services, Task Force on AI, Equitable Algorithms: Examining Ways to Reduce AI Bias in Financial Services, 116th Cong., 2nd sess., Feb. 12, 2020, at https://financialservices.house.gov/calendar/eventsingle.aspx?EventID=406120; and U.S. Congress, House Committee on Financial Services, Task Force on AI, Equitable Algorithms: How Human-Centered AI Can Address Systemic Racism and Racial Justice in Housing and Financial Services, 117th Cong., 1st sess., May 7, 2021.
140 U.S. Congress, House Committee on Homeland Security, Subcommittee on Intelligence and Counterterrorism, Artificial Intelligence and Counterterrorism: Possibilities and Limitations, 116th Cong., 1st sess., June 25, 2019.


“accountable and inspectable” when agencies use them for decisionmaking about people.141

Selected Issues for Congressional Consideration

Though specific AI technologies and application areas each have their own benefits, challenges, and policy issues, this section of the report will focus on some broad, crosscutting issues, with application-specific examples. The broad potential benefits of AI technologies include opportunities for speed of data analysis and insights into big datasets, such as identification of patterns; augmentation of human decisionmaking; performance optimization for complex tasks and systems; and improved safety for people in dangerous occupations. For example, AI systems can improve facilities operations and efficiency, providing cost savings. In one application of such benefits, DeepMind reported applying ML to Google data centers to make recommendations to reduce the amount of energy used for cooling by up to 40%, subsequently moving to autonomous operations.142 At the same time, there are challenges and pitfalls associated with deployment and use of AI systems. For example, AI systems may perpetuate or amplify bias (as described in the “Ethics, Bias, Fairness, and Transparency” section) and may not yet be able to fully explain their decisionmaking (sometimes referred to as the “black box” problem), which can be particularly problematic in high-stakes situations, for example when they inform health and safety decisions. To train and evaluate complex AI systems, researchers and developers may need large datasets that are not widely accessible. Further, stakeholders have questioned the adequacy of public and private sector workforces to develop and work with AI, as well as the adequacy of current laws and regulations in dealing with societal and ethical issues that may arise. In response to such overarching considerations, Congress might weigh the potential benefits of AI, such as increasing human safety, health, and productivity, with potential consequences, intended or otherwise, including


job displacement and biases in algorithmic decisionmaking, when considering potential AI funding, policies, and regulation. The passage of the National Artificial Intelligence Initiative Act of 2020 included provisions that directed federal government-wide activities and touched on many of the AI-associated issues raised in this report. Subsequently, Congress may decide that no additional legislative action is currently necessary, instead focusing in the near term on oversight of the implementation and effectiveness of the activities and programs directed by the act. This, along with activities begun in response to the aforementioned E.O.s, may provide better data and information for developing future legislation and congressional activities. Alternatively, given the rapid development of AI technologies and the wide range of sectors in which AI is deployed, Congress may decide that more actions are necessary to begin addressing issues surrounding AI use. Several major issues associated with the further development and use of AI and policy questions that Congress might consider are discussed below.

141 Rep. Will Hurd and Rep. Robin Kelly, “Rise of the Machines: Artificial Intelligence and Its Growing Impact on U.S. Policy,” Subcommittee on Information Technology, Committee on Oversight and Government Reform, U.S. House of Representatives, September 2018.
142 DeepMind, “DeepMind AI Reduces Google Data Centre Cooling Bill by 40%,” July 20, 2016, at https://deepmind.com/blog/article/deepmind-ai-reduces-google-data-centre-cooling-bill-40; and Google, “Safety-First AI for Autonomous Data Center Cooling and Industrial Control,” August 17, 2018, at https://www.blog.google/inside-google/infrastructure/safety-first-ai-autonomous-data-center-cooling-and-industrial-control/.

Implications for the U.S. Workforce

Concerns about job losses resulting from technological advances are not new.143 Historically, advances in technology have had varied impacts on the labor market, with new technologies reducing demand for some skills and increasing demand for others.144 The rapid advance of AI technologies and their application in multiple sectors of the economy have increased fears about possible job losses and spurred academic and government interest in studying potential impacts. Meanwhile, this has also led to concern that too few workers have AI expertise, both to work with AI in their jobs and to conduct AI R&D. Thus, discussions of AI and the U.S. workforce largely focus on two main issues: (1) the potential impact of AI and AI-driven automation on workers, including job displacement and job shifts; and (2) whether the United States has enough AI experts (people with advanced degrees in AI who work or teach

143 For a historical perspective, see for example, David H. Autor, “Why Are There Still So Many Jobs? The History and Future of Workplace Automation,” Journal of Economic Perspectives, vol. 29, no. 3 (Summer 2015), pp. 3-30.
144 Executive Office of the President (EOP), Artificial Intelligence, Automation, and the Economy, December 2016, p. 11, at https://obamawhitehouse.archives.gov/sites/whitehouse.gov/files/documents/Artificial-Intelligence-Automation-Economy.PDF.


in AI fields) for research, development, and application of AI across sectors, as well as teaching the next generation of AI experts.

Job Displacement and Skill Shifts

Economists and researchers are divided on possible answers to the question of how many jobs will be lost, gained, or changed, due partly or wholly to the development and application of AI technologies. Some analysts may argue that AI-related technologies are unprecedented in their speed of development, their range of applications, and the number of jobs they threaten, while others may argue that technology has a long history of displacing labor yet simultaneously creating new jobs, any net loss would be negligible, and the factors affecting the pace and extent of automation and AI adoption have not changed.145 However, newly created jobs may be quite different from those eliminated and subsequently burden workers with the need to invest time, money, and relocation efforts in order to train for or acquire new jobs. A 2019 McKinsey Global Institute report that examined the impact of automation technologies on local economies and demographic groups stated, “While there could be positive net job growth at the national level, new jobs may not appear in the same places, and the occupational mix is changing. The challenge will be in addressing local mismatches and helping workers gain new skills.”146 The potential impacts of AI technologies on the number and types of jobs that are or will be available are challenging to measure and predict for a variety of reasons.



First, definitions of AI and related technologies vary across industries, studies, and reports; further, potential job impacts from AI, computers, robots, and automation more generally are often conflated, making the specific workforce effects from AI technologies challenging to specify. Second, the numerous studies conducted to date vary in scope, including the labor sectors, populations, and countries assessed; the timeframes of predicted impacts; and the granularity of the datasets

For example, these perspectives are discussed in “Automation and Anxiety: Will Smarter Machines Cause Mass Unemployment,” The Economist, June 23, 2016, at https://www.economist.com/special-report/2016/06/23/automation- and-anxiety. 146 Susan Lund et al., The Future of Work in America: People and Places, Today and Tomorrow, McKinsey Global Institute, July 2019, available at https://www.mckinsey.com/featuredinsights/future-of-work/the-future-of-work-in-america-people-and-places-today-andtomorrow. 145


analyzed (e.g., whole occupations, specific tasks, or skillsets). One news article in 2018 attempted to compile all the available studies on how automation, AI, and robots could affect job losses or gains. The author summarized 19 studies that ranged in prediction dates (where specified) from 2016 to 2035, in jobs eliminated from 1.8 million to 1 billion, in jobs created from 1 million to 890 million, and in geographic focus from single countries (the United States or the United Kingdom) to worldwide. The author concluded that “there are about as many predictions as there are experts.”147 Further, many studies have relied on case studies and subjective assessments by experts.148 Third, AI technologies are rapidly evolving, and it is difficult to predict what specific tasks they might be used to automate in the future, even in the short term. Some experts have asserted that “there is no widely shared agreement on the tasks where ML systems excel, and thus little agreement on the specific expected impacts on the workforce and on the economy more broadly.”149 And while AI is predicted to have greater displacement effects on higher skill professional and technical workers than earlier waves of automation, robust measures of current and future effects are still in development.150

While many reports and news stories related to job automation focus on worker displacement, some companies report using AI-enabled automation to perform jobs that are “dirty, dull, and dangerous,” such as sorting at recycling facilities,151 or to make up for labor shortages in the tight labor market.

147 Erin Winick, “Every Study We Could Find on What Automation Will Do to Jobs, in One Chart,” MIT Technology Review, January 25, 2018, at https://www.technologyreview.com/s/610005/every-study-we-could-find-on-what-automation-will-do-to-jobs-in-one-chart/.
148 Mark Muro, Jacob Whiton, and Robert Maxim, What Jobs Are Affected by AI? Better-Paid, Better-Educated Workers Face the Most Exposure, Brookings, November 2019, at https://www.brookings.edu/wp-content/uploads/2019/11/2019.11.20_BrookingsMetro_What-jobs-are-affected-by-AI_Report_Muro-Whiton-Maxim.pdf.
149 Erik Brynjolfsson and Tom Mitchell, “What Can Machine Learning Do? Workforce Implications,” Science, vol. 358, no. 6370 (2017), pp. 1530-1534.
150 Michael Webb, “The Impact of Artificial Intelligence on the Labor Market,” Stanford University Working Paper, July 2019.
151 Bryn Nelson, “How Robots Are Reshaping One of the Dirtiest, Most Dangerous Jobs,” NBC News, April 17, 2018, at https://www.nbcnews.com/mach/science/how-robots-are-reshaping-one-dirtiest-most-dangerous-jobs-ncna866771.


For example, some agriculture companies report developing autonomous systems to help make up for a shortage of farm workers.152 Other companies making use of automation still report a high demand for employees. For example, Amazon reportedly expanded its workforce by 300,000 people since acquiring robotics company Kiva and deploying its robots in 2012 in its distribution centers. An employee overseeing robotics work at Amazon stated that “the biggest problem is not having enough people, and I don’t think that is going to change.”153 While many studies over the past few years have discussed AI as part of automation technologies more broadly, some have begun trying to assess the AI- and ML-specific portions of potential impacts. Prior analyses looking more broadly at automation of job skills have generally found that lower-wage, blue-collar workers will be more affected. However, one 2018 study concluded that although most occupations have some tasks that could be automated using ML, there are few, if any, where all tasks are suitable for automation.154 A 2019 study looking at AI-specific technologies found that (1) higher-wage, white-collar occupations and some agriculture and manufacturing positions may be the most exposed to AI disruptions; (2) AI seems likely to affect men, prime-age workers, and white and Asian American workers; and (3) large metropolitan areas with a concentration of high-tech industries and communities heavily involved in manufacturing are likely to experience the most AI-related disruption.155 The authors caveat their work by noting that studies examining employment effects with any nuance are preliminary and that “the onset of AI will introduce new riddles into speculation about the future of work.”156 In general, recent studies indicate that most if not all occupations will be impacted by the introduction of AI and AI-enabled technologies in some way.

Erin Winick, “New Autonomous Farm Wants to Produce Food Without Human Workers,” MIT Technology Review, October 3, 2018, at https://www.technologyreview. com/s/ 612230/new-autonomous-farm-wants-to-produce-food- without-human-workers/. 153 Cade Metz, “FedEx Follows Amazon into the Robotic Future,” New York Times, March 18, 2018, at https://www.nytimes.com/2018/03/18/technology/fedex-robots.html. 154 Erik Brynjolfsson, Tom Mitchell, and Daniel Rock, “What Can Machines Learn and What Does It Mean for Occupations and the Economy?,” AEA Papers and Proceedings, vol. 108 (May 2018), pp. 43-47, at https://pubs.aeaweb.org/doi/pdfplus/10.1257/pandp.20181019. 155 Mark Muro, Jacob Whiton, and Robert Maxim, What Jobs Are Affected by AI? Better-Paid, Better-Educated Workers Face the Most Exposure, Brookings, November 2019, at https://www.brookings.edu/wp-content/uploads/2019/11/2019.11.20_BrookingsMetro_ What-jobs-are-affected-by-AI_Report_Muro-Whiton-Maxim.pdf. 156 Ibid., p. 22. 152


A 2020 report from the MIT Task Force on the Work of the Future asserted that the “momentous impacts of technological change are unfolding gradually,” and that while applications and impacts from AI and robotics applications are coming, “they are not as close as some would fear.”157 The report discusses a variety of factors informing these findings, including that AI systems are still narrow and that policies, organizational cultures, economic incentives, and management practices can shape “the rate and manner in which firms develop and adopt technologies” beyond what is technologically possible.158

AI Expert Workforce

Tied to considerations of U.S. competitiveness, policymakers and stakeholders in academia and technology companies have expressed concerns about a lack of adequate AI expertise, not only for AI R&D and education in industry and academia, but also in the federal and congressional workforces. A September 2019 report highlighted several indicators of a tight market for AI talent, though the authors caveated their findings, noting that there is broad consensus in the field that talent shortages are substantial, but the exact extent is difficult to measure, and different organizations may publish very different estimates:159

• Job site statistics show that demand for workers far exceeds supply. For example, based on data from Burning Glass Technologies, job listings for AI skills have “grown significantly” from 2013 to 2020, with the total number of AI jobs posted in the United States above 300,000 in 2019 and 2020.160 And as reported in April 2019, the market intelligence firm Element AI estimated that, in the United

157 David Autor, David Mindell, and Elisabeth Reynolds, The Work of the Future: Building Better Jobs in an Age of Intelligent Machines, Massachusetts Institute of Technology (MIT) Task Force on the Work of the Future, November 2020, pp. 5, 32-34, at https://workofthefuture.mit.edu/wp-content/uploads/2021/01/2020-Final-Report4.pdf.
158 Ibid.
159 Remco Zwetsloot, Roxanne Heston, and Zachary Arnold, Strengthening the U.S. AI Workforce, Center for Security and Emerging Technology, Georgetown University, September 2019, pp. 9-10. See the callout box, “What is the ‘AI workforce,’ and who counts as an ‘AI expert’?”, p. 3, for additional discussions of measuring the AI expert workforce.
160 AI Index 2021, p. 86.






States, there were around 144,000 AI-related job openings and only about 26,000 developers and specialists seeking work.161
• The private sector is paying high salaries for workers with AI skills. For example, a 2018 news report stated that “even newly-minted Ph.D.s in machine learning and data science can make more than $300,000” at technology companies such as Google, Facebook, and Apple.162
• Subjective assessments from employers align with the indicators. For example, among firms surveyed by the World Economic Forum in 2020, most of which reported a desire to invest in AI, “skills gaps” and “inability to attract specialized talent” ranked among the top two barriers to the adoption of new technologies, especially when hiring for “emerging roles,” including AI and ML specialists.163

Perhaps for this reason, some companies such as Google, Amazon, and Facebook, are recruiting professors while allowing them to retain positions at universities.164 However, the details of these arrangements are important, as Oren Etzioni of the Allen Institute for Artificial Intelligence notes in an example from Facebook: “What are the ethics of a major corporation suddenly going after the entire [natural language processing] faculty in a computer science department? I believe their original offers had the faculty members spending 80 percent of their time at Facebook, which would not allow them time to carry out their educational responsibilities at [the University of Washington].” Some have referred to this as eating the seed corn, which could lead to less capacity to train future AI experts. Facebook disputed the claim, noting that while the relationship between academia and industry may be changing, the company is trying to be careful about not draining

As reported in Roberta Kwok, “Junior AI Researchers Are in Demand by Universities and Industry,” Nature, April 23, 2019, at https://www.nature.com/articles/d41586-019-01248w. 162 Jeremy Kahn, “Sky-High Salaries Are the Weapons in the AI Talent War,” Bloomberg, February 13, 2018, at https://www.bloomberg.com/news/articles/2018-02-13/in-the-war-forai-talent-sky-high-salaries-are-the-weapons. 163 World Economic Forum, Center for the New Economy and Society, The Future of Jobs Report 2020, October 2020, pp. 27 and 35, at http://www3.weforum.org/docs/WEF_ Future_of_Jobs_2020.pdf. 164 Daniela Hernandez and Rachael King, “Universities’ AI Talent Poached by Tech Giants,” Wall Street Journal, November 24, 2016, at https://www.wsj.com/articles/universities-aitalent-poached-by-tech-giants-1479999601. 161


universities.165 However, in a March 2019 survey of 111 AI researchers and university administrators by Times Higher Education and Microsoft, 89% said that it was “difficult” or “very difficult” to hire and retain AI experts.166 Other companies are collaborating with universities, such as Google’s partnership with Princeton University to open an AI laboratory that will engage faculty members, graduate and undergraduate students, recent graduates, and software engineers. One of the collaborating faculty members, who previously split time between Princeton and Google, noted that it was an opportunity for those at Princeton to “benefit from exposure to real-world computing problems, and for Google to benefit from long-term, unconstrained academic research that Google may incorporate into future products.”167 Within the federal government, the Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies report asserted that “if we expect agencies to make responsible and smart use of AI, technical capacity must come from within” and “in-house expertise promotes AI tools that are better tailored to complex governance tasks and more likely to be designed and implemented in lawful, policy-compliant, and accountable ways.”168 To gain such expertise, the report states that “fully leveraging agency use of AI will require significant public investment to draw needed human capital.”169 Further, E.O. 13960 states that “agencies shall provide appropriate training to all agency personnel responsible for the design, development, acquisition, and use of AI.” However, the March 2021 final report of the National Security Commission on Artificial Intelligence (NSCAI) states, “The human talent deficit is the government’s most conspicuous AI deficit and the single greatest inhibitor to buying, building, and fielding AI- enabled technologies for national security purposes.”170

Alan Boyle, “FAIR Competition? Facebook Creates Official AI Labs in Seattle and Pittsburgh, Vying for Top Talent,” GeekWire, May 5, 2018, at https://www.geekwire. com/2018/fair-competition-facebook-raises-status-ai- research-labs-seattle-pittsburgh/. 166 As reported in Roberta Kwok, “Junior AI Researchers Are in Demand By Universities and Industry,” Nature, April 23, 2019, at https://www.nature.com/articles/d41586-019-01248w. 167 Steven Schultz, “Google to Open Artificial Intelligence Lab in Princeton and Collaborate with University Researchers,” Princeton University news communication, December 18, 2018, at https://www.princeton.edu/news/ 2018/12/18/google-open-artificial-intelligence-labprinceton-and-collaborate-university. 168 ACUS report 2020, p. 7. 169 Ibid. 170 National Security Commission on Artificial Intelligence, Final Report, March 2021, p. 3, at https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf (hereinafter, “NSCAI 2021 Final Report”). 165


Policy Considerations. Studies that attempt to identify the workforce effects of AI and ML technologies specifically, rather than those that address automation generally, conclude that there has been insufficient data collection and analyses specific to AI technologies and job skills conducted to fully understand the issue and inform policy decisions. For example, one study identified barriers that inhibit researchers from measuring the labor effects of AI, including (1) lack of high-quality data about the nature of work; (2) lack of empirically informed models of key micro-level processes (e.g., skill substitution and human-machine complementarity); and (3) insufficient understanding of how cognitive technologies interact with broader economic dynamics and institutional mechanisms.171 The study asserted that overcoming such barriers requires improvements in the longitudinal and spatial resolution of data and refinements to data on workplace skills.172 Another study, commissioned by the Bureau of Labor Statistics (BLS) to identify constructs that would complement BLS’s existing products to assess the impact of automation on labor outcomes, echoed these findings. The BLS-commissioned study by Gallup states that “the primary lesson learned from [the] report is that researchers and, by extension, policymakers lack the data necessary to fully understand how new technologies impact the labor market” and identified gaps in BLS data products, specifically with regards to the classification of skills, task performance, and the adoption of new technologies.173 Some experts emphasize training people for skills and jobs that will be in high demand even with implementation of AI technologies, such as skills needed in management and personal interactions, two areas for which AI is not well suited.174 Stakeholders have also asserted that a focus on lifelong learning and programs to retrain and upskill workers will be important for

171 Morgan R. Frank et al., “Toward Understanding the Impact of Artificial Intelligence on Labor,” Proceedings of the National Academy of Sciences of the United States of America, vol. 116, no. 14 (April 2, 2019), pp. 6531-6539.
172 Ibid.
173 Jenny Marler, Gallup Project Director, Assessing the Impact of New Technologies on the Labor Market: Key Constructs, Gaps, and Data Collection Strategies for the Bureau of Labor Statistics, Contract No: GS-00F-0078M, February 7, 2020, pp. 3, 25 (hereinafter referred to as the Gallup study), at https://www.bls.gov/bls/congressional-reports/assessing-the-impact-of-new-technologies-on-the-labor-market.pdf.
174 David Rotman, “Obama Economist: We’re Not Preparing Workers for Changing Jobs,” MIT Technology Review, June 4, 2018, at https://www.technologyreview.com/s/611297/obama-economist-were-not-preparing-workers-for-changing-jobs/; video of Jason Furman’s talk at the 2018 EmTech conference, covered in the article, can be found at https://events.technologyreview.com/video/watch/jason-furman-harvard-automation-future-work/.


addressing skill shifts related to deployment of AI technologies.175 In one 2017 survey of 300 C-suite and senior executives about their AI strategies, 82% of leaders planned to implement AI in the next three years, but only 38% provided programs aimed at reskilling employees to work with the technology.176 Still other experts assert that “the concern should not be about the number of jobs, but whether those are jobs that can support a reasonable standard of living and what set of people have access to them.”177 In response to these issues, some policy questions and considerations for Congress may include the following:






• What types of granular labor data are needed to better inform analyses and identify key skills for future jobs, and how might the federal government help gather and disseminate such information?
• In conjunction with efforts by employers and educators, what is the appropriate role of the federal government in supporting the reskilling or upskilling of employees for whom certain tasks or their entire jobs will be shifted or displaced? Are federal programs to assist workers sufficient to help address potential workforce shifts? How can federal direction of workforce support programs balance providing AI-specific legislative direction while allowing states and localities flexibility to meet their specific workforce needs?
• For those federal offices and agencies facing a shortage of technical expertise in AI, what are the best options to attract and retain talent? For example, former Secretary of Defense Robert Work has argued for the development of an AI training corps—similar to the

175 James Manyika et al., A Future That Works: Automation, Employment, and Productivity, McKinsey Global Institute, January 2017, available at https://www.mckinsey.com/featured-insights/digital-disruption/harnessing-automation-for-a-future-that-works; and Joseph E. Aoun, Robot-Proof: Higher Education in the Age of Artificial Intelligence (Cambridge, MA: MIT Press, 2017). Generally, reskilling refers to learning new skills for a different job or occupation, while upskilling refers to learning new skills for growth within an existing job or occupation.
176 Genpact, “Is Your Business AI-Ready?,” 2017, at http://www.genpact.com/downloadable-content/insight/is-your-business-ai-ready.pdf.
177 David Autor, “No, Robots Won’t Take All the Jobs,” Brookings Creative Lab, March 12, 2018, at https://www.youtube.com/watch?v=SrprBJf7Nd4 (video discussion of the paper, David Autor and Anna Salomons, “Is Automation Labor-Displacing? Productivity Growth, Employment, and the Labor Share,” Brookings Papers on Economic Activity, Spring 2018, at https://www.brookings.edu/bpea-articles/is-automation-labor-displacing-productivity-growth-employment-and-the-labor-share/).




CyberCorps program178 (educational training in exchange for expert work for the federal government, but where workers could keep their regular jobs).179
• In addition to developing internal expertise, how might federal agencies and executive offices expand access to outside expertise, as from academia, industry, and nonprofit groups? For example, the NSCAI 2021 Final Report recommends establishing a civilian National Reserve Digital Corps modeled after the military reserve’s commitment and incentive structure.180

In order to address the dearth of data on the potential impacts of AI on the workforce, Congress may consider various actions. The FY2021 NDAA calls for the commissioning of a study by the National Academies of Sciences, Engineering, and Medicine on the current and future impact of AI on the workforce of the United States across sectors, including addressing research gaps and data needed to better understand workforce impacts. The study may yield useful information to inform the debate and future policy options; the final report is due more than two years from enactment, which occurred in January 2021. During that time, Congress may hold hearings to obtain related information on new or updated data collection and research at federal agencies in response to prior studies. Further, Congress may direct federal agencies to begin collecting additional information to fill data gaps identified in prior research, such as in the Gallup study for BLS. Should Congress decide to assist federal agencies in attracting outside expertise and developing internal expertise in AI, a variety of policy responses have been discussed by stakeholders. For example, Congress may consider directing federal agencies to develop or expand on scholarship- for-service (SFS) programs to attract new AI talent to federal service. However, simply expanding the number of offerings may not result in more students participating—such programs have been criticized for being difficult to find online, being spread across multiple and possibly outdated agency websites, and not supporting continued professional development once a student is

178

See for example, the CyberCorps Scholarship for Service program at https://www.sfs.opm. gov/. 179 David Ignatius, “China’s Application of AI Should Be a Sputnik Moment for the U.S. but Will It Be?,” Washington Post, November 6, 2018, at https://www. washingtonpost.com/ opinions/chinas-application-of-ai-should-be-a-sputnik-moment-for-the-us-but-will-itbe/2018/11/06/69132de4-e204-11e8-b759-3d88a5ce9e19_story.html. 180 NSCAI 2021 Final Report, p. 125.


employed in the federal government.181 While SFS programs have had reportedly high placement rates for graduates—94% for CyberCorps graduates in 2016—some critics have expressed discomfort with the repayment requirements for students who enter the program but leave before completing their degree or federal service requirement.182 Further challenges for growing a federal workforce in AI include higher salaries for comparable jobs in the private sector and time- consuming and opaque hiring practices. Thus, Congress may consider directing agencies to take actions to improve the recruitment and retention of AI experts, including through the establishment or modification of federal programs such as SFS. Developing internal expertise at agencies to not only develop, but use, understandable and transparent AI systems may have multiple benefits for agencies. For example, agency experts likely have a deeper, more nuanced understanding of the technical needs and challenges at their agency for which an AI system is developed or tailored. Further, by developing their own AI systems, agencies may be better able to create understandable, transparent, and accountable systems, in contrast to the estimated 33% of federal AI systems that are built by external contractors using proprietary software and obtained through the federal procurement process.183 Congress may consider ways to support or augment AI expertise within the existing federal workforce through the establishment of federal advisory committees and directing agencies to develop internal training programs.

International Competition and Federal Investment in AI R&D

According to the National Science Board’s Science and Engineering Indicators for 2020, the United States and China lead in research and commercialization of AI technologies, though business adoption of AI is

Cindy Martinez, “Saving the Federal Cyber and AI Workforce from Obsolescence: How to Attract and Retain a New Generation,” FedScoop, December 22, 2020, at https://www.fedscoop.com/saving-federal-cyber-ai-workforce- obsolescence-attract-retainnew-generation/. 182 For additional discussions of SFS programs and the federal workforce in the context of cybersecurity, see CRS In Focus IF10654, Challenges in Cybersecurity Education and Workforce Development, by Boris Granovskiy. 183 Administrative Conference of the United States, Office of the Chairman Projects, “Artificial Intelligence in Federal Agencies,” February 2020, p. 88-98. 181


occurring across the world.184 Numerous international governments have initiated activities focused on AI (e.g., task forces, research activities, discussion papers), and dozens have released national AI strategies, though these vary in scope.185 Further, multiple countries are cooperating in international AI initiatives. For example, the United States and other Organisation for Economic Co-operation and Development (OECD) member countries committed to common AI principles in May 2019.186 Building on the commitment to these principles, the United States and 14 other countries launched the Global Partnership on AI in June 2020 to bring together expertise from a range of stakeholders “with the goal of bridging the gap between the theory and practice of AI.”187 In September 2020, the United States and the United Kingdom signed a declaration of cooperation in AI R&D.188 Public investments in AI R&D vary widely by country. In the United States, as previously noted, FY2020 funding for AI activities at defense and non-defense agencies was approximately $4 billion and $1.1 billion, respectively. In comparison, a recent report from the Center for Security and Emerging Technology at Georgetown University estimated that Chinese government spending on AI R&D in 2018 was on the order of a few billion dollars.189 Though a substantial amount, this is less than the estimate of tens of billions that others have suggested. The European Union previously communicated a commitment to increase investments from $500 million to $1.5 billion by the end of 2020. In 2018, Germany and France pledged €3

184 National Science Board, National Science Foundation, “Production and Trade of Knowledge- and Technology-Intensive Industries,” Science and Engineering Indicators 2020, NSB-2020-5, p. 55, at https://ncses.nsf.gov/pubs/nsb20205/.
185 One of the most comprehensive efforts to compile information on AI initiatives across countries has been conducted through the Organisation for Economic Co-operation and Development’s (OECD’s) AI Policy Observatory, at https://oecd.ai/.
186 OECD, Recommendation of the Council on Artificial Intelligence, adopted on May 21, 2019, at https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.
187 National Artificial Intelligence Initiative Office, “Global Partnership on AI,” at https://www.ai.gov/strategic-pillars/international-cooperation/#Global-Partnership-on-AI. For additional information about the Global Partnership on Artificial Intelligence, see https://gpai.ai/.
188 U.S. Department of State, “Declaration of the United States of America and the United Kingdom of Great Britain and Northern Ireland on Cooperation in Artificial Intelligence Research and Development: A Shared Vision for Driving Technological Breakthroughs in Artificial Intelligence,” September 25, 2020, at https://www.state.gov/declaration-of-the-united-states-of-america-and-the-united-kingdom-of-great-britain-and-northern-ireland-on-cooperation-in-artificial-intelligence-research-and-development-a-shared-vision-for-driving/.
189 Ashwin Acharya and Zachary Arnold, Chinese Public AI R&D Spending: Provisional Findings, Center for Security and Emerging Technology, Georgetown University, December 2019.


billion and €1.5 billion, respectively, for AI investments by the end of 2020, and Canada previously committed to spending $125 million over five years.190

Policy Considerations. The appropriate level for U.S. federal R&D support, the nature of the R&D investments, such as basic versus applied research, as well as the most effective additional mechanisms to support innovation, such as prize competition incentives and public-private partnerships, remain areas of discussion among lawmakers. Historical considerations of international competition in science and technology have led to prior recommendations for increased federal funding of research, particularly in the physical sciences and engineering (PS&E).191 For example, the America COMPETES Act (P.L. 110-69) in 2007 and the America COMPETES Reauthorization Act of 2010 (P.L. 111-358) were originally enacted to address concerns that the United States could lose its advantage in scientific and technological innovation. The COMPETES Acts included authorizations of appropriations in line with doubling research in PS&E, including doubling NSF’s budget. Appropriations for the COMPETES Acts activities never reached authorized levels, and opposition to the efforts included various perspectives, including a preference for alternative federal approaches to support innovation, such as research tax credits or reducing regulatory costs, as well as a concern about the national debt.192 More recently, regarding federal funding and support for AI R&D, some stakeholders assert that the federal government should invest more money and direct structural or programmatic changes to certain R&D agencies to promote U.S. technological primacy, particularly in key areas of emerging technologies such as AI. For example, the President’s Council of Advisors on Science and Technology (PCAST) released recommendations in June 2020 on strengthening American leadership in industries of the future, which included growing federal investment in AI R&D by a factor of 10 over 10 years

190

Information on the status of these investments is unknown. As previously noted, it is important to keep in mind that reliable cross-country measures on public investments are difficult to obtain for a variety of reasons, including varying levels of reporting, and the range of measurements that countries could use to tally spending. 191 National Academies of Sciences, Engineering, and Medicine, Rising Above the Gathering Storm: Energizing and Employing America for a Brighter Economic Future, 2007, at https://doi.org/10.17226/11463. 192 For additional discussions of the America COMPETES Acts and efforts to double federal PS&E funding, see CRS Report R41951, An Analysis of Efforts to Double Federal Funding for Physical Sciences and Engineering Research, by John F. Sargent Jr.


(e.g., increase nondefense R&D from $1 billion in FY2020 to $10 billion in FY2030).193 The National AI Initiative Act, passed in the FY2021 NDAA, authorized appropriations for AI activities at NSF, NIST, and DOE for FY2021-FY2025. In the 117th Congress, the Endless Frontier Act (S. 1260) would redesignate the NSF as the National Science and Technology Foundation, establishing a Directorate for Technology and authorizing appropriations of $100 billion over five years for the new directorate.194 The final report of the National Security Commission on AI recommends scaling and coordinating federal AI R&D funding, including through establishing a National Technology Foundation as a sister agency to the NSF “to provide the means to move science more aggressively into engineering and scale innovative ideas into reality”; funding AI R&D at compounding levels; and establishing additional National AI Research Institutes.195 Congress considers the appropriations for these authorities as part of its annual discretionary appropriations process and enacted amounts may or may not match the authorized levels. An additional consideration, given the R&D engagement in the private sector, is the extent to which the federal government might leverage private funding through expanding public-private partnerships. In the 2019 update to the National AI R&D Strategic Plan, expanding public-private partnerships to accelerate advances in AI was a new, additional strategy.

Standards Development

AI standards development became an area of increasing interest for the Trump Administration and the 116th Congress, for both domestic R&D and international competitiveness reasons. The 2019 National AI R&D Strategic Plan noted that “development and adoption of best practices and standards in documenting dataset and model provenance will enhance trustworthiness and responsible use of AI technologies.”196 E.O. 13859 aimed to “Ensure that technical standards … reflect Federal priorities for innovation, public trust,

193 President’s Council of Advisors on Science and Technology (PCAST), Recommendations for Strengthening American Leadership in Industries of the Future, June 2020, p. 6, at https://science.osti.gov/-/media/_/pdf/about/pcast/202006/PCAST_June_2020_Report.pdf?la=en&hash=019A4F17C79FDEE5005C51D3D6CAC81FB31E3ABC.
194 For comparison, FY2021 appropriations for NSF were approximately $8.5 billion total. The Endless Frontier Act was first introduced in the 116th Congress (S. 3832 and H.R. 6978).
195 NSCAI 2021 Final Report, p. 435.
196 NSTC Select Committee on Artificial Intelligence 2019 AI R&D Strategic Plan, p. 28.


and public confidence in systems that use AI technologies … and develop international standards to promote and protect those priorities.” In response, NIST produced the Plan for Federal Engagement in Developing Technical Standards and Related Tools (AI Standards Plan) in August 2019. The plan identifies nine areas of focus for AI standards: concepts and terminology; data and knowledge; human interactions; metrics; networking; performance testing and reporting methodology; safety; risk management; and trustworthiness.197 The standards development process in the United States is predominantly a voluntary, consensus- based effort, driven by the private sector, including through Standards Development Organizations (SDOs). NIST (with other federal agencies, as appropriate) is a participant and facilitator, providing agency requirements to standards projects and technical expertise to standards development, incorporating voluntary standards into policies and regulations, and citing standards in agency procurements.198 Standards can be horizontal (i.e., used across many applications and industries), or vertical (i.e., developed for specific application areas such as healthcare or transportation). Further, nontechnical standards can be important to inform policy and human decisionmaking (e.g., standards for governance and privacy), and “standards should be complemented by an array of related tools,” such as standardized datasets with metadata; benchmarks; testing methodologies; metrics; testbeds; and tools for accountability and auditing.199 The AI Standards Plan notes that “While there is broad agreement that [federal policies and principles, including those that address societal and ethical issues, governance, and privacy] must factor into AI standards, it is not clear how that should be done and whether there is yet sufficient scientific and technical basis to develop those standards provisions.”200 Standards development is not only a national but an international effort, involving the work of such entities as the International Organization for

197

National Institute of Standards and Technology, U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools, August 9, 2019, pp. 3, 10-12. The plan further states that “Trustworthiness standards include guidance and requirements for accuracy, explainability, resiliency, safety, reliability, objectivity, and security.” 198 Ibid., “How Are Technical Standards Developed?” p. 9. The document also includes a list of SDOs that are developing AI standards in Appendix II. For additional information about NIST, including certain statutory authorities, see CRS Report R43908, The National Institute of Standards and Technology: An Appropriations Overview, by John F. Sargent Jr. 199 Ibid., pp. 13-14. 200 Ibid., p. 4. Social and ethical issues are discussed in the following section, “Ethics, Bias, Fairness, and Transparency.”


Standardization (ISO).201 The U.S. government and other stakeholders have expressed concern about China’s attempts to lead the international AI standards development efforts. China has already laid out some of these plans in white papers and is expected to release a 15-year plan to set global standards for next-generation technologies, including AI, as part of its “China Standards 2035” plan.202 Concerns about China’s focus on standards setting, particularly if the United States does not lead in these efforts, include the following. 



201

• Potential economic losses. The NIST AI Standards Plan highlights this concern, stating, “AI standards developed without the appropriate level and type of involvement may exclude or disadvantage U.S.-based companies in the marketplace as well as U.S. government agencies.”203
• Threats to democratic norms and values. Members of the National Security Commission on AI have expressed concern that “AI is being used in ways that are antithetical to American values. In China, AI is used as a tool for centralizing power at the expense of individual rights. The Chinese government is amassing the personal data of its people, using facial recognition software to stifle dissent and repress minorities, and exporting its surveillance technology abroad.”204 The ability of those countries leading in international standards setting to impart their societal and cultural values, such as data privacy and respect for civil liberties, into the process and outcomes, has led to concerns about China’s successes in increasing its leadership positions in international standards-making bodies.205 As NIST has stated, “standards flow from principles, and a first step toward

Information about the ISO Committee on AI can be found at https://www.iso.org/ committee/6794475.html. 202 The Center for Security and Emerging Technology (CSET) at Georgetown University has provided a translation of China’s Artificial Intelligence Security Standardization White Paper, 2019, at https://cset.georgetown.edu/wp-content/ uploads/t0121_AI_security_ standardization_white_paper_EN.pdf; regarding the forthcoming “China Standards 2035,” see Arjun Kharpal, “Power Is ‘Up for Grabs’: Behind China’s Plan to Shape the Future of Next-Generation Tech,” CNBC, April 26, 2020, at https://www.cnbc.com/2020/04/27/ china-standards-2035-explained.html. 203 NIST, U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools, p. 19. 204 Eric Schmidt and Bob Work, “The US Is in Danger of Losing Its Global Leadership in AI,” The Hill, December 5, 2019, at https://thehill.com/blogs/congress-blog/technology/473273the-us-is-in-danger-of-losing-its-global-leadership- in-ai. 205 U.S.-China Economic and Security Review Commission, 2020 Annual Report to Congress, December 2020, p. 107, at https://www.uscc.gov/files/001592.


standardization will be reaching broad consensus on a core set of AI principles.”206 These points are discussed in greater detail in the U.S.-China Economic and Security Review Commission’s 2020 annual report to Congress, which states:

In contrast to the United States, where technical standards are developed by industry in response to commercial need and adopted by consensus, Chinese state agencies formulate standards and use them to advance industrial and foreign policy objectives. Historically, Beijing has prioritized developing mandatory and unique domestic technical standards as a barrier to foreign firms’ market entry and to help grow domestic industry. Now, it is also coordinating industrial policy and diplomatic strategy to expand its influence in international standards-making bodies, both to increase adoption of Chinese technology abroad and to influence norms for how technology is applied.207

Policy Considerations. Such concerns have generated various recommendations for robust domestic and international standards setting efforts. The AI Standards Plan included numerous recommendations to support U.S. leadership in AI standards development:






• Bolster AI standards-related knowledge, leadership, and coordination among federal agencies, including by
  – designating a Standards Coordinator within the NSTC’s MLAI Subcommittee, and
  – developing clear career development and promotion paths that encourage participation and expertise in AI standards and development.
• Promote focused research to advance and accelerate broader exploration and understanding of how aspects of trustworthiness can be practically incorporated within standards and standards-related tools, including through supporting research to develop metrics, data sets, and risk management strategies for AI.

206 NIST, U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools, p. 15.
207 U.S.-China Economic and Security Review Commission, 2020 Annual Report to Congress, December 2020, p. 106, at https://www.uscc.gov/files/001592.


 

• Support and expand public-private partnerships to develop and use AI standards and related tools to advance reliable, robust, and trustworthy AI.
• Strategically engage with international parties to advance AI standards for U.S. economic and national security needs, including through accelerating information exchange with “like-minded countries” through international partnerships.208

In the FY2021 NDAA (Section 5301), Congress established as a mission that NIST advance collaborative frameworks, standards, and guidelines; authorized NIST to work on associated methods and techniques for AI; and directed that NIST support the development of a risk-management framework for trustworthy AI systems. NIST is further directed to develop guidance and best practices for data set documentation and data sharing among industry, federally funded research and development centers, and federal agencies, including options for partnerships with universities and nonprofits. Congress may consider oversight activities to monitor the implementation of these provisions and provide subsequent direction to NIST and other federal agencies.

Ethics, Bias, Fairness, and Transparency

Along with interest in technical advances, researchers, companies, and policymakers are expressing growing concern and interest in what has been called the ethical evolution of AI, including questions about bias, fairness, and algorithm transparency. Broadly, who defines ethics and who enforces ethical design and use?209 What constitutes an ethical decision may vary by individual, culture, economics, and geography.210 As some analysts have asserted, “AI is only as good as the information and values of the programmers

208 NIST, U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools, pp. 4-6.
209 Karen Hao, “Establishing an AI Code of Ethics Will Be Harder Than People Think,” MIT Technology Review, October 21, 2018, at https://www.technologyreview.com/2018/10/21/139647/establishing-an-ai-code-of-ethics-will-be-harder-than-people-think/.
210 Edmond Awad et al., “The Moral Machine Experiment,” Nature, October 24, 2018.


who design it, and their biases can ultimately lead to both flaws in the technology and amplified biases in the real world.”211 Just as there are many ways of considering what is ethical in AI, “researchers studying bias in algorithms say there are many ways of defining fairness, which are sometimes contradictory,” having inherent tradeoffs.212 (For example, one computer scientist presented at the 2018 Fairness, Accountability, and Transparency (FAT*) Conference on “21 fairness definitions and their politics.”213) The box below presents an example of the challenges of defining fairness in the criminal justice system. For some, such cases highlight the need for agencies to improve their internal processes for assessing algorithmic tools and develop training for their staff to be able not only to evaluate such tools, but also to provide developers with publicly available metrics for fairness.214

Sector Example: Defining Fairness in Criminal Justice

In 2016, a team at ProPublica investigated proprietary software called COMPAS that is used during sentencing to assign defendants in the criminal justice system with risk scores, from 1 to 10, for committing another crime within two years if released (i.e., the likelihood of recidivism). The ProPublica team claimed that the algorithm was biased, because there were a disproportionate number of false positives for black defendants—people identified as high risk who were not subsequently charged with another crime (one measure of an “error rate”).215 The developers countered that the algorithm was not biased, because it was equally good at predicting whether a white or a black defendant classified as high risk would reoffend, a measure called “predictive parity.” In other words, ProPublica and the developers of

211 Andre M. Perry and Nicol Turner Lee, “AI Is Coming to Schools, and If We’re Not Careful, So Will Its Biases,” Brookings, September 26, 2019, at https://www.brookings.edu/blog/the-avenue/2019/09/26/ai-is-coming-to-schools-and-if-were-not-careful-so-will-its-biases.
212 Rachel Courtland, “Bias Detectives: The Researchers Striving to Make Algorithms Fair,” Nature News Feature, vol. 558 (June 20, 2018), pp. 357-360 (hereinafter, “Courtland, 2018”).
213 Arvind Narayanan, “Translation Tutorial: 21 Fairness Definitions and Their Politics,” Fairness, Accountability, and Transparency (FAT*) Conference, February 23, 2018; abstract and video available at https://www.youtube.com/watch?v=jIXIuYdnyyk. As noted on the conference website (https://facctconference.org/2018/program.html), “In 2018, the conference’s name was FAT* and the proceedings were published in the Journal of Machine Learning Research. The conference affiliated with ACM in 2019, and changed its name to ACM FAccT immediately following the 2020 conference.”
214 Courtland, 2018.
215 Julia Angwin et al., “Machine Bias,” ProPublica, May 23, 2016, at https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.


COMPAS were using different measures to try to conclude whether the software was fair. Subsequent research into these analyses found that not all criteria for fairness can be satisfied when recidivism prevalence differs across groups and that disparate impact—which the researcher defined as referring “to settings where a penalty policy has unintended disproportionate adverse impact on a particular group”—may result even if a prediction instrument is fair with respect to certain criteria.216 The researcher—citing a large body of literature showing that data-driven risk assessment instruments tend to be more accurate than professional human judgements—concluded that data-driven approaches should not be abandoned but rather proven to be free of the kinds of biases that could lead to disparate impacts in the specific contexts in which they are applied.217 For a more in-depth discussion of this topic, see “Concerns About Bias in Risk and Needs Assessments” in CRS Report R44087, Risk and Needs Assessment in the Federal Prison System, by Nathan James.
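To make the disagreement in the box concrete, the short Python sketch below computes both criteria from hypothetical confusion-matrix counts (the numbers are illustrative only and are not drawn from the actual COMPAS data): predictive parity can hold for two groups even while their false positive rates differ, which is the tension the research described above formalizes.

def false_positive_rate(fp, tn):
    """Share of people who did not reoffend but were labeled high risk."""
    return fp / (fp + tn)

def positive_predictive_value(tp, fp):
    """Share of people labeled high risk who did reoffend (the quantity compared under 'predictive parity')."""
    return tp / (tp + fp)

# Hypothetical confusion-matrix counts for two groups whose underlying
# recidivism prevalence differs (50% vs. 30%).
groups = {
    "group_a": {"tp": 300, "fp": 200, "tn": 300, "fn": 200},
    "group_b": {"tp": 150, "fp": 100, "tn": 600, "fn": 150},
}

for name, c in groups.items():
    print(name,
          "false positive rate =", round(false_positive_rate(c["fp"], c["tn"]), 2),
          "predictive parity (PPV) =", round(positive_predictive_value(c["tp"], c["fp"]), 2))

# Output: both groups have PPV = 0.6 (predictive parity holds), yet the false
# positive rate is 0.40 for group_a and about 0.14 for group_b -- a
# disproportionate share of false positives falls on one group.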

The U.S. National AI R&D Strategic Plan also discusses the challenges and potential approaches to designing and building ethical AI. The plan echoes concerns about the susceptibility of data-intensive AI algorithms to error and misuse without the proper collection and use of data to train the systems. It calls for researchers to design systems so that their actions and decisionmaking are more transparent and easily interpretable, and they can be examined for bias. The plan further states, “Ethics is inherently a philosophical question while AI technology depends on, and is limited by, engineering.… However, acceptable ethics reference frameworks can be developed to guide AI system reasoning and decisionmaking in order to explain and justify its conclusions and actions.” To achieve these goals, the plan notes that there is a need for multidisciplinary, fundamental research in designing architectures for AI systems to incorporate ethical reasoning.218 While such fundamental research is being conducted, and while various groups work on developing standards and benchmarks for evaluating algorithms, some stakeholders have called for a risk-based, sector-specific approach to considering uses and potential regulations for AI algorithms. For example, some have called for more initial and ongoing testing and evaluation of algorithms and AI technologies for potential bias that directly impact U.S.

216 Alexandra Chouldechova, “Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments,” Big Data, vol. 5, no. 2 (June 2017), pp. 153-163, at https://pubmed.ncbi.nlm.nih.gov/28632438/.
217 Ibid.
218 NSTC Select Committee on Artificial Intelligence 2019 AI R&D Strategic Plan, pp. 21-22.


citizens’ lives and livelihoods (e.g., through healthcare or hiring systems)—sometimes referred to as “high risk” or “systems critical” uses.219 Some Members of Congress have previously requested information from federal agencies about their use of AI, such as the use of facial recognition technology in law enforcement, and how the agencies balance the potential to solve crimes and catch criminals with the potential risks to privacy and civil rights.220

Types of Bias

Definitions and understanding of terms such as bias and fairness can vary by discipline (e.g., technologists vs. lawyers vs. civil society), type (e.g., statistical vs. social bias), and scope (e.g., individual vs. systemic/structural). Further, there are various types of bias, and bias can show up in algorithms, including AI algorithms, in a variety of ways—in the data, within the system, and from the people designing and using the system. There is significant concern that biases and errors in datasets used to train AI systems will result in outcomes that reflect, and possibly amplify, those biases. For example, using a dataset that has historical inequities engrained in it—such as past employment or access to credit, both of which have a history of racial discrimination—can perpetuate bias and inequity. Limited datasets that are not representative of the population to which they will be applied may lack generalizability and subsequently not work equally well for everyone. For example, some facial analysis software has been shown to have significant gender and skin color classification bias, often accurately identifying white males while misclassifying darker-skinned female faces as often as one in three times.221 Another study found that two prominent research-image collections display gender bias in their depiction of activities such as cooking and sports; ML algorithms trained on these collections not only mirrored, but amplified, these biases.222
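A common first check motivated by findings like these is a disaggregated evaluation: report error rates separately for each subgroup rather than as a single overall accuracy. The minimal sketch below illustrates the idea; the group names, labels, and predictions are hypothetical placeholders, not data from the studies cited above.

from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        if truth != prediction:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical outputs from a face-analysis classifier.
records = [
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
    ("darker_female", "female", "female"),
    ("darker_female", "female", "male"),    # misclassification
    ("darker_female", "female", "male"),    # misclassification
]

print(error_rate_by_group(records))
# A large gap between subgroup error rates (here 0.0 vs. about 0.67) is the
# kind of disparity that a single aggregate accuracy figure would hide.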

219 See, for example, a discussion of racial bias in health care decisionmaking software used by hospitals in Heidi Ledford, “Millions of Black People Affected by Racial Bias in Health-Care Algorithms,” Nature News, vol. 574 (October 26, 2019), pp. 608-609.
220 See, for example, Letter from Senator Ron Wyden et al. to Gene L. Dodaro, Comptroller General of the United States, July 31, 2018, at https://www.wyden.senate.gov/download/07312018-gao-facial-recognition-request.
221 See work conducted by the Gender Shades project by Joy Buolamwini at the Massachusetts Institute of Technology’s (MIT’s) Media Lab, at https://www.media.mit.edu/projects/gender-shades/overview/.
222 Jieyu Zhao et al., “Men Also Like Shopping: Reducing Gender Bias Amplification Using Corpus-level Constraints,” Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, September 7, 2017, pp. 2979-2989, at https://www.aclweb.org/anthology/D17-1323.


Certain variables may reflect societal inequities and stand in as proxies for protected classes of data (e.g., race, sex), inadvertently perpetuating prohibited discriminatory practices. Practices that result in disparate impacts may violate various laws, such as equal credit or employment opportunity laws.223 For example, the algorithm in the COMPAS tool (see the criminal justice box above) purports to predict the risk of future criminal activity, but it relies on inputs such as arrest history; variations in historical policing could reflect over-policing of certain communities leading to a higher number of arrests and higher correlation with crime while not accurately reflecting the likelihood of recidivism.224 While the above example is for a relatively simple statistical algorithm, the “black box” problem with many complex AI systems may make assessments of such bias harder to evaluate and correct. Identifying and addressing machine bias is a challenging problem, fueling a growing subfield of AI research. In trying to address pronoun gender bias in its “smart compose” feature, which automatically completes sentences for users as they type, Google opted to ban the use of gendered pronouns, stating that currently, “the only reliable technique we have is to be conservative.”225 Beyond these arguably unintentional instances of bias perpetuation and amplification, concerns have been raised about the potential for intentional introduction of bias into algorithms through the release or use of manipulated training data.226 Additionally, what has been termed automation bias can occur when people trust the interpretations of an automated system over their own senses and instincts, expecting the algorithmic outcomes to be objective calculations since they are being performed by a computer, rather than an individual person

223 Federal Trade Commission, Big Data: A Tool for Inclusion or Exclusion?, January 2016, p. 19; “While specific disparate impact standards vary depending on the applicable law, in general, disparate impact occurs when a company employs facially neutral policies or practices that have a disproportionate adverse effect or impact on a protected class.”
224 Courtland, 2018.
225 Paresh Dave, “Fearful of Bias, Google Blocks Gender-Based Pronouns from New AI Tool,” Reuters, November 27, 2018, at https://www.reuters.com/article/us-alphabet-google-ai-gender/fearful-of-bias-google-blocks-gender-based-pronouns-from-new-ai-tool-idUSKCN1NW0EF. The article further notes that gender-based pronoun biases are a widespread challenge for companies using AI for features such as natural language generation (NLG) and translation services.
226 Douglas Yeung, “When AI Misjudgment Is Not an Accident,” Scientific American, October 19, 2018, at https://blogs.scientificamerican.com/observations/when-ai-misjudgment-is-not-an-accident.


making a decision.227 However, even some particularly complex AI algorithms such as deep neural networks that can work exceedingly well the majority of the time can have catastrophic failures, breaking in unpredictable ways.228 For example, researchers have demonstrated that placing black and white stickers on a stop sign can cause a neural network to misclassify the sign—for example, as a 45 miles-per-hour speed limit sign—over 80% of the time.229 Broadly, the debate around how to address bias and ethics in decisionmaking algorithms has resulted in calls for additional transparency, which raises its own sets of opportunities and challenges and questions about how best to enhance transparency. On one hand, engaging a broader set of stakeholders and providing information to those affected and journalists investigating the tools generally helps to foster trust and lead to fewer problems with bias and inequities. However, just providing all of the parameters of a model may not lead to better information about how it works. Further, providing too much information may allow people to game the system, and could provide a disincentive for private sector developers wishing to license their software. One compromise that has been proposed in this situation is to require confidential third-party auditing of proprietary software with publicly released results of such audits.230

Policy Considerations. Some considerations for potential policy responses to these issues include

• whether and how to increase access to public datasets to train AI systems for use in the public and private sectors;
• requirements for auditing and/or disclosing AI algorithms—particularly in high-impact areas such as social services, criminal justice, and healthcare—and direction to NIST to facilitate related standards and certifications for third-party auditors;

227 John Zerilli et al., “Algorithmic Decision-Making and the Control Problem,” Minds and Machines, vol. 29 (2019), pp. 555-578, at https://link.springer.com/article/10.1007/s11023-019-09513-7.
228 Douglas Heaven, “Why Deep-Learning AIs Are So Easy to Fool,” Nature, vol. 574 (October 9, 2019), pp. 163-166, at https://www.nature.com/articles/d41586-019-03013-5.
229 Kevin Eykholt et al., “Robust Physical-World Attacks on Deep Learning Visual Classification,” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, June 18-23, 2018, at https://ieeexplore.ieee.org/document/8578273.
230 See, for example, Oren Etzioni and Michael Li, “High-Stakes AI Decisions Need to Be Automatically Audited,” Wired, July 18, 2019, at https://www.wired.com/story/ai-needs-to-be-audited/.


• mechanisms for recourse when people are subject to decisions in high-impact areas in which AI systems were used;
• facilitating the growth of multidisciplinary and diverse teams of experts for developing and training AI systems, including having people who will be using and affected by the systems as part of the design conversations;
• encouraging training for AI researchers and designers in thinking about and designing systems that improve fairness, transparency, and accountability; and
• whether to continue or expand investments into AI R&D broadly and for more narrowly specified areas, such as those that facilitate transparency and auditability (e.g., explainable AI).

Chapter 2

Trustworthy AI: Managing the Risks of Artificial Intelligence*

Committee on Science, Space, and Technology

Thursday, September 29, 2022

House of Representatives, Subcommittee on Research and Technology, Committee on Science, Space, and Technology, Washington, D.C.

The Subcommittee met, pursuant to notice, at 10:42 a.m., in room 2318, Rayburn House Office Building, Hon. Haley Stevens [Chairwoman of the Subcommittee] presiding.

U.S. House of Representatives, Committee on Science, Space, and Technology, Subcommittee on Research and Technology, Hearing Charter, Trustworthy AI: Managing the Risks of Artificial Intelligence
Thursday, September 29, 2022
10:30 am – 12:30 pm
2318 Rayburn House Office Building and Online via Zoom

* This is an edited, reformatted and augmented version of a hearing before the Subcommittee on Research and Technology of the Committee on Science, Space, and Technology of the House of Representatives, One Hundred Seventeenth Congress, Second Session, Serial No. 117–70, dated September 29, 2022.



Purpose

On Thursday, September 29, 2022, the Subcommittee on Research and Technology of the Committee on Science, Space, and Technology will hold a hearing to discuss tools, best practices, and challenges in the design, development, testing, and deployment of trustworthy artificial intelligence (AI) systems. The Subcommittee will examine efforts in academia, industry, and government to create a culture of responsibility around AI systems, identify and remove harmful bias in AI systems, improve explainability and transparency of AI systems, and mitigate other risks associated with AI systems. The Subcommittee will also explore the National Institute of Standards and Technology’s ongoing efforts to create an artificial intelligence risk management framework.

Witnesses

• Ms. Elham Tabassi, Chief of Staff, Information Technology Laboratory, National Institute of Standards and Technology
• Dr. Charles Isbell, Dean and John P. Imlay, Jr. Chair of the College of Computing, Georgia Institute of Technology
• Mr. Jordan Crenshaw, Vice President of the Chamber Technology Engagement Center, U.S. Chamber of Commerce
• Ms. Navrina Singh, Founder and Chief Executive Officer, Credo AI

Overarching Questions

• What are the risks that can arise from the development and deployment of AI systems, including how harmful biases can arise in these systems?
• What are the activities being undertaken by academia, industry, and the government to develop, test, and responsibly deploy trustworthy AI systems?
• How should the United States encourage more organizations to think critically about risks that arise from AI systems, including at the earliest stages of development?


• Where should the Federal government focus efforts to promote the development and deployment of trustworthy artificial intelligence across every sector of the economy?

Background

Artificial intelligence refers to the theory and development of computer systems that can perform tasks that would normally require human intelligence, such as decision making or speech recognition. Modern AI systems are engineered or machine-based systems that can, for a given set of human-defined objectives and with varying levels of autonomy, generate predictions, recommendations, or decisions influencing real or virtual environments.1 All applications of artificial intelligence in use today can be considered “narrow AI,” or AI that is designed to do a very specific set of tasks. In contrast, artificial general intelligence is a theoretical system that possesses generalized human cognitive abilities and, when presented with an unfamiliar and complex problem, could develop solutions drawing from contextual knowledge. Modern systems are likely decades away from achieving artificial general intelligence.

Most AI systems are developed using a technique called machine learning, which involves developing an algorithmic model based on input data, then using that model to make certain optimizations or predictions. An example of this is image recognition, in which a set of human-labeled images (e.g., “traffic lights” in CAPTCHA tests that users take when logging into a website) are fed into an algorithm, which then looks for patterns common to all images with a specific label. The algorithm builds a model (i.e., “learns”) from this “training data,” so when it is presented with an unlabeled image containing one of the objects that was in the training data, it can make a guess as to what the object is. This method of training algorithms with human-labeled data is called “supervised learning.” There is also “unsupervised learning,” in which no labels are provided, and the algorithm simply looks for similarities and groups images into clusters based on certain characteristics. Additionally, there is “reinforcement learning,” in which an algorithm interacts with its environment, executes actions, and learns through trial and error.
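As a rough illustration of the supervised learning loop just described, the sketch below trains a classifier on a handful of labeled examples and asks it to guess the labels of unseen ones. It uses scikit-learn, and the three-number “feature vectors” are hypothetical stand-ins for whatever representation of an image a real system would use.

from sklearn.tree import DecisionTreeClassifier

# Human-labeled training data: each row is a (hypothetical) feature vector
# extracted from an image, and each label is what a person said the image shows.
X_train = [
    [0.9, 0.1, 0.2],
    [0.8, 0.2, 0.1],
    [0.85, 0.15, 0.25],
    [0.1, 0.9, 0.8],
    [0.2, 0.8, 0.9],
    [0.15, 0.85, 0.95],
]
y_train = ["traffic_light", "traffic_light", "traffic_light",
           "crosswalk", "crosswalk", "crosswalk"]

# The algorithm "learns" a model from the training data...
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# ...and then guesses labels for feature vectors it has never seen.
print(model.predict([[0.88, 0.12, 0.2], [0.1, 0.95, 0.9]]))
# Expected output: ['traffic_light' 'crosswalk']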

1 “AI Risk Management Framework: Second Draft,” NIST, August 18, 2022.


While AI systems have been in use in the commercial sector for decades, recent advances in computing, improved software engineering, and better access to large data sets have markedly increased the capabilities of AI systems. As a result, AI systems have led to a wide range of innovations with the potential to benefit nearly all aspects of our society and support our economic and national security. AI systems are increasingly used in scientific research to help sort and analyze massive amounts of data in fields such as weather prediction, cosmology, and genetics research. Recent advances in natural language processing and image generation have led to AI systems that can write text or generate art.2

AI Risks

While AI systems have the potential to improve our lives, in sometimes transformative ways, they also have the potential to do significant harm if risks associated with these systems are not mitigated. While risks to any type of information-based system also apply to AI systems (e.g., privacy, cybersecurity, and safety concerns), these systems also create a set of risks that require specific consideration. AI systems can amplify, perpetuate, and exacerbate existing structural inequalities in our society, or create new ones. AI systems can also exhibit unintended properties with potential ethical, safety, or security consequences for individuals or communities. Risks associated with AI systems arise from the data used to train the AI system, the system itself, the use of the system, or interaction of people with the system. Importantly, AI systems and their associated risks are socio-technical, meaning they are a product of the complex human, organizational, and technical factors involved in their design, development, and use. For example, questions of fairness or equity caused by the decisions of AI systems relate to societal dynamics and human behavior. Purely technical solutions will not solve societal challenges.

Harmful Bias

One major set of risks caused by AI systems is harmful bias, which can occur when an algorithm produces results that are systemically prejudiced due to erroneous assumptions in the machine learning process. Bias can be

2 “GPT-3 Powers the Next Generation of Apps,” OpenAI, March 25, 2021; “DALL·E: Creating Images from Text,” OpenAI, January 5, 2021.


introduced purposefully or inadvertently into an AI system, or it can emerge as the system is being deployed. For example, a facial recognition system trained mostly on light-skinned faces will perform poorly identifying faces with darker skin, and a facial recognition system trained to perfection in a lab may fail when encountering real-world scenarios. Moreover, intentional or unintentional changes during training may fundamentally alter AI system performance. According to the National Institute of Standards and Technology (NIST), there are three categories of bias.3 First, systemic biases result when AI systems create advantages for certain social groups while disadvantaging others. Systemic bias is also referred to as institutional or historical bias. Systemic biases can creep their way into datasets or can be reinforced by institutional norms, practices, and processes across the AI lifecycle. Second, statistical and computational biases result from errors that occur due to a sample that the AI system is trained on not being representative of the population. These biases often arise when algorithms are trained on one type of data and cannot extrapolate beyond those data. Finally, human biases reflect systematic errors in human thought. These biases are often implicit and tend to relate to how an individual or group perceives information to make a decision or fill in missing or unknown information. Because AI systems are designed by humans, this type of bias is present across the entire AI lifecycle. Not all bias is harmful. Statistical and computational biases that arise in an analysis are a normal part of data science. Bias can also be beneficial, such as algorithms that use data on an individual’s habits to tailor new content based on their interests. However, many cases of bias can cause significant harm. For example, a self-driving car trained by driving on the roads of Boston may not recognize different patterns in other cities, and an AI diagnostic tool trained on x-ray images of younger patients may fail to perform well on older patients. Combatting harmful bias in AI will require better alignment between AI tasks and actual human goals. While it will require additional technology expertise to improve the detection and mitigation of bias, it will also require an understanding of the relevant social and ethical considerations.

Explainability and Interpretability

Some AI systems are functionally black boxes, which means it is difficult to understand why algorithms make the decisions that they do. For example, one

3 Reva Schwartz et al., “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence,” NIST, March 2022.


type of machine learning system is called a “neural network,” which consists of thousands or even millions of simple processing nodes that are densely interconnected. Training data is fed to the bottom layer and as it passes through the succeeding layers it gets multiplied and added together in complex ways, until it finally arrives at the output layer in its transformed final state. Due to the complexity, scientists are unable to fully understand these interactions in a useful way. Observers can only effectively assess this process by reviewing an algorithm’s inputs and outputs. This challenge has given rise to fields of research focused on assessing and understanding algorithmic decisions. For example, researchers and companies are working to improve algorithmic explainability, or the ability of algorithms to explain their decisions. However, modern explainability techniques come with trade-offs—improving the explainability of algorithms has often come at the cost of accuracy of outputs.4 In contrast, some researchers are focused on interpretability, which refers to techniques used to understand the meaning of an AI system’s output in the context of its designed functional purpose. One area of focus for interpretability is called test, evaluation, validation, and verification (TEVV), which uses separate AI actors to examine an AI system or its components or detect and remediate problems throughout the AI lifecycle.5
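One widely used way to probe a black-box model using only its inputs and outputs, as described above, is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops. The sketch below uses scikit-learn on synthetic data; the feature names and the choice of a random forest are hypothetical, not tied to any specific system discussed in this charter.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                                 # three synthetic input features
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)    # outcome driven mostly by feature 0

model = RandomForestClassifier(random_state=0).fit(X, y)      # the "black box"

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, drop in zip(["feature_0", "feature_1", "feature_2"], result.importances_mean):
    print(f"{name}: mean accuracy drop when shuffled = {drop:.3f}")

# Feature 0 should dominate, telling a reviewer what the model relies on even
# though its internal structure is too complex to inspect directly.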

Safety

Ensuring AI systems are safe means preventing them from leading to physical or psychological harm, or creating a state in which human life, health, property, or the environment is endangered.6 One major challenge of AI safety is ensuring the system can continue to operate safely in unfamiliar situations. For example, modern autonomous vehicles can only operate in certain environments under certain conditions in a safe manner.7 Another challenge to achieving AI safety is avoiding misspecification, or poor alignment between an AI behavior and the system designer’s intentions. Misspecification occurred in YouTube’s video recommendation algorithm when an AI system

4 Cynthia Rudin, “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead,” Nature Machine Intelligence, vol. 1, 2019, 206–215.
5 “AI Risk Management Framework: Second Draft,” NIST.
6 Ibid.
7 Several automakers have achieved level four automation in their vehicles. See “Levels of Automation,” NHTSA, accessed September 22, 2022.


that was optimized for user engagement unintentionally directed users to extremist content.8 Safety issues are mostly dealt with through careful design, planning, and testing to prevent failures, conditions, or environments in which it becomes dangerous to use an AI system. According to NIST, practical approaches to AI safety include “rigorous simulation and in-domain testing, real-time monitoring, and the ability to shut down or modify systems that deviate from intended or expected functionality.”9
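The “real-time monitoring” and “ability to shut down” practices NIST describes can be made concrete with a small wrapper around a model: commands outside an expected operating envelope trigger a safe fallback instead of being acted on. The class, thresholds, and stand-in model below are hypothetical, intended only to illustrate the pattern.

class MonitoredController:
    """Wraps a model and refuses to pass along out-of-envelope commands."""

    def __init__(self, model, min_speed=0.0, max_speed=30.0):
        self.model = model
        self.min_speed = min_speed      # expected envelope for the output
        self.max_speed = max_speed
        self.shut_down = False

    def act(self, observation):
        if self.shut_down:
            return "SAFE_STOP"
        command = self.model(observation)            # e.g., a speed setpoint
        if not (self.min_speed <= command <= self.max_speed):
            # Deviation from expected functionality: stop issuing commands
            # until a human reviews and resets the system.
            self.shut_down = True
            return "SAFE_STOP"
        return command

# A stand-in "model" that misbehaves on unfamiliar input.
controller = MonitoredController(model=lambda obs: 25.0 if obs == "familiar" else 120.0)
print(controller.act("familiar"))     # 25.0
print(controller.act("unfamiliar"))   # SAFE_STOP
print(controller.act("familiar"))     # SAFE_STOP (stays shut down)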

Cybersecurity and Privacy

While AI systems are susceptible to the same privacy and security risks as all information-based systems, there are some concerns that are unique to AI systems. AI systems have more complex attack surfaces that can enable malicious actors to compromise their security more easily. For example, malicious actors could theoretically make alterations to open-source datasets to manipulate an AI system to produce an inaccurate or harmful result.10 Similarly, AI systems could be trained outside an organization’s security controls or trained in one domain and then “fine-tuned” for another, resulting in vulnerabilities. As a result, existing privacy and cybersecurity guidance is ill-equipped to ensure the data protection of AI systems.

Computational Costs

Training AI systems requires a large amount of computational power. Since 2012, the amount of computational power used to train the largest AI systems has been increasing exponentially—doubling every 3.4 months.11 A paper in 2019 found that training a single large-scale AI system required five times as much carbon as the lifetime emissions of the average American car.12 If the United States is to avert the climate crisis while maintaining its global leadership in AI, the research community and tech industry should explore

8 Homa Hosseinmardi et al., “Examining the consumption of radical content on YouTube,” Complex Networks & Their Applications, hosted on Proceedings of the National Academy of Sciences, 2022, 166-177.
9 “AI Risk Management Framework: Second Draft,” NIST.
10 Andrew Lohn, “Poison in the Well,” Center for Security and Emerging Technology, June 2021.
11 Jack Clark, “AI and Compute,” OpenAI, May 16, 2018.
12 Emma Strubell et al., “Energy and Policy Considerations for Deep Learning in NLP,” In the 57th Annual Meeting of the Association for Computational Linguistics (ACL), stored in arxiv, July 2019.


more efficient AI training methodologies and more efficient computing systems.
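For a sense of scale, the doubling rate cited above can be converted into an annual growth factor with a couple of lines of arithmetic (the calculation simply restates the 3.4-month figure; it adds no new data):

doubling_period_months = 3.4

doublings_per_year = 12 / doubling_period_months        # about 3.5 doublings per year
growth_per_year = 2 ** doublings_per_year               # about 11.5x per year

print(f"{doublings_per_year:.1f} doublings per year -> ~{growth_per_year:.1f}x more compute each year")
# Sustained over several years, this compounds into many orders of magnitude,
# which is what drives the energy and cost concerns described above.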

Government Action

In December 2020, Congress enacted the National Artificial Intelligence Initiative Act or NAIIA (P.L. 116-283). This bipartisan legislation, which was led by the House Science Committee, accelerated and coordinated Federal investments and new public-private partnerships in research, standards, and education in trustworthy artificial intelligence. The law establishes interagency coordination and strategic planning efforts in AI research, development, standards, and education through an Interagency Coordination Committee and a coordination office managed by the Office of Science and Technology Policy (OSTP). The legislation also created the National AI Advisory Committee (NAIAC) to assess the implementation of the law, track advancements in AI science, and propose recommendations to advance U.S. competitiveness in AI. The Department of Commerce selected members for the NAIAC in May 2022, with the plan to publish a report in 2023.13 Finally, the legislation directed the Department of Energy (DOE), the National Science Foundation (NSF), and Department of Commerce research agencies to conduct AI-related activities, many of which are designed to assess and mitigate AI-related risks.

OSTP

OSTP has pursued several initiatives related to promoting trustworthy AI. In 2021, OSTP announced an effort to develop a bill of rights for an automated society, also called the “AI bill of rights.”14 OSTP has sought input from the broader community on what this document should contain. In March 2022, OSTP also sought feedback on updating the National AI Research and Development Strategic Plan, which includes strategic aims to both

13 “Commerce Department Launches the National Artificial Intelligence Advisory Committee,” Department of Commerce, May 4, 2022.
14 “Join the Effort to Create A Bill of Rights for an Automated Society,” White House, November 10, 2021.


“understand the ethical, legal, and societal implications of AI” and “ensure the safety and security of AI systems.”15

National Institute of Standards and Technology

NIST, which is housed within the Department of Commerce, conducts fundamental and applied research and measurement activities to cultivate trust and improve the design, development, and governance of AI systems. NIST published principles of explainable AI in 2020 before NAIIA was enacted.16 In NAIIA, Congress directed NIST to expand upon these efforts by developing a voluntary AI risk management framework through collaboration with stakeholders across public and private sectors. To date, NIST has held two workshops to develop the AI risk management framework, released two drafts of the framework, and published a draft playbook to help with implementation.17 NIST plans to publish the first version of the AI risk management framework in January 2023.

In addition, NIST is conducting several other trustworthy AI-related activities, including:

• Developing taxonomy, terminology, and testbeds for measuring risks in AI systems and informing the standards needed for key technical characteristics of AI trustworthiness.
• Developing data characterizations, key practices for data documentation, and datasets that the broader community can use to test or train AI systems while preserving privacy and cybersecurity.
• Coordinating across the government and with industry stakeholders to identify critical standards development activities, strategies, and gaps for trustworthy AI.18
• Developing guidance to facilitate voluntary data sharing arrangements among industry, federally funded research centers, and federal agencies to advance AI research and technologies.

15 Office of Science and Technology Policy, “Request for Information to the Update of the National Artificial Intelligence Research and Development Strategic Plan,” Federal Register, February 2, 2022.
16 P. Jonathon Phillips et al., “Four Principles of Explainable Artificial Intelligence,” NIST, September 2021.
17 “AI Risk Management Framework: Second Draft,” NIST.
18 “U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools,” NIST, August 9, 2019.


National Science Foundation

Achieving the responsible design and deployment of AI also requires integrating ethics into technology education and research at every stage—from K-12 education to AI developers. It requires viewing AI as an interdisciplinary field rather than a purely technical field. NSF funds university research across all non-biomedical disciplines (including social sciences) and numerous STEM education programs. As a result, the agency will play a key role in achieving these goals. In NAIIA, Congress directed NSF to make awards supporting research that contributes to the development of trustworthy AI, supports K-12, undergraduate, and graduate education on trustworthy AI, and creates faculty technology ethics fellowships to support more research into the field of technology ethics.19 Moreover, the CHIPS and Science Act of 2022 (P.L. 117-167) directs NSF to establish a requirement for an ethics statement in award proposals to ensure researchers are considering the social implications of their work.20 NSF also funds a network of 18 AI research institutes, each devoted to a different sector or AI-related challenge. This combined investment of $220 million reaches a total of 40 states and the District of Columbia. In 2021, NSF announced a partnership with NIST to establish an AI research institute on trustworthy AI.21 The winner of this solicitation should be announced later this year.

International

There are also international conversations taking place surrounding the development and responsible deployment of trustworthy AI. The Organisation for Economic Cooperation and Development (OECD) adopted a set of AI principles for guiding governments in responsible stewardship of trustworthy AI in 2019.22 Many individual countries have also established their own AI strategies that incorporate ethics to various extents. Singapore was one of the first to develop an AI governance framework in 2019, later iterations of which evolved into a practical toolkit for companies to demonstrate trustworthy AI in a practical manner.23 The European Union proposed the AI Act in 2021 to

19 Rep. Eddie Bernice Johnson, The National AI Initiative Act, H.R.6216, incorporated into H.R.6395, 116th Cong.
20 Rep. Eddie Bernice Johnson, CHIPS and Science Act of 2021, H.R.4521, incorporated into H.R.4346, 117th Cong.
21 James Donlon and Rebecca Hwa, “National Artificial Intelligence (AI) Research Institutes,” NSF, November 16, 2021.
22 “OECD AI Principles,” OECD, February 2019.
23 “Singapore’s Approach to AI Governance,” Singapore Personal Data Protection Commission, May 25, 2022.


harmonize regulations as they relate to AI systems, including a process for self-certification and government oversight of many categories of high-risk AI systems.24 Because AI risk management is a relatively new activity and organizations are required to self-certify that they control for AI risks, there is significant uncertainty surrounding the pending EU law’s requirements. Many U.S. companies are looking to the NIST AI risk management framework as a possible solution to this dilemma.

Private Sector Action

The private sector is also attempting to tackle issues related to developing and deploying trustworthy AI. Companies such as Microsoft, Google, and Intel have all published their own versions of AI ethics principles.25 Many industry groups are also engaging in their own activities to promote trustworthy AI development and deployment. For example, the U.S. Chamber of Commerce has launched a bipartisan commission on AI to “advance U.S. leadership in the use and regulation of AI technology.”26 Many of these principles developed by industry are generally abstract and lack concrete governance structures and accountability measures. However, some major technology companies have begun to develop and implement concrete measures.27 Other businesses are developing tools and practical methodologies to help organizations assess and mitigate AI-related risks. The Mozilla Foundation is funding open-source AI auditing tools.28 Some companies have developed proprietary tools that enable their clientele to identify and mitigate AI risks.29

***

24 “The AI Act,” European Union, April 2021.
25 “Responsible AI,” Microsoft, accessed September 21, 2022; “Responsible AI Practices,” Google, accessed September 21, 2022; “Intel’s Recommendations for the U.S. National Strategy on Artificial Intelligence,” Intel, March 5, 2019.
26 “U.S. Chamber Launches Bipartisan Commission on Artificial Intelligence to Advance U.S. Leadership,” U.S. Chamber of Commerce, January 18, 2022.
27 Jon Belkowitz and Leah Koshiyama, “Trust in the Time of AI: Why Salesforce Invests in Ethical Guardrails,” Salesforce, April 19, 2022.
28 “Mozilla Technology Fund Seeks People, Projects Auditing AI Systems with Open-Source Approaches,” Mozilla Foundation, September 6, 2022.
29 For examples, please see ORCAA and Credo AI.


Chairwoman STEVENS. Welcome to the Research and Technology hearing to examine the harmful impacts associated with artificial intelligence (AI) systems, as well as the opportunities with our artificial intelligence systems, the activities that academia, government, and industry are conducting to prevent, mitigate, and manage AI risks as these new technologies proliferate. I’m thrilled to be joined by this distinguished panel of witnesses, all of whom are in the room with us today. It is great to see your faces and to be together the first time since a March 2020 hearing, I believe. It is also of deep importance to be discussing the benefits and the challenges of artificial intelligence, the potential to influence many aspects of our lives and support our economic and national security. The applications in our everyday lives span from merely convenient like recommending your next movie, to transformational, like aiding doctors in earlier detection of disease. In my home State of Michigan, advances in artificial intelligence by automakers are accelerating the development of autonomous vehicles that will lead to reduced traffic and increased road safety. Artificial intelligence systems are also increasingly used to analyze massive amounts of data to propel research in fields to enhance our understanding of the universe and cosmology, to synthetic biology, to weather prediction. Call our ancestors. But ill-conceived or untested applications of artificial intelligence have also on occasion caused damage. We have already seen ways AI systems can amplify, perpetuate, or exacerbate inequitable outcomes. Researchers have shown that AI systems making decisions in high-risk situations, such as credit or housing, can be biased against already disadvantaged communities, causing harm. This is why we need to encourage people developing or deploying AI systems to be thoughtful about what they’re putting out into the world. We must develop the tools, methodologies, and standards to ensure that AI products and services are safe and secure, accurate, free of harmful bias, and otherwise trustworthy. We are in a moment of trust. Since taking over this gavel of the Research and Technology Subcommittee a few years ago, I have worked with my colleagues on both sides of the aisle to promote trustworthy AI. We’re working together. I was proud to secure trustworthy AI provisions in the CHIPS and Science Act that was passed and signed into law just last month, which also promotes the—or includes the Promoting Digital Privacy Technologies Act, which passed the House and awaits a vote in the Senate, supports privacy-enhanced data sets and tools for training AI systems.


Additionally, this Committee led the development of the 2020 National AI Initiative Act to accelerate and coordinate Federal investments in research standards and education of trustworthy AI. In that act we also directed NIST (National Institute of Standards and Technology) to develop an AI Risk Management Framework (AI RMF) to help organizations understand and mitigate the risks associated with these technologies. We’re all excited to be having today’s hearing and to discuss the progress of this work and the many other things that NIST is doing to promote trustworthy AI. Academia and industry are supporting ethical approaches to artificial intelligence. Universities across the country are adopting principles for responsible use of AI and incorporating ethics into their computer science (CS) curricula. Industry is moving past theoretical principles into practical approaches to mitigating AI risks. There’s more to do, there’s jobs to be had, and people’s lives are being impacted. With that, we’re here in Congress to ensure that the United States continues to lead the world in artificial intelligence and trustworthy artificial intelligence. And we thank our witnesses for their time. [The prepared statement of Chairwoman Stevens follows:] Good morning and welcome to today’s Research and Technology hearing to examine the harmful impacts associated with artificial intelligence systems, and the activities that academia, government, and industry are conducting to prevent, mitigate, and manage AI risks. I am thrilled to be joined by our distinguished panel of witnesses. It is great to be with you all in person today, and I look forward to hearing your testimony. Artificial intelligence has the potential to benefit many aspects of our lives and support our economic and national security. The applications in our everyday lives span from merely convenient, like recommending your next movie, to transformational, like aiding doctors in earlier detection of disease. In my home state of Michigan, advances in AI by automakers are accelerating the development of autonomous vehicles that will lead to reduced traffic and increased road safety. AI systems are also increasingly used to analyze massive amounts of data to propel research in fields to enhance our understanding of the universe in cosmology to synthetic biology to weather prediction. But ill-conceived or untested applications of AI have also caused great harm. We have already seen ways AI systems can amplify, perpetuate, or exacerbate inequitable outcomes. Researchers have shown that AI systems making decisions in high-risk situations, such as credit or housing, can be biased against already disadvantaged communities.


This is why we need to encourage people developing or deploying AI systems to be thoughtful about what they are putting out into the world. We must develop the tools, methodologies, and standards to ensure that AI products and services are safe and secure, accurate, free of harmful bias, and otherwise trustworthy. Since taking over the gavel of the Research and Technology Subcommittee, I have worked with my colleagues on both sides of the aisle to promote trustworthy AI. I was proud to secure trustworthy AI provisions in the CHIPS and Science Act— which the President signed into law last month. My Promoting Digital Privacy Technologies Act, which passed the House and awaits a vote in the Senate, supports privacy-enhanced datasets and tools for training AI systems. Additionally, this Committee led the development of the 2020 National AI Initiative Act to accelerate and coordinate Federal investments in research, standards, and education of trustworthy AI. In that Act, we also directed NIST to develop an AI risk management framework to help organizations understand and mitigate the risks associated with these technologies. I look forward to hearing about the progress of this work and the many other things NIST is doing to promote trustworthy AI in today’s discussion. Academia and industry are also supporting ethical approaches to AI. Universities across the country are adopting principles for responsible use of AI and incorporating ethics into their computer science curricula. Industry is moving past theoretical principles into practical approaches to mitigating AI risks. But there is still much more to do. I’m looking forward to hearing more about this work from our witnesses today and to discussing what we here in Congress can do to ensure the United States leads the world in trustworthy artificial intelligence. I’d like to again thank our witnesses for joining us today. Chairwoman STEVENS. With that, the Chair is going to recognize Ranking Member Mr. Feenstra for an opening statement. Mr. FEENSTRA. Thank you, Chairwoman Stevens, for holding this important hearing today. I very much value of this hearing. And I also want to thank Ranking Member Lucas for attending today. I’m very grateful for that also. And also to the distinguished panel that we have before us, it’s—I appreciate the time and effort that you have taken to come here and to give testimony on this important topic. Artificial intelligence is fundamentally changing the way we solve some of our society’s biggest challenges. From healthcare to transportation, commerce to cybersecurity, AI technologies are revolutionizing almost every


aspect of our daily life. But with every new and emerging technology comes new and evolving challenges and risks. Over the years, the Science Committee has held several hearings on AI, discussing challenges rang ranging from ethics to the work force needs. I hope we can use today’s hearing as an opportunity to further these important discussions and shed light on the importance of enabling safe and trustworthy AI. To do that, we have to first define what makes AI safe and trustworthy, and I believe our witnesses can help us shed light on that today. But in general, I think we can agree that safe and trustworthy AI will meet certain criteria, like including accuracy, privacy, and reliability. Additionally, it is important that trustworthy AI systems utilize robust data, while also protecting the safety and security of the user data. Some other important factors of trustworthy AI includes transparency, fairness, accountability, and the mitigation of harmful biases. These factors are particularly important to keep in mind as these technologies are being deployed for the use in our daily lives. It is also critical that the data used in AI technologies is accurate because the input data is the foundation, the literal foundation of AI. So that must be our general goal, transparent and fair AI with accurate data and strong privacy protections. We can ensure that by having the standards and evaluation methods in place for these technologies. The integration of trustworthy AI in key industries has the most potential use and significant competition to advance U.S. industry. AI and other industries of the future like quantum science can revolutionize how business and economics operate, improving efficiency, expanding services, and integrating operations. The key to these benefits, of course, is the trustworthy of AI. Here in Congress, Members of the Science Committee introduced the bipartisan National Artificial Intelligence Initiative Act in 2020, which was made into law through the Fiscal Year 2021 NDAA. The legislation created a broad national security to accelerate investments of responsible AI research, development, and standards, as well as education for AI work force. It facilitated a new public-private partnership to ensure that the United States leads the world in the development and the use of AI systems. Related to today’s hearing, the initiatives require the National Institute of Standards and Technology, NIST, to create the framework for managing risk associated with AI systems and best practices sharing to advance trustworthy AI systems. As a leader in AI research, measurement, evaluation and standards, NIST has been developing their voluntary AI Risk Management Framework since


this last July. The framework has been developed through a consensus-driven, open, transparent, and collaborative process with multiple workshops for industry to provide input. I look forward to hearing more about the progress NIST is making in implementing this directive and finalizing this important guidance from Ms. Tabassi. I believe that AI risk management from this framework will be critical for our industry to better mitigate risk associated with AI technologies, as well as promote the incorporation of trustworthiness in every stage from design to evaluation of AI technologies. I’m also looking forward to hearing from the U.S. Chamber of Commerce to learn more about the work through the Commission on the Artificial Intelligence Competitiveness, Inclusion, and Innovation and how they are working to help build customer confidence in AI technologies. I want to thank our witnesses again for their participation. I thank Madam Chair for putting this hearing on. And with that, I yield back. [The prepared statement of Mr. Feenstra follows:] Thank you, Chairwoman Stevens, for holding today’s hearing on this important issue. And thank you, to our distinguished panel of witnesses for joining us here today. Artificial intelligence is fundamentally changing the way we solve some of our society’s biggest challenges. From healthcare to transportation; commerce to cybersecurity; A.I. technologies are revolutionizing almost every aspect of daily life. But with every new and emerging technology comes new and evolving challenges and risks. Over the years, the Science Committee has held several hearings on A.I., discussing challenges ranging from ethics to workforce needs. I hope we can use today’s hearing as an opportunity to further these important discussions, and to shed light on the importance of enabling safe and trustworthy A.I. To do that, we have to first define what makes A.I. safe and trustworthy. I believe our witnesses can help shed light on this today. But in general, I think we can agree that safe and trustworthy A.I. will meet certain criteria like including accuracy, privacy, and reliability. Additionally, it is important that trustworthy A.I. systems utilize robust data while also protecting the safety and security of user data. Some other important factors of trustworthy A.I. include transparency, fairness, accountability, and mitigation of harmful biases. These factors are particularly important to keep in mind, as these technologies are being deployed for use in our daily lives.


It is also critical that data used by A.I. technologies is accurate because the input data is the foundation of A.I. So that must be our general goal: transparent and fair A.I. with accurate data and strong privacy protections. We can ensure that by having standards and evaluation methods in place for these technologies. The integration of trustworthy A.I. in key industries has the potential to be a significant competitive advantage for U.S. industry. A.I. and other industries of the future like quantum sciences can revolutionize how businesses and economies operate, improving efficiency, expanding services, and integrating operations. The key to these benefits, of course, is the trustworthiness of A.I. Here in Congress, Members of the Science Committee introduced the bipartisan National Artificial Intelligence Initiative Act of 2020, which was made law through the FY21 NDAA. This legislation created a broad national strategy to accelerate investments in responsible A.I. research, development, and standards, as well as education for the A.I. workforce. It facilitated new public-private partnerships to ensure the U.S. leads the world in the development and use of responsible A.I. systems. Related to today’s hearing, this initiative required the National Institute of Standards and Technology (NIST) to create a framework for managing risks associated with A.I. systems and best practices for sharing data to advance trustworthy A.I. systems. As a leader in A.I. research, measurement, evaluation, and standards, NIST has been developing its voluntary A.I. Risk Management Framework since last July. The framework has been developed through a consensus-driven, open, transparent, and collaborative process with multiple workshops for industry to provide input. I look forward to hearing more about the progress NIST is making in implementing this directive and finalizing this important guidance from Ms. Tabassi. I believe the A.I. Risk Management Framework will be a critical tool for industry to better mitigate risks associated with A.I. technologies as well as promote the incorporation of trustworthiness into every stage from design to evaluation of A.I. technologies. I am also looking forward to hearing from the U.S. Chamber of Commerce to learn more about their work through the Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation, and how they are working to help build consumer confidence in A.I. technologies. I want to thank our witnesses again for their participation. Madam Chair, I yield back. Chairwoman STEVENS. At some point in time, they will recall and remember that we had today’s hearing that is now actually both meeting in

person and virtually, so a couple of reminders to Members. First, Members and staff who are attending in person may choose to be masked. It’s not a requirement. Any individuals with symptoms, a positive test, or exposure to someone with COVID–19 should wear a mask while present. Members who are attending virtually should keep their video feed on as long as they’re present in the hearing. Members are responsible for their own microphones. Please keep your microphones muted or off unless you are speaking. Additionally, if Members have documents they wish to submit for the record, please keep them—or please email them to the Committee Clerk, whose email address was circulated prior to the hearing. If there are Members who wish to submit additional opening statements, your statements will be added to the record at this point. [The prepared statement of Chairwoman Johnson follows:] Thank you, Chairwoman Stevens and Ranking Member Feenstra, for holding today’s hearing. And welcome to our esteemed panel of witnesses. We are here today to learn more about the development of trustworthy artificial intelligence and the work being done to reduce the risks posed by AI systems. Recent advances in computing and software engineering, combined with an increase in the availability of data, have enabled rapid developments in the capabilities of AI systems. These systems are now deployed across every sector of our society and economy, including education, law enforcement, medicine, and transportation. These are sectors for which AI carries the potential for both great benefit, and great harm. One significant risk across sectors is harmful bias, which can occur when an AI system produces results that are systemically prejudiced. Bias in AI can amplify, perpetuate, and exacerbate existing structural inequalities in our society, or create new ones. The bias may arise from non-representative training data, implicit biases in the humans who design the system, and many other factors. It is often the result of the complex interactions among the human, organizational, and technical factors involved in the development of AI systems. Consequently, the solution to these problems is not a purely technical one. We must ensure that the writing, testing, and deployment of AI systems is an inclusive, thoughtful and accountable process that results in AI that is safe, trustworthy, and free of harmful bias. That goal remained central in our development of the National Artificial Intelligence Initiative Act, which I led alongside Ranking Member Lucas and which we enacted last Congress. In the National AI Initiative Act, we directed

the National Science Foundation (NSF) to support research and education in trustworthy AI. As we train the next generation of AI researchers, we must not treat ethics as something separate from technology development. The law specifically directs NSF to integrate ethics research and technology education from the earliest stages and establishes faculty fellowships in technology ethics. The recently enacted CHIPS and Science Act further directs NSF to require ethics statements in its award proposals to ensure researchers consider the potential societal implications of their work. As we will learn more about today, the National AI Initiative Act also directed the National Institute of Standards and Technology to develop a framework for trustworthy AI, in addition to carrying out measurement research and standards development to enable the implementation of such a framework. While AI systems continue to make rapid progress, the activities carried out under the National AI Initiative Act will be key to grappling with the sociotechnical questions posed by rapidly advancing AI systems. I look forward to hearing more from our witnesses today and to discussing what more the United States can do to ensure we are the world leader in the development of trustworthy AI. Thank you, and I yield back my time. Chairwoman STEVENS. And at this time, I’d like to introduce our witnesses. Our first witness is Elham Tabassi. Ms. Tabassi is the Chief of Staff for the Information Technology Laboratory at the National Institute of Standards and Technology. She leads NIST’s trustworthy and responsible AI program that aims to cultivate trust in the design, development, and use of AI technologies by improving measurement science, standards, and related tools. Ms. Tabassi is a member of the National AI Research Task Force and has been at NIST since 1999. Our next witness is Dr. Charles Isbell. Dr. Isbell is the Dean and John P. Imlay, Jr. Chair of the College of Computing at Georgia Tech. His recent work focuses on building autonomous systems that can interact with large numbers of other intelligence agents, including humans and AI systems. Dr. Isbell also studies the effects of AI bias and pursues reform in computing education, focusing on broadening participation and access. He is an elected fellow of AAAI (Association for the Advancement of Artificial Intelligence), ACM (Association for Computing Machinery), and the American Academy of Arts and Sciences. Our third witness is Mr. Jordan Crenshaw. Mr. Crenshaw serves as the Vice President of the U.S. Chamber of Commerce’s Technology Engagement Center. He also manages the Chamber’s Privacy Working Group and which is

comprised of nearly 300 companies and trade associations, which developed model privacy legislation and principles. Prior to his current position, Mr. Crenshaw led the Chamber’s Telecommunication and E-Commerce Policy Committee, which analyzes Federal privacy, cloud computing, broadband internet, e-commerce, and broadcast policies. Our final witness is Ms. Navrina Singh. Ms. Singh is the Founder and Chief Executive Officer (CEO) of Credo AI, which helps organizations monitor, measure, and manage AI-introduced risk. Prior to co-founding Credo AI, Ms. Singh was the Director and Principal of Product in Microsoft Cloud and AI, where she built natural language-based conversational AI products. Currently, Ms. Singh serves as a member of the National AI Advisory Committee, which is tasked with advising the President and the National AI Initiative Office on topics related to the National AI Initiative. As our witnesses know—should know, you will each have 5 minutes for your spoken testimony. Your written testimony will be included in the record for the hearing. They’re great testimonies. When you have completed your spoken testimony, we’ll begin with questions. Each Member will have 5 minutes to question the panel. We will start with Ms. Tabassi.

Testimony of Ms. Elham Tabassi, Chief of staff, Information Technology Laboratory, National Institute of Standards and Technology Ms. TABASSI. Good morning, Chairwoman Stevens, Ranking Member Feenstra, and distinguished Members of the Subcommittee. I am Elham Tabassi, and I serve as the lead for the Trustworthy and Responsible AI program at the Department of Commerce’s National Institute of Standards and Technology known as NIST. Thank you for the opportunity to testify today on NIST’s effort to advance the trustworthy and responsible development and use of artificial intelligence. This Committee is well aware of the importance of advancing research and standards to cultivate trust in AI. Thank you for your dedication to this important issue and for your support of NIST’s role. Artificial Intelligence holds the promise to revolutionize and enhance our society and economy, but the development and use of these systems are not without challenges or risks. Through robust collaboration with stakeholders across government, industry, civil groups, and academia, NIST works to

advance research, standards, measurements, and tools to manage these risks and realize the full promise of this technology for all Americans. Among its work, NIST is developing the AI Risk Management Framework, or AI RMF, to provide guidance on mapping, measuring, and managing risks associated with AI. Like the well-known cybersecurity and privacy frameworks, the AI RMF will provide a set of outcomes that enable dialog, understanding, and actions to manage AI risks. Critically, the framework will focus on managing risks not just to organizations, but also to individuals and society. This approach is reflective of the sociotechnical nature of AI systems as a product of the complex human, organizational, and technical factors involved in their design and development. As is the case with all our publications, NIST is taking a stakeholder-driven and open process to coordinate the development of the framework. From the start of this initiative last year, NIST has engaged a broad range of stakeholders, including through several workshops and public comment opportunities. Based on stakeholder feedback, and consistent with congressional direction, NIST is on track to publish the final AI RMF 1.0 in January 2023. The technology and standards landscape for AI will continue to evolve. Therefore, NIST intends for the framework and related guidance to be updated over time to reflect new knowledge, awareness, and practices. Building off the RMF, there is much more work to do to develop additional guidance, standards, measures, and tools to evaluate and measure AI trustworthiness, especially for specific characteristics and use cases. For example, NIST has significantly expanded its research efforts to mitigate harmful bias with a focus on a sociotechnical approach. To support the advancement of AI standards, NIST seeks to bolster knowledge, leadership, and coordination on AI, including by engaging with other government agencies within the United States and internationally. NIST engages with partners around the world, including through the Organization for Economic Cooperation and Development, OECD, and the U.S.-EU Trade and Technology Council (TTC) to advance shared goals in trustworthy and responsible AI. NIST also coordinates with other Federal agencies and leads several policymaking and interagency efforts. This includes administering the National Artificial Intelligence Advisory Committee or NAIAC, which advises the President and the National AI Initiative Office. Advancing research and standards that contribute to a more secure, private, fair, rights-affirming, and world-leading digital economy is a top priority for

NIST. Thank you for the opportunity to present on NIST’s activities to improve trustworthy and responsible AI. I look forward to your questions. [The prepared statement of Ms. Tabassi follows:]

Testimony of Elham Tabassi, Chief of Staff, Information Technology Laboratory, National Institute of Standards and Technology, United States Department of Commerce, before the United States House of Representatives, Committee on Science, Space, and Technology, Subcommittee on Research and Technology, Trustworthy AI: Managing the Risks of Artificial Intelligence, September 29, 2022 Chairwoman Stevens, Ranking Member Feenstra, and distinguished members of the Subcommittee, I am Elham Tabassi, Chief of Staff of the Information Technology Laboratory (ITL) and the lead for NIST’s trustworthy and responsible AI program at the Department of Commerce’s National Institute of Standards and Technology – known as NIST. We appreciate the committee’s continued support of our work and thank you for the opportunity to testify today on NIST’s efforts to improve the trustworthiness of artificial intelligence. NIST is home to five Nobel Prize winners, with programs focused on national priorities such as cybersecurity, advanced manufacturing, semiconductors, the digital economy, precision metrology, quantum information science, biosciences and artificial intelligence. NIST’s mission is to promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life. In the NIST Information Technology Laboratory, we work to cultivate trust in information technology and metrology. Trust in the digital economy is built upon key attributes like cybersecurity, privacy, usability, interoperability, equity, and avoiding bias and increasing usefulness in the development and deployment of technology. NIST conducts fundamental and applied research, advances standards to understand and measure limits and capabilities of technology and develops tools to evaluate such measurements. Technology standards and measurements—and the foundational and applied research that enables their development and use—are critical to advancing trust in digital products and services. These standards and measurements can

provide increased assurance and utility, thus enabling more secure, private, and rights-affirming technologies.

NIST’s Role in Artificial Intelligence NIST contributes to the research, standards, measurements, and data required to realize the full promise of artificial intelligence (AI) as a tool that will enable American innovation, enhance economic security, and improve our quality of life. As a non-regulatory agency, NIST prides itself on the strong partnerships it has cultivated with the government and private sector. NIST seeks and relies on diverse stakeholder feedback among government, industry, academia, and non-profit entities to develop and improve its resources. The collaborative, transparent, and open processes NIST uses to develop resources result in more effective and usable resources that are trusted, and therefore, widely used by various organizations. Our resources are used by federal agencies, as well as private sector organizations of all sizes, educational institutions, and state, local, tribal, and territorial governments. Much of NIST’s AI effort30 focuses on cultivating trust in the design, development, and use of AI technologies and systems. Working with the community, NIST is: 

• conducting fundamental research to advance trustworthy AI technologies and understand and measure their capabilities and limitations
• applying AI research and innovation across NIST laboratory programs
• establishing benchmarks and developing data and metrics to evaluate AI technologies
• leading and participating in the development of technical AI standards
• contributing to discussions and development of AI policies, including supporting the National AI Advisory Committee31

30 https://www.nist.gov/artificial-intelligence.
31 https://www.nist.gov/artificial-intelligence/national-artificial-intelligence-advisory-committee-naiac.

NIST AI Risk Management Framework Among its many AI-related activities, NIST is developing the AI Risk Management Framework32 (AI RMF) to provide guidance on managing risks to individuals, organizations, and society associated with AI. AI risk management is about offering a path to minimize potential negative impacts of AI systems, as well as pointing to opportunities to maximize positive impacts and creating opportunities for innovation. Identifying, mitigating, and minimizing risks and potential harms associated with AI technologies are essential steps towards the development of trustworthy AI systems and their appropriate and responsible use. Like NIST’s well-known Cybersecurity and Privacy Frameworks, the NIST AI RMF will provide a set of outcomes that enable dialogue, understanding, and actions to manage AI risks. The AI RMF is a voluntary framework seeking to provide a flexible, structured, and measurable process to address AI risks prospectively and continuously throughout the AI lifecycle. In August, NIST released its second draft of the AI RMF33 with the goal of releasing AI RMF 1.0 in January. This is consistent with congressional direction in the National Artificial Intelligence Act of 2020. This latest draft builds on the March 2022 initial draft and a December 2021 concept paper – and the many comments from organizations and individuals. NIST also released a draft AI RMF Playbook34 in August. This companion to the AI RMF when completed will provide additional guidance to organizations on the actions they can take to meet the outcomes included in the Framework. AI research and development, as well as the standards landscape, are evolving rapidly. For that reason, the AI RMF and its related documents will evolve over time and reflect new knowledge, awareness, and practices. NIST intends to continue its robust engagement with stakeholders to keep the Framework up to date with AI trends and reflect experience based on the use of the AI RMF. Ultimately, the AI RMF will be offered in multiple formats, including online versions, to provide maximum flexibility. The Framework is being developed through a consensus-driven, open, transparent, and collaborative process. From the start of this initiative, NIST has offered a broad range of stakeholders the opportunity to take part in

32 https://www.nist.gov/itl/ai-risk-management-framework.
33 https://www.nist.gov/system/files/documents/2022/08/18/AI_RMF_2nd_draft.pdf.
34 https://pages.nist.gov/AIRMF/.

workshops35, respond to a Request for Information (RFI)36, and review draft reports37 and other documents including draft approaches38 and versions of the framework39. NIST also has reached out directly to AI practitioners along with other stakeholders across a full spectrum of interests domestically and internationally. This outreach has included companies, government agencies, academia, and not-for-profit organizations representing civil society, consumers, and industry. NIST has actively encouraged others to provide direct input, and many organizations and individuals have contributed their insights to NIST. Those have included international organizations, with the goal of aligning the NIST Framework with standards and approaches being developed around the globe. The current draft AI RMF defines certain key characteristics of trustworthy AI systems and offers guidance for mapping, measuring, and managing them. As defined in the draft AI RMF, trustworthy AI is valid and reliable, safe, fair, and bias is managed, secure and resilient, accountable and transparent, explainable and interpretable, and privacy-enhanced. AI systems are socio-technical in nature, meaning they are a product of the complex human, organizational, and technical factors involved in their design, development, and use. Many of the trustworthy AI characteristics – such as bias, fairness, interpretability, and privacy – are directly connected to societal dynamics and human behavior.

NIST’s Research on AI Trustworthiness Characteristics To build on NIST’s work on the AI RMF and provide additional guidance to organizations to advance trustworthy and responsible AI, NIST also conducts fundamental research on many of the AI trustworthiness characteristics.

35 https://www.nist.gov/itl/ai-risk-management-framework/ai-risk-management-framework-workshops-events.
36 https://www.nist.gov/itl/ai-risk-management-framework/ai-rmf-development-request-information.
37 https://www.nist.gov/system/files/documents/2022/03/17/AI-RMF-1stdraft.pdf.
38 https://www.nist.gov/system/files/documents/2021/12/14/AI%20RMF%20Concept%20Paper_13Dec2021_posted.pdf.
39 https://www.nist.gov/itl/ai-risk-management-framework.

AI Trustworthiness Characteristics – Fair and Bias is Managed
While there are many approaches for ensuring technologies that we use every day are safe and secure, there is less research into how to advance systems that are fair with bias managed. Fairness in AI includes concerns for equality and equity by addressing issues such as bias and discrimination. Standards of fairness can be complex and difficult to define because perceptions of fairness differ among cultures and may shift depending on application and context of use. NIST has significantly expanded its research efforts to identify, understand, measure, manage and mitigate bias, with a focus on a sociotechnical approach. NIST recently published “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence” (NIST Special Publication 1270)40, which identifies the concepts and challenges associated with bias in AI and provides preliminary guidance for addressing them. NIST has identified three major categories of AI bias to be considered and managed: systemic, computational, and human, all of which can occur in the absence of prejudice, partiality, or discriminatory intent. Current attempts for addressing the harmful effects of AI bias remain focused largely on computational factors such as representativeness of datasets and fairness of machine learning algorithms. Human and systemic institutional and societal factors are significant sources of AI bias that are currently overlooked. Systemic bias can be present in AI datasets, the organizational norms, practices, and processes across the AI lifecycle, and the broader society that uses AI systems. Human biases relate to how an individual or group perceives and uses AI system information to make a decision or fill in missing information. Through the NIST National Cybersecurity Center of Excellence (NCCoE), we are beginning a project, “Mitigation of AI/ML Bias in Context”41, to develop additional guidance to mitigate bias in AI and Machine Learning (ML). Under the NCCoE model, NIST works collaboratively with relevant industry and academia partners. The “Mitigation of AI/ML Bias in Context” project intends to apply the concepts in our March 2022 NIST publication on bias to build a proof-of-concept implementation, or “use case,” for credit underwriting decisions in the financial services sector. Future application use cases may also be considered, such as hiring or school admissions. These will help promote fair and positive outcomes that benefit users of AI/ML services, the organizations that deploy them, and all of society. A small but novel part of this project will examine the interplay between bias and cybersecurity, with the goal of identifying approaches which might mitigate risks that exist across these two critical characteristics of trustworthy AI.

40 https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf.
41 https://www.nccoe.nist.gov/projects/mitigating-aiml-bias-context.
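The credit underwriting use case above is about measuring bias in a concrete context. Purely as an illustrative sketch of the computational slice of that problem (this is not NIST or NCCoE methodology, and the data below are invented), a simple starting point is to compare outcome rates across groups and summarize the gap:

```python
# Illustrative only: a simple group-outcome disparity check on invented
# credit-underwriting decisions. As the testimony stresses, computational
# checks like this address only one of the three bias categories (systemic,
# computational, human) and cannot establish harmful bias on their own.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group_label, approved_bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Lowest group approval rate divided by the highest (1.0 means parity)."""
    return min(rates.values()) / max(rates.values())

# Invented sample: (group, approved)
sample = [("A", True), ("A", True), ("A", False), ("A", True),
          ("B", True), ("B", False), ("B", False), ("B", False)]

rates = approval_rates(sample)
print(rates)                   # {'A': 0.75, 'B': 0.25}
print(disparity_ratio(rates))  # ~0.33, a gap that would warrant investigation
```

A low ratio flags a disparity worth investigating in context; it does not by itself demonstrate discrimination or identify its source.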

AI Trustworthiness Characteristics – Explainable and Interpretable
Explainability and interpretability are important characteristics to ensure users and operators of AI can understand the decisions or predictions made by AI, thus avoiding the “opaque system” concept associated with AI. Explainability refers to a representation of the mechanisms underlying an algorithm’s operation, whereas interpretability refers to the meaning of an AI system’s output in the context of its designed functional purpose. NIST has released two publications aimed at providing deeper understanding of the principles of explainability and interpretability: “Four Principles of Explainable Artificial Intelligence” (NISTIR 8312)42 and “Psychological Foundations of Explainability and Interpretability in Artificial Intelligence” (NISTIR 8367)43.

AI Trustworthiness Characteristics – Secure and Resilient
AI systems that can withstand adversarial attacks and maintain confidentiality, integrity, and availability are resilient and secure systems. NIST released the draft “A Taxonomy and Terminology of Adversarial Machine Learning”44 (NISTIR 8269) to advance a taxonomy for securing applications of AI, specifically, adversarial machine learning. NIST’s Cybersecurity Framework45 is widely used to address the cybersecurity risks of organizations. NIST is constantly updating the Cybersecurity Framework to account for changes in the cybersecurity technology, standards, and risk landscape. NIST is building an experimentation testbed called Dioptra46 to begin to evaluate adversarial attacks against ML algorithms. The testbed aims to facilitate security evaluations of ML algorithms under a diverse set of conditions. To that end, the testbed has a modular design enabling researchers to easily swap in alternative datasets, models, attacks, and defenses. The result is the ability to advance the metrology needed to ultimately help secure AI systems.

42 https://www.nist.gov/publications/four-principles-explainable-artificial-intelligence.
43 https://www.nist.gov/publications/psychological-foundations-explainability-and-interpretability-artificial-intelligence.
44 https://www.nccoe.nist.gov/ai/adversarial-machine-learning.
45 https://www.nist.gov/cyberframework.
46 https://pages.nist.gov/dioptra/.
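Dioptra’s actual interfaces are not described in the testimony, so the following is only a hedged sketch of what a modular, swap-in design for adversarial evaluation can look like in practice; the component names and signatures are invented for illustration.

```python
# Hedged sketch of a modular adversarial-evaluation harness. This is NOT
# Dioptra's API: dataset, model, attack, and defense are plain callables so
# that any one of them can be swapped without touching the others.
from typing import Callable, Iterable, List, Tuple

Example = Tuple[List[float], int]  # (features, label)

def evaluate(dataset: Iterable[Example],
             model: Callable[[List[float]], int],
             attack: Callable[[List[float], Callable[[List[float]], int]], List[float]],
             defense: Callable[[List[float]], List[float]]) -> float:
    """Accuracy of `model` on `dataset` after applying `attack` then `defense`."""
    correct = total = 0
    for features, label in dataset:
        perturbed = attack(features, model)  # adversarially perturb the input
        cleaned = defense(perturbed)         # optional mitigation step
        correct += int(model(cleaned) == label)
        total += 1
    return correct / total if total else 0.0

# Trivial stand-ins so the sketch runs end to end.
data = [([0.9], 1), ([0.1], 0), ([0.8], 1), ([0.2], 0)]
model = lambda x: int(x[0] > 0.5)
no_attack = lambda x, m: x
flip_attack = lambda x, m: [1.0 - x[0]]  # pushes each input across the boundary
no_defense = lambda x: x

print(evaluate(data, model, no_attack, no_defense))    # 1.0 (clean accuracy)
print(evaluate(data, model, flip_attack, no_defense))  # 0.0 (accuracy under attack)
```

Holding the harness fixed while varying one component at a time is what makes results comparable across datasets, models, attacks, and defenses.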

AI Trustworthiness Characteristics – Privacy-enhanced Privacy safeguards the important human values of autonomy and dignity through methods that focus on providing individuals with anonymity, confidentiality, and control over various facets of their identities. These outcomes generally should guide choices for AI system design, development, and deployment. From a policy perspective, privacy-related risks may overlap with security, bias, and transparency. NIST’s Privacy Risk Assessment Methodology47, developed in 2016 and NIST’s Privacy Framework48, issued in 2020, are voluntary tools that organizations from all industry sectors across the world are using to identify and manage privacy risks in the systems, products and services they develop and deploy, improve their privacy programs, and better comply with privacy regulation. NIST is also conducting research on privacy-enhancing technologies (PETs) to advance data-driven, innovative solutions to preserve the right to privacy, including hosting the Privacy Engineering Collaboration Space49, a virtual public platform that serves as a clearinghouse for open-source tools and PETs use cases. In coordination with the National Science Foundation (NSF) and the White House Office of Science and Technology Policy (OSTP), NIST is co-sponsoring the U.S.-U.K. prize competition on PETs50. First announced at the Summit for Democracy in December 2021, the winning solutions will compete for a combined U.S.-U.K. prize pool of $1.6 million and will be showcased at the second Summit for Democracy anticipated in early 2023.
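The testimony describes privacy-enhancing technologies only in general terms. As one commonly cited example (a substitution chosen for illustration, not a description of the prize-challenge solutions), the Laplace mechanism from differential privacy releases an aggregate statistic with calibrated noise:

```python
# Illustrative only: an epsilon-differentially-private count via the Laplace
# mechanism, one well-known privacy-enhancing technique. Not drawn from the
# testimony or from the U.S.-U.K. prize-challenge entries.
import random

def dp_count(values, predicate, epsilon: float) -> float:
    """Noisy count of items satisfying `predicate`; a counting query has sensitivity 1."""
    true_count = sum(1 for v in values if predicate(v))
    scale = 1.0 / epsilon  # Laplace scale = sensitivity / epsilon
    # The difference of two iid exponentials with mean `scale` is Laplace(scale) noise.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise

ages = [34, 29, 41, 57, 62, 45, 38]
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))  # true count is 4, plus noise
```

Smaller epsilon values add more noise and give stronger privacy; choosing epsilon for a given use case is itself a risk-management decision.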

47 https://www.nist.gov/itl/applied-cybersecurity/privacy-engineering/resources.
48 https://www.nist.gov/privacy-framework/privacy-framework.
49 https://www.nist.gov/itl/applied-cybersecurity/privacy-engineering/collaboration-space.
50 https://www.nist.gov/itl/applied-cybersecurity/privacy-engineering/collaboration-space/prize-challenges.
51 https://www.nist.gov/applied-ai.

Research on Applications of AI
NIST’s multidisciplinary laboratories and varied fields are an ideal environment to develop and apply AI51. Various AI techniques are being used to support NIST scientists and engineers, drawing on ML and AI tools to gain a deeper understanding of and insight into our research. NIST is integrating AI into the design, planning, and optimization of NIST’s research efforts – including hardware for AI52, computer vision, engineering biology and biomanufacturing, image and video understanding, medical imaging, materials science, manufacturing, disaster resilience, energy efficiency, natural language processing, biometrics, quantum science, robotics, and advanced communications technologies. Key focus areas include innovative measurements using AI/ML techniques, predictive systems using AI/ML models, and enabling and reducing the barriers to autonomous measurement platforms.

AI Measurement and Evaluation NIST has a long history of devising appropriate metrics, measurement tools, and challenge problems to support technology development. NIST first started the measurement and evaluation of automated fingerprint identification systems in the 1960s. Evaluations strengthen research communities, establish research methodology, support the development of standards, and facilitate technology transfer. NIST is looking to bring these benefits of community evaluations to bear on the problem of constructing trustworthy AI systems. These evaluations will begin with community input to identify potential harms of selected AI technologies in context, and the data requirements for AI evaluations. NIST also hosts a biweekly AI metrology colloquia series53, where leading researchers share current work on AI measurement and evaluation. As discussed above, NIST has been engaged in focused efforts to establish common terminologies, definitions, and taxonomies of concepts pertaining to characteristics of AI technologies in order to form the necessary underpinnings for trustworthy AI systems. Each of these characteristics also requires its own portfolio of measurements and evaluations. For each characteristic, NIST aims to document and improve the definitions, applications, and strengths and limitations of metrics and measurement methods in use or being proposed.

52 https://www.nist.gov/artificial-intelligence/hardware-ai.
53 https://www.nist.gov/programs-projects/ai-measurement-and-evaluation/ai-metrology-colloquia-series.

NIST’s current efforts represent only a small portion of the research that will be required to test and evaluate trustworthy AI systems. A significant challenge in the evaluation of trustworthy AI systems is that context (the specific use case) matters; accuracy measures alone will not provide enough information to determine if deploying a system is warranted. The accuracy measures must be balanced by the associated risks or societal harms that could occur. The tolerance for error drops as the potential impacts of risk rise. New NIST efforts in AI evaluation will focus on other socio-technical aspects of system performance in addition to accuracy. In particular, the evaluations have the goal of identifying risks and harms of systems before such systems are deployed, and to define (and eventually create) data sets and evaluation infrastructure that will allow system builders to detect the extent to which their system exhibits those harms. Examples of NIST AI measurement and evaluation projects54 include: 

• Biometrics: Over the past sixty years, NIST has been testing and evaluating biometric recognition technologies, including face recognition, fingerprint, biometric quality, iris recognition, and speaker recognition.
• Computer vision: NIST’s computer vision program includes several activities contributing to the development of technologies that extract information from image and video streams through systematic, targeted annual evaluations and metrology advances, including the Open Media Forensics Challenge, Activities in Extended Video (ActEV), handwriting recognition and translation evaluation, and others.
• Information retrieval: The information retrieval research uses large, human-generated text, speech, and video files to create test collections through the Text Retrieval (TREC), TREC Video Retrieval Evaluation (TRECVID), and Text Analysis (TAC) Conferences. The Text Retrieval Conference is responsible for significant advancements in search technology. A 2010 NIST study55 estimated that without TREC, U.S. internet users would have spent an estimated 3.5 billion additional hours using search engines between 1999 and 2009.

54 https://www.nist.gov/programs-projects/ai-measurement-and-evaluation/nist-ai-measurement-and-evaluation-projects.
55 https://trec.nist.gov/pubs/2010.economic.impact.pdf.
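The evaluation discussion before this project list notes that accuracy alone cannot justify deployment and that tolerance for error drops as potential impacts rise. A minimal illustration, with invented error rates and harm costs, makes the point:

```python
# Invented numbers for illustration only: identical error rates imply very
# different expected harm depending on the cost of each error in context.
def expected_harm_per_decision(false_pos_rate, false_neg_rate, fp_cost, fn_cost):
    return false_pos_rate * fp_cost + false_neg_rate * fn_cost

# The same measured system performance...
fpr, fnr = 0.02, 0.05

# ...evaluated in two hypothetical contexts with different assumed harm costs.
low_stakes = expected_harm_per_decision(fpr, fnr, fp_cost=0.1, fn_cost=0.1)
high_stakes = expected_harm_per_decision(fpr, fnr, fp_cost=5.0, fn_cost=500.0)

print(low_stakes)   # 0.007 -> accuracy alone may be an adequate summary
print(high_stakes)  # 25.1  -> the same error rates may be unacceptable
```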

AI Standards NIST plays a critical role in the standards process as the nation’s measurement laboratory and has a unique role relating to standards in the Federal enterprise. Our coordination function, currently defined under the National Technology Transfer and Advancement Act and the NIST Organic Act, has yielded benefits to the nation ever since the Institute was established by Congress as the National Bureau of Standards in 1901. NIST’s strong ties to industry and the standards development community have enabled NIST to take on critical standards-related challenges and deliver timely and effective solutions. NIST works to support the development of AI standards that promote innovation and public trust in systems that use AI. Pursuant to U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools56, NIST seeks to bolster AI standards-related knowledge, leadership, and coordination; conduct research to support development of technically sound standards for trustworthy AI; promote partnerships to develop and use standards; and engage internationally to advance AI standards. I serve as the Federal AI Standards Coordinator to work across the government and industry stakeholders to gather and share information on AI standards-related needs, strategies, and best practices. NIST facilitates federal agency coordination in the development and use of AI standards in part through the Interagency Committee on Standards Policy (ICSP) AI Standards Coordination Working Group57. This working group seeks to foster agency interest and participation in AI standards and conformity assessment activities, facilitate coordination of U.S. government positions on draft standards, identify effective means of coordinating with and contributing towards voluntary consensus bodies, align U.S. government activities with those of the private sector on AI standards development activities, promote effective and consistent federal policies leveraging AI

56 https://www.nist.gov/system/files/documents/2019/08/10/ai_standards_fedengagement_plan_9aug2019.pdf.
57 https://www.nist.gov/standardsgov/icsp-ai-standards-coordination-working-group-aiscwgcharter.

standards, and raise awareness of federal agencies’ use of AI that contributes to standards activities. NIST also engages internationally through bilateral and multilateral work on AI. The United States championed development of the first international principles for the responsible use of AI at the Organisation for Economic Cooperation and Development, or OECD. The U.S. also serves as a founding member of the Global Partnership on AI, which includes all members of the G7 and others such as Brazil and India, to coordinate R&D AI initiatives. NIST advances research on trustworthy AI with the Indo-Pacific Economic Framework. NIST supports the US-EU Trade and Technology Council (TTC) in building common approaches for trustworthy AI. Under the TTC, the U.S. and EU have launched a new AI sub-working group where NIST is working towards common frameworks for AI risk management and developing metrics and methodologies for measuring AI trustworthiness. And as mentioned above, the U.S. – led by NIST, NSF, and OSTP – is collaborating with the UK to develop prize challenges on advancing privacy-enhancing technologies.

Interagency Coordination NIST leads and participates in several federal AI policymaking efforts and engages with many other federal offices and interagency groups. This includes administering the National Artificial Intelligence Advisory Committee (NAIAC)58, on behalf of the Department of Commerce. The NAIAC is tasked with advising the President and the National AI Initiative Office. NIST supports the operation of this advisory committee. The Secretary of Commerce appointed the 27 members in April 2022. NAIAC held its first meeting in May 2022. Five working groups have been established to focus NAIAC’s work on leadership in trustworthy AI, leadership in research and development, supporting the U.S. workforce and providing opportunity, U.S. leadership and competitiveness, and international cooperation. NIST also co-chairs the National Science and Technology Council’s Machine Learning and Artificial Intelligence Subcommittee59, the Networking and Information Technology Research and Development’s (NITRD) AI

58 https://www.nist.gov/artificial-intelligence/national-artificial-intelligence-advisory-committee-naiac.
59 https://www.ai.gov/about/#MLAI-SC_Machine_Learning_and_AI_Subcommittee.

Working group60, and the NITRD Fast Track Action Committee61 which is drafting a national strategy to advance privacy-preserving data sharing and analytics. NIST founded and is co-chairing the AI Standards Coordination Working Group (AISCWG) under the Interagency Committee on Standards Policy (ICSP). NIST's AI lead also serves as Federal AI Standards Coordinator and is a member of the National AI Research Resource Task Force62.

Conclusion Advancing artificial intelligence research and standards that contribute to a secure, private, interoperable, and world-leading digital economy is a top priority for NIST. Our economy is increasingly global, complex, and interconnected. It is characterized by rapid advances in technology. The timely availability of AI trustworthiness standards and guidance is a dynamic and critical challenge. Through robust collaboration with stakeholders across government, industry, and academia in the U.S. and elsewhere, NIST aims to cultivate trust and foster an environment that enables AI innovation on a global scale – and to do so in a way that respects and advances human rights. NIST’s team includes some of the top AI and standards experts in the world. This includes staff with multidisciplinary backgrounds in science and engineering. Working with our partners in other federal agencies, the private sector, academia, and other allied countries, and with the support of Congress, we will work tirelessly to address current and future challenges. Thank you for the opportunity to present on NIST activities to improve AI trustworthiness. I look forward to your questions.

60 https://www.ai.gov/a-new-nitrd-iwg-for-artificial-intelligence-ai-rd/.
61 https://www.nitrd.gov/coordination-areas/privacy-rd/appdsa/.
62 https://www.ai.gov/naiac/.

Elham Tabassi (Fed), Chief of Staff, Information Technology Laboratory
Elham Tabassi is the Chief of Staff in the Information Technology Laboratory (ITL) at the National Institute of Standards and Technology (NIST). She leads NIST’s Trustworthy and Responsible AI program that aims to cultivate trust in the design, development, and use of AI technologies by improving measurement science, standards, and related tools in ways that enhance economic security and improve quality of life. She has been working on various machine learning and computer vision research projects with applications in biometrics evaluation and standards since she joined NIST in 1999. She is the principal architect of NIST Fingerprint Image Quality (NFIQ), which is now an international standard for measuring fingerprint image quality and has been deployed in many large-scale biometric applications worldwide. She is a member of the National AI Research Resource Task Force, a senior member of IEEE, and a fellow of the Washington Academy of Sciences.

PUBLICATIONS

• NIST Fingerprint Image Quality 2 (July 13, 2021). Authors: Elham Tabassi, Martin Olsen, Oliver Bausinger, Christoph Busch, Andrew Figlarz, Gregory Fiumara, Olaf Henniger, Johannes Merkle, Timo Ruhland, Christopher Schiel, Michael Schwaiger. NIST Fingerprint Image Quality (NFIQ 2) is open source software that links image quality of optical and ink 500 pixel per inch fingerprints to operational …
• NIST Special Database 302: Nail to Nail Fingerprint Challenge (December 11, 2019). Authors: Gregory P. Fiumara, Patricia A. Flanagan, John D. Grantham, Kenneth Ko, Karen Marshall, Matthew Schwarz, Elham Tabassi, Bryan Woodgate, Christopher Boehnen. In September 2017, the Intelligence Advanced Research Projects Activity (IARPA) held a data collection as part of its Nail to Nail (N2N) Fingerprint Challenge …
• Nail to Nail Fingerprint Challenge: Enrollment Set Size Variability (June 24, 2019). Authors: Gregory P. Fiumara, Kenneth Ko, Elham Tabassi, Patricia A. Flanagan, John D. Grantham, Karen Marshall, Matthew Schwarz, Bryan Woodgate. In September 2017, the Intelligence Advanced Research Projects Activity held a fingerprint data collection as part of the Nail to Nail Fingerprint Challenge …
• NIST Special Database 301: Nail to Nail Fingerprint Challenge Dry Run (July 11, 2018). Authors: Gregory P. Fiumara, Patricia A. Flanagan, Matthew Schwarz, Elham Tabassi, Christopher Boehnen. In April 2017, the Intelligence Advanced Research Projects Activity (IARPA) held a dry run for the data collection portion of its Nail to Nail (N2N) Fingerprint …
• Nail to Nail Fingerprint Challenge: Prize Analysis (May 3, 2018). Authors: Gregory P. Fiumara, Elham Tabassi, Patricia A. Flanagan, John D. Grantham, Kenneth Ko, Karen Marshall, Matthew Schwarz, Bryan Woodgate, Christopher Boehnen. In September 2017, the Intelligence Advanced Research Projects Activity held a fingerprint data collection as part of the Nail to Nail Fingerprint Challenge …

*** Chairwoman STEVENS. Dr. Isbell.

Testimony of Dr. Charles Isbell, Dean and John P. Imlay, Jr. Chair of the College of Computing, Georgia Institute of Technology Dr. ISBELL. Thank you, Subcommittee Chair Stevens, Ranking Members Feenstra and Lucas, and distinguished Members of the Subcommittee. I’m Charles Isbell. I’m a Professor in and Dean for the College of Computing at Georgia Tech. Thank you for the opportunity to be here today. So by way of explaining my background, let me note that while I tend to focus on statistical machine learning, my research passion is actually interactive artificial intelligence. As noted at the top of the hearing, there, the fundamental research goal is to understand how to build autonomous agents who must live and interact with large numbers of other intelligent agents, some of whom may be human. But I’m also an educator. As such, I spend much of my energy focusing on providing access to all those who wish to be a part of this ongoing conversation around the role of AI and computing in our lives. My discussion today and answers to your questions you ask will be informed by both my research and educator selves. So let us begin this discussion by defining our terms. There are many potential definitions of AI. My favorite one is that it is the art and science of making computers act the way they do in the movies. In the movies, computers are often semi-magical and anthropomorphic. They do things that if humans did them, we would say they required intelligence. This definition is borne out in our use of AI in the everyday world. We use the infrastructure of AI to search billions upon billions of documents to find the answers to a staggering variety of questions, often expressed literally

as questions. We use automatically tagged images to organize our photos. And we use that same infrastructure to plan optimal routes for trips, even altering our routes on the fly in the face of changes in traffic. In fact, we let our cars mostly drive themselves in that very same traffic playing the role of a tireless chauffeur. As noted by the Chair, we’re able to automatically detect tumors from Xrays, even those that are trained—that trained doctors find difficult to see. We let computers finish our sentences as we type text and use search engines, sometimes facilitating a subtle shift from prediction of our behavior to influence over our behavior. Often, we take advantage of these services by using our phones to interpret a wide variety of spoken commands. So in some very important sense, AI already exists. It is not the AI of fanciful science fiction, neither benevolent intelligence working with humans as we traverse the galaxy, nor malevolent AI that seeks humanity’s destruction. Nonetheless, we are living every day with machines who make decisions that if humans made them, we would attribute to intelligence. And the machines often make those decisions faster, and some might argue better, than humans would. Yet like all computing systems, at bottom, AI simply makes us more efficient. It amplifies our ability to make decisions, including bad ones, all too often automating the biases baked into our data and that of its developers. By way of example, according to the Marshall Project, most States use some form of automated risk assessment at some stage in the criminal justice system. We set out to predict recidivism as if that means the chance of committing a crime again, when in fact, what we’re actually predicting is the chance of being arrested and convicted again. As with the shift from predicting behavior to influencing it, this distinction is subtle, but important. Without recognition of the difference, one can create a feedback loop and make things worse, without even noticing it. Although we sometimes act as if the machine is doing the work, it is worth noting that these machines are making decisions with us, with humans. They are partners, and as with any partner, it is important that we understand what our partner is doing and why. To make AI trustworthy, we need a more informed citizenry, something we can accomplish by requiring that our AI partners are more transparent on the one hand, but that we are more savvy on the other. So speaking of definitions, by transparency, I mean that an AI algorithm should be inspectable, that the kind of data the algorithm uses to build its model should be available, and the decisions that such algorithms make should

be understandable. In other words, as we deploy these algorithms, each algorithm should be able to explain its output. “This applicant was assigned this score because” is more useful and less prone to misuse than just “This applicant was assigned this score.” But to really understand such machines, much less to create them, we should strive for all of our citizens to not only be literate, but to be competent. That is, they must understand computing and computational thinking and how it fits into problem solving in their everyday lives. In the long term, one of the key solutions to AI bias will be bringing a wider group of people into computing education and into machine learning more specifically. We have to improve the number and the diversity of those entering the field and participating in and influencing the conversation because it is the right thing to do, but also because it is the only way for us to compete. It should not be lost that putting these two thoughts together suggests that the process by which we build AI algorithms is a shared effort that requires a wide swath of citizens to be informed and engaged and for developers to accept the responsibility for including the users of and sometimes targets of those systems in the development process itself. As a field, we have not caught up to the reality of the responsibility that we hold, and it is something that we simply must do. We must move from tool sets and skill sets to mindsets, incorporating responsibility in all that we do from the ground up. I’m very excited for this hearing. I think advances in AI are essential to our economic and social future. These are all areas in which funding—the funding power of the National Science Foundation and NIST as well can make a huge difference. So thank you very much, and I look forward to your questions. [The prepared statement of Dr. Isbell follows:] *** Subcommittee Chair Stevens, Subcommittee Ranking Member Feenstra, Committee Chair Johnson, Ranking Member Lucas, and distinguished members of the subcommittee, my name is Dr. Charles Isbell and I am a Professor in and Dean for the College of Computing at Georgia Tech. Thank you for the opportunity to appear before this Subcommittee to discuss: 1. The importance of a culture of responsibility around artificial intelligence (AI) systems.

2. The need for transparency in AI systems in order to identify harmful bias. 3. Mitigation of the risks in AI. By way of explaining my background, let me note that while I tend to focus on statistical machine learning, my research passion is actually artificial intelligence. I like to build large integrated systems, so I also tend to spend a great deal of my time doing research on autonomous agents, interactive entertainment, some aspects of human-computer interaction, software engineering, and even programming languages. I think of my field as interactive artificial intelligence. My fundamental research goal is to understand how to build autonomous agents that must live and interact with large numbers of other intelligent agents, some of whom may be human. Progress towards this goal means that we can build artificial systems that work with humans to accomplish tasks more effectively; can respond more robustly to changes in environment, relationships, and goals; and can better co-exist with humans as long-lived partners. As the members of this Subcommittee well know, there has been an explosion in the development and deployment of what we might call AI technology. With that explosion has come a corresponding explosion in interest in AI. In any discussion—particularly technical ones—it helps to define our terms. There are many potential definitions of AI. My favorite one is that it is “the art and science of making computers act like they do in the movies.” In the movies, computers are often semi-magical and anthropomorphic; they do things that, if humans did them, we would say they required intelligence. This definition is borne out in our use of AI in the everyday world. We use the infrastructure of AI to search billions upon billions of documents to find the answers to a staggering variety of questions—often expressed literally as questions. We use automatically tagged images to organize our photos, and we use that same infrastructure to plan optimal routes for trips—even altering our routes on-the-fly in the face of changes in traffic. We are able to automatically detect tumors from x-rays, even those that trained doctors find difficult to see. We let computers finish our sentences as we type texts and use search engines, sometimes facilitating a subtle shift from prediction of our behavior to influence over our behavior. Often we take advantage of these services by using our phones (our phones!) to interpret a wide variety of spoken commands.

So, in some very important sense, AI already exists. It is not the AI of science fiction, neither benevolent intelligences working with humans as we traverse the galaxy, nor malevolent AI that seeks humanity’s destruction. Nonetheless, we are living every day with machines that make decisions that, if humans made them, we would attribute to intelligence. And the machines often make those decisions faster and better than humans would. Importantly, each of the examples we consider above is a distinctly human-centered problem. It is human-centered both in the sense that these systems are trying to solve problems that humans deal with every day—question answering, symptom evaluation, navigation—but also human-centered in the sense that humans have or currently perform some of those tasks. Presumably, these developments are all to the good. We are living up to the promise of technology that allows us to automate away work that is dirty, dangerous, or dull, freeing up human capital to be more productive, and, hopefully, for humans to be more fulfilled. The social and economic benefits are potentially immense. There are also some reasons for concern. Those who work in the field will tell you that very often they aren’t sure exactly how their algorithms reach the correct answer, only that they do. AI scientists describe these algorithms as “black box models.” The second concern is that sometimes those algorithms reach the wrong conclusion, and in a way that harms people and society. Artificial intelligence has all too often automated the biases of its programmers, or those baked into its data. As a result, AI products have already been caught making biased decisions in banking, hiring, health care and criminal justice. For example, according to the Marshall Project, almost every state uses some form of “risk assessment” at some stage in the criminal justice system. Risk assessments have existed in various forms for a century, but over the past two decades, they have spread through the American justice system, driven by advances in social science. The tools try to predict recidivism — repeat offending or breaking the rules of probation or parole — using statistical probabilities based on factors such as age, employment history, and prior criminal record. They are now used at some stage of the criminal justice process in nearly every state. Many court systems use the tools to guide decisions about which prisoners to release on parole, for example, and risk assessments are becoming increasingly popular as a way to help set bail for inmates awaiting trial.

This automated process relies on an algorithm in lieu of a judge’s discretion. As noted by Cathy O’Neil, author of Weapons of Math Destruction, the data used by these algorithms to build models are sometimes suspect. Worse, we treat the output as “objective” without understanding that the data are themselves not objective. In this particular case, we set out to predict recidivism as if that means the chance of committing a crime again when in fact we are predicting the chance of being arrested and convicted again. It does not take much imagination to see how being from a heavily policed area raises the chances of being arrested again, being convicted again, and in aggregate leads to even more policing of the same areas, creating a feedback loop. One can imagine similar issues with determining fit for a job, or creditworthiness, or even face recognition and automated driving. In computing, we call this garbage-in-garbage-out: an algorithm is only as good as its data. This saying is certainly true, and especially relevant for AI algorithms that learn based on the data they are given. Luckily, one way to address these issues is straightforward: to increase transparency. The kind of data the algorithm uses to build its model should be available. The decisions that such algorithms make should be inspectable. In other words, as we deploy these algorithms, each algorithm should be able to explain its output. “This applicant was assigned high risk because…” is more useful than, “This applicant was assigned high risk.” If algorithms are inspectable, their creators are then able to call in outside experts to inspect them. After all, those with the knowledge to design an artificial intelligence algorithm can’t be expected to also be experts in medicine, the law, criminal justice, or banking. And outside experts shouldn’t have to get a Ph.D. in computer science to understand what programmers are doing with their data and their theories. AI transparency allows for a much wider range of input into any given project. And when things go wrong, it shows exactly where and how. The idea of AI transparency is straightforward, but its implementation will be more complicated. First, the complexity of the algorithms makes it impractical for humans to inspect them manually. We will need tools that translate the complexity of AI algorithms into useable human-scaled insights. Second, researchers have demonstrated that the more transparent an AI is, the easier it is to hack. Or worse still, if the AI is a trade secret, the easier it is to replicate. Therefore, we will also need new tools to secure every part of the programming and training process from unwanted intruders. This does not mean that transparent AI is impossible, just that it presents a series of important technical challenges. But we must also recognize that

transparency isn’t the only measure we can and should be taking to make AI responsible. We also have the responsibility to consider the data sets that are used to train these algorithms. As shown in the earlier example about risk assessment for parolees, sometimes the data is skewed by the method that was used to collect it. This is a common problem in algorithms trained on social media data, to give another example. Sometimes, the data set simply doesn’t contain enough information about underrepresented groups to even recognize them as a group. If that is the case, the data set can be expanded to include more information about those groups. Alternatively, they can add another “learner” program to the AI that focuses on identifying those groups. This in and of itself presents a considerable challenge, however, because it suggests that the only way to make systems more responsible is to make them more complicated. To solve that problem, we need new concepts in computing theory to help us organize responsible AIs more efficiently. There is precedent for putting practice before theory; people wrote in code for thousands of years before the theory underlying modern public-key cryptography was laid out in the 1970s. These technical problems present some of the major research challenges in artificial intelligence today. The National Institute of Standards and Technology’s ongoing effort to create an AI risk management framework will need to incorporate these technical questions and others. There are, of course, human issues as well. Right now, about 66 percent of tech workers are white, and 20 percent are Asian. Roughly 75 percent are men. Now, I work in AI, and I am not alleging that my colleagues are racist or misogynist. I am pointing out, however, that people from a subset of the population often build products that affect everyone. And often, they don’t realize they’re missing valuable perspectives. In the long term, one of the key solutions to AI bias will be bringing a wider group of people into computing education, and into machine learning more specifically. We need to improve both the number and the diversity of people entering the field, starting from K-12 and extending to post-graduate work. One major obstacle is a lack of instructors at every level. In my own state, Georgia, only 35 percent of high schools that have AP programs offer AP Computer Science. Now, K-12 isn’t the only place for intervention, and programming is not the only job in artificial intelligence. In my own college, our DataWorks program trains unemployed adults to clean and integrate data sets for use in

Trustworthy AI: Managing the Risks of Artificial Intelligence

107

artificial intelligence projects. There are opportunities to open AI careers to more communities at every point in the pipeline. While technical solutions are important, as are diversity and equity, a larger culture change is also needed. Computing has long been an intellectual Wild West, where things changed so fast that the priority was always to find the next, better solution. Now, we have succeeded in finding solutions so good that they are entwined in nearly every area of our personal lives and communities. We have not as a field caught up to the reality of that responsibility. Unlike engineers or lawyers or medical professionals, we have not built responsibility for our actions into the structure of our field. We do of course have scholars specializing in ethical concerns. At Tech, that includes everything from autonomous robots in warfare to the relationship between software design and misinformation on social media. I am not simply talking about ethics, or bias, or privacy, however, but instead a larger sense that computer scientists are responsible for how their products can be used or even abused. Our philosophy must catch up to the reality of our influence.

In conclusion, I am excited by this hearing. Advances in AI are central to our economic and social future. The issues being raised here can be addressed with thoughtful support for robust funding in basic research in artificial intelligence—including research in AI transparency and new concepts in computing theory; support for AI education throughout the pipeline; and in developing standards for the responsible use of intelligent systems. These are all areas in which the funding power of the National Science Foundation and the National Institute of Standards and Technology can make a big difference. I thank you very much for your time and attention today. I look forward to working with you in your efforts to understand how we can best develop these technologies to create a future where we are partners with intelligent machines. Thank you. This concludes my testimony.

***

Dr. Charles Lee Isbell, Jr. received his B.S. in CS from Georgia Tech and his Ph.D. in CS from MIT. After four years at AT&T Labs/Research, he returned to Georgia Tech to join the faculty of the College of Computing. Charles' research interests are varied, but he is at heart a machine learning and
artificial intelligence researcher. His recent work centers on building autonomous agents who engage in life-long learning when in the presence of thousands of other intelligent agents, including humans. Being human-centric, he finds himself studying the effects of AI bias. He and his work have been featured in the popular media as well as in technical collections. Charles also pursues reform in computing education focusing on broadening participation and access. He is an elected fellow of AAAI, ACM, and the American Academy of Arts and Sciences. In 2019, he assumed the role of the John P. Imlay, Jr. Dean for the College. *** Chairwoman STEVENS. OK, Georgia Tech, you convinced me. I’m signing up for his class. Dr. ISBELL. Done. Chairwoman STEVENS. All right. With that, we’re going to hear from Mr. Crenshaw for 5 minutes. Thanks.

Testimony of Mr. Jordan Crenshaw, Vice President of the Chamber Technology Engagement Center, U.S. Chamber of Commerce

Mr. CRENSHAW. Thank you, Chair Stevens, Ranking Members Feenstra and Lucas, and Members of the Research and Technology Subcommittee. Good morning, and thank you. My name is Jordan Crenshaw, and I'm the vice president of the U.S. Chamber of Commerce's Technology Engagement Center. It's my pleasure to talk to you today about how we—business, government, and citizens—can work together to build trustworthy artificial intelligence. AI is changing the world as we know it. By 2030, AI will have a $16 trillion impact on the global economy. But from a practical level, what does that mean? AI is helping forecasters and emergency management better track the intensification of hurricanes and chart out evacuation and emergency preparedness. It's allowing researchers to more easily pinpoint virus mutations and tailor vaccines for new variants. It's also bolstering our cyber defenses against an evolving digital threat landscape. And finally, AI has the potential to fill the gaps where we have worker shortages, like patient monitoring where
we have nursing shortages, and help tackle supply chain issues where we have a lack of available truckers. The United States is not operating in a vacuum. Its strategic competitors also realize the benefits of this crucial technology. For example, prior to the invasion of Ukraine, China and Russia agreed to cooperate on developing emerging technologies, specifically noting artificial intelligence. When it comes to AI, we are in a race we must win. AI is here now, and it’s not going away. We cannot ignore it, and we cannot afford to sit on the sidelines and allow those who do not share our democratic values to set the standard for the world. For the research and deployment of AI to be successful, Americans must have trust in the technology. And while AI has many benefits, as I previously mentioned, in the wrong hands like those of our adversaries, there could be harms. Americans are united in the belief that we must beat our competitors as well. In fact, according to polling by the U.S. Chamber of Commerce, 85 percent of Americans believe the United States should lead in AI, and nearly that same number believes that we are best positioned as a nation to develop those ethical standards for its use. We agree. It’s why the Chamber earlier this year established its Commission on AI Competitiveness, Inclusion, and Innovation, led by your former congressional colleagues, Representatives John Delaney and Mike Ferguson, and it’s comprised of experts in business, academia, and civil society. The Commission has been tasked with developing policy recommendations in three core areas: trustworthiness, work force preparation, and international competitiveness. Our Commission held field hearings in Austin, Silicon Valley, Cleveland, London, and here in D.C. And we’ve heard from a variety of stakeholders and look forward to presenting you with our recommendations early next year. In the meantime, while we wait for the Commission to finalize its report, we offer the following observations about what it will take to maintain trustworthy AI leadership. The Federal Government has a significant role to play in conducting fundamental research in trustworthy AI. The Chamber was pleased to see passage of the CHIPS and Science Act and hopes to see the necessary appropriations to carry out the science provisions. We encourage continued investment in STEM (science, technology, engineering, and mathematics) education. We need a trained, skilled, and diverse work force that can bring together multiple voices for coding and developing systems. AI is only as good, though, as the data it uses. That is why it is key that both government and the private sector team up to ensure there is quality data

for more accurate and trustworthy AI. Governments should prioritize improving access to its own data and models and ways that respect individual privacy. At the same time, while we talk about privacy, as Congress looks to address these types of issues, it’s important that we look at issues to determine whether or not we inhibit the collection of sensitive data and other types of data that could inhibit deploying trustworthy AI systems. Fourth, we need to increase widespread access to shared computing resources. However, many small startups and academic institutions lack sufficient computing resources to help develop solutions to artificial intelligence. That’s why Congress took the critical step of establishing the Research—passing the Resource Task Force Act of 2020. Now the National Science Foundation and the White House’s Office of Science and Technology Policy should fully implement the law and expeditiously develop a roadmap to unlock AI innovation across multiple stakeholders. Finally, we also are encouraged and are thankful for the work by NIST in its development of the AI Risk Management Framework, which is a consensus-driven, cross-sector, and voluntary framework to leverage best practices. These recommendations are only the beginning. And I thank you for your time to address how the business community can partner with you to maintain trustworthy AI leadership. We thank you for your leadership, and I look forward to your questions. [The prepared statement of Mr. Crenshaw follows:]

Before the U.S. House Research And Technology Subcommittee, Hearing on “Trustworthy AI: Managing the Risks of Artificial Intelligence,” Testimony of Jordan Crenshaw, Vice President, C_TEC, U.S. Chamber of Commerce, September 29, 2022

Dear Chairman Stevens, Ranking Member Feenstra, and distinguished Research and Technology Committee members. First, thank you for your invitation to come before you today to testify. My name is Jordan Crenshaw, and I am honored to serve as the Vice President of the U.S. Chamber Technology Engagement Center (C_TEC) at the U.S. Chamber of Commerce. C_TEC is the technology hub within the U.S. Chamber, and our goal is to promote the role of technology in our economy and advocate for rational
policy solutions that drive economic growth, spur innovation, and create jobs. Today's hearing titled "Trustworthy AI: Managing the Risks of Artificial Intelligence" is a timely and critical discussion, and the Chamber appreciates the opportunity to participate. The world has quickly entered its fourth industrial revolution, in which the use of technology and artificial intelligence ("AI") is helping propel humanity. We are witnessing the benefits of using AI daily, from its value in adapting vaccines to tailor them to new variants to increasing patient safety during procedures like labor and delivery.63 Artificial intelligence is also rapidly changing how businesses operate. This emerging technology is a tremendous force for good in its ability to secure our networks, expand opportunities for the underserved, and make our communities safer and more prosperous.64

America is currently in a race with countries like China to lead in artificial intelligence.65 America's competitors may not respect the same values as our allies, such as individual liberties, privacy, and the rule of law. While the development and deployment of AI have become an essential part of facilitating innovation, this innovation will never reach its full potential and enable the United States to compete without trust. The business community understands that fostering this trust in AI technologies is essential to advance its responsible development, deployment, and use. This has been a core understanding of the U.S. Chamber, as it is the first principle within the 2019 "U.S. Chamber's Artificial Intelligence Principles": Trustworthy AI encompasses values such as transparency, explainability, fairness, and accountability. The speed and complexity of technological change, however, mean that governments alone cannot promote trustworthy AI. The Chamber believes that governments must partner with the private sector, academia, and civil society when addressing issues of public concern associated with AI. We recognize and commend existing partnerships that have formed in the AI community to address these challenges, including protecting against harmful biases, ensuring democratic values, and respecting human rights. Finally, any governance frameworks should be flexible and driven by a transparent, voluntary, and multi-stakeholder process.66

63 https://www.5newsonline.com/article/news/health/northwest-health-introducing-newtechnology-to-enhance-maternal-and-fetal-safety/527-9c173d18-c56e-457b-831762ebaae93558.
64 https://americaninnovators.com/research/data-for-good-promoting-safety-health-andinclusion/.
65 https://www.washingtonpost.com/opinions/2022/09/13/artificial-intelligence-ai-high-techrace-with-china/.

AI also brings a unique set of challenges that should be addressed so that concerns over its risks do not dampen innovation and to help ensure the United States can lead globally in trustworthy AI. The U.S. Chamber of Commerce’s Technology Engagement Center (C_TEC) shares the perspective with many of the leading government and industry voices, including the National Security Commission on Artificial Intelligence (NSCAI)67, the National Institute of Standards and Technology (NIST)68, that government policy to advance the ethical development of AI-based systems, sometimes called “responsible” or “trustworthy” AI, can enable future innovation and help the United States to be the global leader in AI. This is why we have prioritized the need to build public trust in AI through our continued efforts. The U.S. Chamber earlier this year launched its Artificial Intelligence (AI) Commission on Competition, Inclusion, and Innovation to advance U.S. leadership in using and regulating AI technology.69 The Commission, led by co-chairs former Congressmen John Delaney and Mike Ferguson, is composed of representatives from industry, academia, and civil society to provide independent, bipartisan recommendations to aid policymakers with guidance on artificial intelligence policies as it relates to regulation, international research, development competitiveness, and future jobs. Over the past few months, the Commission has heard oral testimony from 87 expert witnesses70 over five separate field hearings. The Commission heard from individuals such as Jacob Snow, Staff Attorney for the Technology & Civil Liberties Program at the ACLU of Northern California. In his testimony, he told the Commission that the critical discussions on AI are “not narrow technical questions about how to design a product. They are social questions about what happens when a product is deployed to a society, and the consequences of that deployment on people’s lives.”71

66 https://www.uschamber.com/technology/us-chamber-releases-artificial-intelligenceprinciples.
67 https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf.
68 https://www.nist.gov/artificial-intelligence.
69 www.americaninnovators.com/aicommission.
70 https://americaninnovators.com/aicommission/.
71 https://americaninnovators.com/news/ai-for-all-experts-weigh-in-on-expanding-ais-sharedprosperity-and-reducing-potential-harms/.

Doug Bloch, Political Director at Teamsters Joint Council 7, referenced his time serving on Governor Newsom’s Future of Work Commission: “I became convinced that all the talk of the robot apocalypse and robots coming to take workers’ jobs was a lot of hyperbole. I think the bigger threat to the workers I represent is the robots will come and supervise through algorithms and artificial intelligence.”72 Miriam Vogel, President and CEO of EqualAI and Chair of NAIAC, also addressed the Commission. She stated, "I would argue that it’s not that we need to be a leader, it’s that we need to maintain our leadership because our brand is trust.” The Commission also received written feedback from stakeholders answering numerous questions that the Commission has posed in three separate requests for information (RFI), which asked questions about issues ranging from defining AI to balancing fairness and innovation73 to AI’s impact on the workforce.74 These requests for information outline many of the fundamental questions that we look to address in the Commission’s final recommendations, which will help government officials, agencies, and the business community. The Commission is diligently working on its recommendations and will look to release them early next year. While the Chamber is diligently taking a leading role within the business community to address many of the concerns which continue to be barriers to public trust and consumer confidence in the technology, my testimony before you today will look to address the following underlying questions:

• What are the opportunities for the federal government and industry to work together to develop trustworthy AI?
• How are different industry sectors currently mitigating risks that arise from AI?
• How can the United States encourage more organizations to think critically about risks that arise from AI systems, including ways in which we prioritize trustworthy AI from the earliest stages of development of new systems?
• How can the federal government strengthen its role in the development and responsible deployment of trustworthy AI systems?

72 https://americaninnovators.com/news/ai-for-all-experts-weigh-in-on-expanding-ais-sharedprosperity-and-reducing-potential-harms/.
73 https://americaninnovators.com/wp-content/uploads/2022/04/CTEC_RFI-AIcommission_2.pdf?utm_source=sfmc&utm_medium=email&utm_campaign=&utm_term=RFI+3++Workforce+-+20220518&utm_content=5/19/2022.
74 https://uschambermx.iad1.qualtrics.com/jfe/form/SV_cMw5ieLrlsFwUPs.

Opportunities for the Federal Government and Industry to Work Together to Develop Trustworthy AI

Congress Needs to Pass a Preemptive National Data Privacy Law
Artificial intelligence relies upon the data it is provided. Particularly sensitive data can be used to determine whether AI systems operate fairly. Many underlying concerns regarding the use of artificial intelligence will need to be reassessed should a national data privacy bill be signed into law and new, well-defined rules be put into place. The U.S. Chamber has been at the forefront of advocating for a true national privacy standard that gives strong data protections to all Americans equally. For this reason, the Chamber was the first trade association after the passage of the California Consumer Privacy Act to formalize and propose privacy principles and model legislation.75 Most central to a national privacy law is the need for true preemption that creates a national standard. A patchwork of fifty different state laws76 would eliminate the certainty required for data subjects and businesses in compliance and operations. According to a recent report from ITI, a fifty-state patchwork of comprehensive privacy laws could cost the economy $1 trillion and $200 million for small businesses.77 Recently, the Chamber released findings that nearly 25 percent of small businesses plan to use artificial intelligence and 80 percent of these businesses believe limiting access to data would harm their operations.78 A state patchwork exacerbates the difficulties these businesses face. Recently, the House Energy and Commerce Committee reported the American Data Privacy and Protection Act ("ADPPA").79 Although the ADPPA has many laudable consumer protections, like the right to delete, opt out of targeted advertising, as well as data correction and access, there are significant concerns that it could create a new national patchwork and cut off access to data which could improve AI fairness.80 For example, the bill would only preempt what is covered by the Act and would empower the FTC to bar the collection and use of data. We encourage stakeholders and Congress to work together to pass a truly preemptive privacy law that enables the use of data to improve AI—not inhibit the deployment of AI.

75 https://www.uschamber.com/technology/data-privacy/the-10-principles-of-data-privacy.
76 https://americaninnovators.com/wp-content/uploads/2022/01/CTEC_Privacy2022_HeatMap1024x791-1.pdf.
77 https://itif.org/publications/2022/01/24/50-state-patchwork-privacy-laws-could-cost-1trillion-more-single-federal/.
78 https://americaninnovators.com/wp-content/uploads/2022/08/Empowering-Small-BusinessThe-Impact-of-Technology-on-U.S.-Small-Business.pdf.
79 https://docs.house.gov/meetings/IF/IF00/20220720/115041/BILLS-117-8152-P000034Amdt-1.pdf.
80 https://www.uschamber.com/technology/data-privacy/what-should-and-should-not-beincluded-in-a-national-privacy-bill.

Support for Alternative Regulatory Pathways Such as Voluntary Consensus Standards
New regulation is not always the answer for emerging or disruptive technologies. Non-regulatory approaches can often serve as effective tools to increase safety and build trust, and allow for flexibility and innovation. This is particularly applicable to emerging technologies such as artificial intelligence, as the technology continues to rapidly evolve. This is why the Chamber supports the National Institute of Standards and Technology's (NIST) work in drafting the Artificial Intelligence Risk Management Framework (AI RMF). The AI RMF is meant to be a stakeholder-driven framework, which is "intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems." Another example of non-regulatory tools is the National Highway Traffic Safety Administration's ("NHTSA") Voluntary Safety Self-Assessments ("VSSA"). More than two dozen AV developers have submitted a VSSA to NHTSA, which have provided essential and valuable information to the public and NHTSA on how developers are addressing safety concerns arising from AVs. The flexibility provided by VSSAs, complemented by existing regulatory mechanisms, provides significant transparency into the activities of developers without compromising safety. Voluntary tools provide significant opportunities for consumers, businesses, and the government to work together to address many of the underlying concerns with emerging technology while at the same time providing the necessary flexibility so that standards do not stifle innovation. These standards are pivotal to the United States' ability to maintain leadership in emerging technology and are critical to ensuring our global economic competitiveness in this cutting-edge technology.

Stakeholder Driven Engagement
The U.S. Chamber of Commerce stands by and is ready to assist the government in any opportunity to improve consumer confidence and trust in AI systems. We have always viewed trust as a partnership, and only when government and industry work side by side can that trust be built. The opportunities to facilitate this work are great, but there are essential steps that industry and government can take today. We asked the American public earlier this year about their perception of artificial intelligence. The polling results were very eye-opening, as there was a significant correlation between the trust and acceptance of AI and an individual's knowledge and understanding of the technology.81 To build the necessary consumer confidence to allow artificial intelligence to grow for the betterment of all, every opportunity must be taken for industry and governments to work together in educating stakeholders about the technology. This is why we appreciate the National Institute of Standards and Technology's (NIST) work in drafting the Artificial Intelligence Risk Management Framework (AI RMF). The AI RMF is meant to be a stakeholder-driven framework. NIST's continued engagement with all stakeholders in the development of the framework is important to develop trust between government and industry. To date, NIST actions include two workshops, with a third workshop scheduled for next month. They have also included three engagement opportunities for stakeholders to provide written feedback on the development, direction, and critique of the AI RMF. This engagement by NIST has allowed for the development of trust between industry and the federal government. While we applaud NIST and its action on the RMF, it's prudent to highlight that NIST is only one entity within the federal government and that other agencies and regulators should look to the model.

Awareness of the Benefits of Artificial Intelligence
At the same time, it is critical that federal agencies do not seek to prescriptively regulate technologies without first establishing a strong public record. The business community has significant concerns about the Federal Trade Commission undertaking rulemaking on privacy, security, and algorithms, asking whether it should make economy-wide rules on algorithmic decision systems.82 First and foremost, the FTC should allow NIST's process to conclude and let Congress speak clearly about how it wants to make policy in artificial intelligence before undertaking general rulemaking. "NIST contributes to the research, standards, and data required to realize the full promise of artificial intelligence (AI) as a tool that will enable American innovation, enhance economic security and improve our quality of life."83 Therefore, we believe it is essential for NIST to be able to finish the RMF to provide the necessary robust record within the federal government. This is vital, as government agencies such as the FTC ask technical questions.

81 https://americaninnovators.com/wp-content/uploads/2022/01/CTEC-US-Outlook-on-AIDetailed-Analysis.pdf.

Awareness of the Benefits of Artificial Intelligence
Another excellent opportunity for industry and government to work together is highlighting the benefits and efficiencies of the use of technology within the government. The government's utilization of AI has the ability to lead to medical breakthroughs84 and to help predict risk for housing and food insecurities.85 AI is helping our government provide better assistance to the American public, and is becoming a vital tool. The development of these resources does not come in a vacuum, and the majority of these tools are developed in partnership with industry. Highlighting these workstreams and the benefits that they deliver for the American public can assist in fostering trust in technology, as well as build overall consumer confidence in technology use outside of government. However, this would also require a foundational change in how our government works, which includes addressing the "legacy culture" that has stifled the necessary investment and buildout of 21st-century technology solutions and harnessing data analytics. Congress's passage of the Modernizing Government Technology Act during the 115th Congress was an essential first step in rectifying decades of needed investment. However, this legislation, while important, will not alone fix the problem, and we would ask Congress to continue to do necessary oversight within the federal government IT sector so that the essential and sustained investments can be made.

82 https://thehill.com/opinion/technology/3621149-the-ftc-needs-a-reminder-that-its-aregulator-not-a-legislator/.
83 https://www.nist.gov/artificial-intelligence.
84 https://www2.deloitte.com/content/dam/Deloitte/us/Documents/deloitte-analytics/us-aiinstitute-goverment-public-services-dossier.pdf.
85 https://www2.deloitte.com/content/dam/Deloitte/us/Documents/deloitte-analytics/us-aiinstitute-goverment-public-services-dossier.pdf.

How Are Different Sectors Adopting Governance Models and Other Strategies to Mitigate Risks that Arise from AI Systems?
AI is a tool and does not exist in a legal vacuum. Policymakers should be mindful that activities performed and decisions aided by AI are often already accountable under existing laws. Where new public policy considerations arise, governments should consider maintaining a sector-specific approach while removing or modifying those regulations that act as a barrier to AI's development, deployment, and use. In addition, governments should avoid creating a patchwork of AI policies at the subnational level and should coordinate across governments to advance sound and interoperable practices. It's also important to highlight that there is a market incentive for companies to address risks associated with the use of artificial intelligence. Companies, to begin with, are very risk-averse when it comes to potential legal liabilities associated with their use of the technology. This is why we applaud NIST's development of the "Playbook," which is "designed to inform AI actors and make the AI RMF more usable."86 We believe the Playbook will provide a great resource for the business community and industry in helping them evaluate risk. Every sector will have different risks associated with the use of AI, which is why it is important to maintain a sector-specific approach. However, we believe it's important for policymakers to do necessary oversight to close current legal gaps. For this reason, we would ask policymakers to do necessary oversight of the American COMPETE Act, which requires the U.S. Department of Commerce and Federal Trade Commission ("FTC") to look at different emerging technologies and to conduct a thorough analysis of current standards, guidelines, and policies regarding AI that are implemented by each government agency, as well as industry-based bodies. This important assessment would provide lawmakers and industry with a comprehensive and baseline understanding of relevant regulations that are already in place.

86 https://www.nist.gov/itl/ai-risk-management-framework/nist-ai-rmf-playbook-faqs.
How Should the United States Encourage More Organizations to Think Critically about Risks that Arise from AI Systems, Including by Prioritizing Trustworthy AI from the Earliest Stages of Development of New Systems?
The United States has a great opportunity through the development of the NIST AI RMF to provide organizations with a key set of documents that would assist in their ability to think critically about risk. The adaptable voluntary framework would assist companies from big to small in assessing the risk with which they are comfortable and provide guidance on ways to think critically through potential negative externalities which may be associated with its use. That being said, the framework can only assist if it is in the hands of those creating and developing AI and those who oversee its use. For this reason, we believe that NIST and the Department of Commerce should look at ways in which they can reach all different demographics and stakeholders to make them aware of these resources. Furthermore, we believe further effort should be made by the government to make connections to those small and medium-size businesses that usually lack the time and resources to be looking for things like the RMF.

What Recommendations Do You Have for How the Federal Government Can Strengthen Its Role for the Development and Responsible Deployment of Trustworthy AI Systems?
The federal government has the ability to take a leading role in strengthening the development and deployment of artificial intelligence. We believe that the following recommendations should be acted on now. First, we would advise the federal government to conduct fundamental research in trustworthy AI: The federal government has played a significant role in building the foundation of emerging technologies through conducting fundamental research. AI is no different. In a recent report, the U.S. Chamber Technology Engagement Center and the Deloitte AI Institute87 surveyed business leaders across the United States; 70% of respondents indicated support for government investment in fundamental AI research. The Chamber believes that the CHIPS and Science Act was a positive step in the necessary investment, as the legislation authorizes $9 billion for the National Institute of Standards and Technology (NIST) for research and development and advancing standards for "industries of the future," which includes artificial intelligence. Furthermore, we have been a strong advocate for the National

87 https://www.uschamber.com/technology/investing-trustworthy-ai.
Artificial Intelligence Initiative Act, which was led by Chairwoman Eddie Bernice Johnson and Ranking Member Lucas and which established the National AI Initiative Office (NAIIO) to coordinate the Federal government's activities, including AI research, development, demonstration, and education and workforce development.88 We would strongly advise Members to fully appropriate funding for these efforts. Second, we encourage continued investment in Science, Technology, Engineering, and Math (STEM) education. The U.S. Chamber earlier this year polled the American public on their perception of artificial intelligence. The findings were clear: the more the public understands the technology, the more comfortable they become with its potential role in society. We see education as one of the keys to bolstering AI acceptance and enthusiasm, as a lack of understanding of AI is the leading indicator for a push-back against AI adoption.89 The Chamber strongly supported the CHIPS and Science Act, which made many of these critical investments, including $200 million over five years to the National Science Foundation (NSF) for domestic workforce build-out to develop and manufacture chips, and also $13 billion to the National Science Foundation for AI scholarship-for-service. However, the authorization within the legislation is just the start; we now ask Congress to appropriate the funding for these important investments.

88 https://www.ai.gov/naiio/.
89 https://americaninnovators.com/wp-content/uploads/2022/01/CTEC-US-Outlook-on-AIDetailed-Analysis.pdf.
Third, the government should prioritize improving access to government data and models: High-quality data is the lifeblood of developing new AI applications and tools, and poor data quality can heighten risks. Governments at all levels possess a significant amount of data that could be used to improve the training of AI systems and create novel applications. When C_TEC asked leading industry experts about the importance of government data, 61% of respondents agreed that access to government data and models is important. For this reason, we would encourage policymakers to build upon the success of the OPEN Government Data Act by providing additional funding and oversight to allow for expanding the scope of the law to include non-sensitive government models as well as datasets at the state and local levels. Fourth, the government should increase widespread access to shared computing resources: In addition to high-quality data, the development of AI applications requires significant computing capacity. However, many small startups and academic institutions lack sufficient computing resources, which in turn prevents many stakeholders from fully accessing AI's potential. When we asked stakeholders within the business community about the importance of shared computing capacity, 42% of respondents supported encouraging shared computing resources to develop and train new AI models. Congress took a critical first step by enacting the National AI Research Resource Task Force Act of 2020. Now, the National Science Foundation and the White House's Office of Science and Technology Policy should fully implement the law and expeditiously develop a roadmap to unlock AI innovation across all stakeholders. Fifth, the government should enable open source tools and frameworks: Ensuring the development of trustworthy AI will require significant collaboration between government, industry, academia, and other relevant stakeholders. One key method to facilitate collaboration is through encouraging the use of open source tools and frameworks to share best practices and approaches to trustworthy AI. An example of how this works in practice is the National Institute of Standards and Technology's (NIST) AI Risk Management Framework (RMF), which is intended to be a consensus-driven, cross-sector, and voluntary framework, akin to NIST's existing Cybersecurity Framework, which stakeholders can leverage as a best practice to mitigate risks posed by AI applications. Policymakers should recognize the importance of these types of approaches and continue to support their development and implementation.

Conclusion
AI leadership is essential to global economic leadership in the 21st century. According to one study, AI will have a $13 trillion impact on the global economy by 2030.90 Through the right policies, the federal government can play a critical role in incentivizing the adoption of trustworthy AI applications. The United States has an enormous opportunity to transform the economy and society in positive ways by leading in AI innovation as other economies contemplate their approaches to trustworthy AI. U.S. policymakers can pursue a wide range of options to advance trustworthy AI domestically and empower the United States to maintain global competitiveness in this critical technology sector. The United States must be the global leader for AI trustworthiness for the technology to develop in a manner that is balanced and takes into account basic values and ethics. The United States can only be a global leader if the administration and Congress work together on a bipartisan basis. We are in a race we can't afford to lose.

Jordan Crenshaw serves as Vice President and leads the day-to-day operations at the U.S. Chamber of Commerce's Technology Engagement Center. Crenshaw also directly manages the Chamber's privacy working group, which is comprised of nearly 300 companies and trade associations and which developed model privacy legislation and principles. Prior to becoming vice president of C_TEC, he led the Chamber's Telecommunications and E-Commerce Policy Committee, which analyzes federal privacy, cloud computing, broadband, internet, e-commerce, and broadcast policies that impact U.S. businesses. Before joining the Chamber, Crenshaw served as an attorney focusing on environmental issues and analysis of consumer privacy laws. Crenshaw also worked at McGuireWoods, LLP, assisting with discovery issues in environmental nuisance, TCPA, and other civil litigation. Crenshaw also served Virginia Senate leadership, the Office of the Attorney General of Virginia, the U.S. Department of Labor Office of Administrative Law Judges, and the National Right to Work Defense Foundation. Crenshaw earned both his undergraduate degree and Juris Doctor from the College of William and Mary. He is licensed to practice law in Virginia and is a Certified Information Privacy Professional (CIPP/US). He and his wife Molly live in Virginia.

90 https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-aifrontier-modeling-the-impact-of-ai-on-the-world-economy.

Chairwoman STEVENS. Thank you. With that, Ms. Singh, yes.

Testimony of Ms. Navrina Singh, Founder and Chief Executive Officer, Credo AI

Ms. SINGH. Madam Chair, Ranking Members Feenstra and Lucas, and Members of the Subcommittee, thank you for the opportunity to testify today and to be part of this distinguished panel of witnesses. My name is Navrina Singh. I'm the Founder and CEO of Credo AI, a venture-backed startup. In addition, I'm a member of the National AI Advisory Committee that is advising President Biden as part of the National AI Initiative. Trustworthy artificial intelligence is a topic that is deeply personal to me. Growing up in India as a girl who aspired to be an engineer, I learned early on that I faced an uphill battle for no reason other than my gender. Part of my passion for the subject and the main reason I founded Credo AI in March 2020 is because I experienced firsthand what is at stake. While AI is an exciting and ultimately very useful technology, unless we create a culture of accountability, transparency, and governance around it, we risk unchecked growth and algorithms that may unintentionally encode the same types of societal ceilings and perceptions that I experienced as a girl in India and that many others still experience today. Members of the Subcommittee know very well the power and potential of AI when used responsibly. While it is a transformational technology that is evolving rapidly, I realize that there are different points of view on its perceived advantages. But one thing we can all agree on is AI is not going away, which is why we owe it to ourselves and to the world that our children will inherit to ensure robust compliance and governance structures to keep pace with AI development. As the Subcommittee studies the question of how to manage AI risk and build trustworthy AI, we think three key considerations merit special attention. First, I want to focus on the full AI lifecycle, from design to development, to testing and validation, to production and use. That means building AI systems responsibly and continuously ensuring they are fit for purpose, fair, transparent, safe and secure, privacy-preserving, and auditable. Second, context is paramount. We believe that achieving trustworthy AI depends on a shared understanding that governance and oversight of AI is
industry-specific, application-specific, model-specific, and data-specific to ensure that it is fit for purpose. This necessitates a collaborative approach to metric alignment and associated assessments. Third, transparency reporting and system assessments are critical for responsible AI governance. Reporting requirements that promote and incentivize public disclosure of AI system behaviors act as a key driver for the establishment of standards and benchmarks. And fundamental to this is access to compliant and comprehensive data for assessments. For these reasons, we at Credo AI advocate for context-based, full AI lifecycle governance of AI systems with reporting requirements that are specific, regular, and transparent. If we truly want to be a global leader in AI, then our focus should be on building responsible technology aligned with our societal values. Responsible AI is also a competitive advantage. It allows companies to deploy AI at scale with confidence, and this transparency promotes trust with consumers in this technology. Government has a critical role to play here, working together through public-private partnerships to ensure the right set of standards exist to further innovation in the space. And we urge policymakers and standard-setting bodies to prioritize establishing context-focused standards and benchmarks that are globally interoperable and can help eliminate some of the guesswork. My 8-year-old daughter told me recently that she wants to be an inventor and a social media influencer when she grows up. While I'm grateful that in this country my daughter will have the opportunity to follow her dreams, we owe it to her and the generations that will follow to ensure that we build AI that is developed responsibly and ethically. Thank you for the opportunity to appear before you, and I look forward to your questions. [The prepared statement of Ms. Singh follows:]

Prepared Testimony of Navrina Singh, Founder and CEO, Credo AI, before the House Committee on Science, Space and Technology, Subcommittee on Research and Technology
HEARING DATE/TIME: SEPTEMBER 29, 2022, 10:30 A.M. EST
HEARING TITLE: TRUSTWORTHY AI: MANAGING THE RISKS OF ARTIFICIAL INTELLIGENCE

Introduction
Madam Chair, Ranking Member Feenstra, and Members of the Subcommittee on Research and Technology, thank you for the opportunity to testify today and to be a part of this distinguished panel of witnesses. My name is Navrina Singh, and I am the Founder and Chief Executive Officer of Credo AI. A “credo” is a statement or system of beliefs or principles to guide actions. I founded Credo AI in March 2020 with one key goal in mind: to enable organizations to deliver Responsible AI (RAI) at scale. RAI includes assessment, reporting, and governance to the highest of ethical standards in order to ensure a fair, transparent, compliant, and auditable environment for the development and use of artificial intelligence (AI). Credo AI is a software company, and our core product is the Credo AI Responsible AI Governance Platform™. Our platform is designed to help organizations consistently translate principles into actionable metrics, assessments and benchmarks throughout the entire AI lifecycle. We recognize that enterprises are at different levels of maturity in their development and use of AI. Our mission at Credo AI is to realize comprehensive Responsible AI governance by providing software tools to enterprises wherever they are in their AI Governance journey.

What Is Responsible AI?
At Credo AI, we define the phrase “Responsible AI” as AI that is human-centered. That means that AI systems need to be performant, fair, transparent, safe and secure, privacy-preserving, and auditable. These tenets are aligned with the ways that many other organizations, regulatory bodies, and standard-setting bodies define the phrase “Responsible AI.” Our customers aren't only concerned with making sure their systems are accurate or performant; they want to know if their systems are fair, transparent, and robust, and they want to know how regulators are defining these parameters. Measuring and managing each of these tenets is very complex and context-dependent—each must be aligned, based on the context of their use, with the values both of society and the organization developing or using the AI. Credo AI’s RAI Governance Platform is built to help organizations map, measure, manage, and mitigate AI risk and compliance for all their AI use cases. The Platform provides organizations with context-driven requirements for their AI use cases in the form of “Policy Packs.” Policy Packs provide
specific technical and process requirements that an AI system must meet, based on regulations, laws, standards, frameworks, guidelines, an organization’s internal guardrails, and industry best practices. Our platform connects to our open source Responsible AI assessment framework, Credo AI Lens, which can be used to assess machine learning (ML) models and datasets based on the requirements coming from Policy Packs, allowing our customers to programmatically generate governance artifacts like model cards, assessment reports, transparency reports, or audit reports. The platform standardizes governance activities, promotes multidisciplinary collaboration among technical and business stakeholders, and reduces the burden of governance on technical teams, making it easier for organizations to govern their AI systems more effectively and gain confidence in their AI use.
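As a loose, hypothetical sketch of what checking an assessment against policy-pack-style requirements and emitting a governance artifact can look like in code, the snippet below uses invented requirement names, thresholds, metric values, and a made-up model name; it is not the Credo AI Lens or Platform API.

    # Hedged sketch: compare model assessment results against policy-pack-style
    # requirements and emit a small JSON governance artifact.
    # Requirement names, thresholds, metric values, and the model name are hypothetical.
    import json

    policy_pack = {
        "min_accuracy": 0.80,             # performance requirement
        "max_group_accuracy_gap": 0.05,   # fairness requirement (parity of performance)
    }
    assessment = {                        # would normally come from evaluating a real model
        "accuracy": 0.86,
        "group_accuracy_gap": 0.07,
    }

    report = {
        "model": "loan_approval_v3 (example name)",
        "checks": {
            "min_accuracy": assessment["accuracy"] >= policy_pack["min_accuracy"],
            "max_group_accuracy_gap":
                assessment["group_accuracy_gap"] <= policy_pack["max_group_accuracy_gap"],
        },
    }
    report["compliant"] = all(report["checks"].values())
    print(json.dumps(report, indent=2))   # artifact a reviewer could inspect or archive

In practice, the assessment values would be produced by evaluating the model and dataset, and a real governance artifact would typically carry far more context (versions, data provenance, reviewer sign-off) than this toy report.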

How to Create an Environment that Fosters RAI
AI is a transformative technology that is rapidly evolving. There is a significant opportunity to encourage the development of trustworthy technology and set effective policy, and as the Subcommittee studies this important issue, Credo AI respectfully offers the following key points for your consideration:

• RAI Requires a Full Lifecycle Approach: At Credo AI, we believe managing risk is only one part of delivering on the promise of Responsible AI—it demands a full lifecycle approach. AI systems cannot be considered "responsible” based on one point-in-time snapshot, but instead must be continuously evaluated for responsibility, and transparently reported on, throughout the entire AI lifecycle, from design, to development, to testing and validation, to production and use.
• There Is No One-Size-Fits-All Approach to AI Governance: We believe that achieving trustworthy AI depends on a shared understanding that AI is industry specific, application specific, data specific and context driven. There is no one-size-fits-all approach to “what good looks like” for most AI use cases. For example, there is no single definition of algorithmic “fairness,” because the concept of fairness is incredibly context-dependent. Similarly, when considering what metric or measures to use for the performance of an AI system, assessors should be able to select from a wide variety of different metrics that take into account use case context, model type, and data type. The organization building the AI system should be consulted about “acceptable” performance metrics. This requires a collaborative approach to assessments, and we advocate for context-based tests for AI systems with reporting requirements that are specific, regular, and transparent.
• Transparency Reporting and System Assessments Can Deliver Trustworthy and Accurate AI: The importance of transparency reporting and system assessments cannot be overstated as a critical foundation for RAI governance for all organizations. Reporting allows policymakers to start to evaluate different approaches, and potentially opens the door for benchmarking—reporting is the step that gets us to standards that can be enforced. We have seen firsthand how comprehensive and accurate assessments of AI applications and the associated models and datasets, coupled with transparency and disclosure reporting, encourage responsible practices to be cultivated, engineered, and managed throughout the AI development life cycle. Fundamental to this is access to compliant and comprehensive data for assessments.

Companies Are Seeking Guidance
In our experience, organizations understand that Responsible AI is a competitive advantage for them in this age of AI. Organizations know there is a need for RAI governance, and welcome a collaborative approach to developing it. The notion that AI regulation will cause U.S. companies to offshore or cause AI to stagnate is a false premise. Based on our experience in the field working with companies that develop and deploy AI, we repeatedly hear a desire to have those systems work well in a compliant, safe, fair and auditable fashion. This leads to an important synergy: the more that policymakers can do to help companies understand how to develop trustworthy systems, the easier it will be for those companies to maximize the value of those systems. Thoughtful policymaking and governance via public-private partnership can create conditions for innovations in AI for these companies.

Key Challenges to Overcome in the Development and Use of Responsible AI
While there is reason for optimism, there is much work to be done. Credo AI has experience working with customers across industries, and we have observed that they are all working to set up processes to foster RAI. The key challenges that we have observed and that we hope policymakers will consider when it comes to more effectively promoting the responsible development and use of AI include:

• Standards and benchmarks for RAI are still emerging. We urge policymakers and standard-setting bodies to prioritize establishing context-focused standards and benchmarks—that are globally interoperable—that can help take some of the guesswork out of compliance with AI regulations. While many emerging regulations set “fairness,” “transparency,” and other RAI dimensions as key requirements for compliance, there are not yet clear standards or benchmarks for what it means for an AI system to be “fair” or “transparent.” That is because there are many ways to define these terms. Without clear standards and benchmarks, organizations are left having to develop and justify their own measures for different technical dimensions of their AI systems. Standards and benchmarks should also try to account for the challenges of operationalizing such requirements and frameworks depending on the size and reach of the organization. Expecting a small or mid-sized business to operationalize new standards as quickly as major multinational companies would present its own challenges.
• AI regulations must include reporting requirements to foster transparency and drive towards standards. We urge policymakers to establish requirements that mandate disclosures and transparency reporting around the procurement, development, and use of AI. Because of the lack of standards today, many organizations are reluctant to share results about the behavior of their AI systems externally, because they have no idea how their results might compare with those of their competitors, or whether they are “good” or “bad” for external stakeholders. We are strong supporters, therefore, of reporting requirements that promote and incentivize public disclosure of AI system behavior and operation as a key driver of the establishment of standards and benchmarks.

Context Is Critical: Metrics for Each Tenet of RAI Vary
We strongly believe that AI is industry-specific, application-specific, and context-driven and needs to be continuously assessed—factors that should be reflected in its governance. For example, when considering what definition to use for fairness, we feel that there is no one-size-fits-all answer or approach. There is not a single definition of algorithmic fairness accepted across industry sectors and use cases. Algorithmic fairness is a field of research aimed at understanding and correcting the ways that historical societal biases show up in AI systems. An AI system can be considered to be “fair,” in the sense of algorithmic fairness, if it does not perpetuate or amplify harmful societal biases in its operation. When data scientists are evaluating whether their AI systems are fair, they look at specific technical measures of bias in their AI systems—to understand if these systems are perpetuating harmful societal biases. There are two primary ways that we measure bias in our AI systems: evaluating parity of performance and parity of outcomes (see the sketch following this list).

• Parity of performance is about evaluating whether your ML model performs equally well for all different groups that interact with it. For example, does your facial recognition system detect Black women’s faces at the same or similar accuracy rate that it detects white men’s faces?
• Parity of outcomes is about evaluating whether your ML model confers a benefit to different groups at the same rate. For example, does your candidate ranking system recommend Black women get hired at the same or similar rate as it recommends white men?
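As a rough, hypothetical illustration of the distinction (not drawn from the testimony), the short Python sketch below trains a toy classifier on synthetic data and reports per-group accuracy (parity of performance) and per-group positive-prediction rate (parity of outcomes); the group labels, features, and sample sizes are invented.

    # Hedged sketch: per-group accuracy (parity of performance) and per-group
    # positive-prediction rate (parity of outcomes) for a toy classifier.
    # All data here are synthetic; group labels and features are illustrative only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 4000
    group = rng.choice(["A", "B"], size=n)            # hypothetical protected attribute
    x = rng.normal(size=(n, 2))                       # two generic model features
    y = (x[:, 0] - x[:, 1] + rng.normal(0, 0.5, n) > 0).astype(int)  # synthetic label

    model = LogisticRegression().fit(x[:2000], y[:2000])   # train on first half
    pred = model.predict(x[2000:])                          # score the held-out half
    y_te, g_te = y[2000:], group[2000:]

    for grp in ("A", "B"):
        m = g_te == grp
        acc = (pred[m] == y_te[m]).mean()             # parity of performance
        pos_rate = pred[m].mean()                     # parity of outcomes
        print(f"group {grp}: accuracy={acc:.3f}, positive rate={pos_rate:.3f}")

A gap in the first number across groups would point to a parity-of-performance concern, while a gap in the second would point to a parity-of-outcomes concern; which gap matters more depends on the use case, as the testimony goes on to explain.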

We do not have a singular definition of fairness — nor should anyone who is thinking about algorithmic fairness — because fairness is incredibly context-dependent. Here’s an example to illustrate why you cannot have a “one size fits all” definition of algorithmic fairness. Let’s say that you have an AI system that is

going to be predicting whether someone should be given a loan (a credit risk prediction system), and you have another AI system that is going to predict whether somebody has cancer by analyzing a CT scan for tumors. For your credit risk prediction system, the system is considered “unfair” if it predicts that Black women are credit-worthy (and therefore should be given a loan) at a much lower rate than white men; we want to make sure that our credit prediction system is conferring the benefit of getting a loan relatively equally across groups, regardless of gender or race. This is an example of parity of outcomes. For the cancer detection system, however, the parity of outcomes isn’t the primary concern; we don’t care if the system is predicting that women have breast cancer at a rate that is significantly higher than men. This is because for this cancer detection system to be considered “fair,” we want to make sure that it is equally accurate for all groups that interact with it. The issue here is parity of performance: our cancer detection system will be considered fair if it has the same performance rate across all groups. The metrics that you use to measure parity of performance are different from the metrics that you use to measure parity of outcomes—and even within these two categories, there are many different metrics that you can pick, depending on what is most important based on the use case context. Similarly, when considering the question of what metric or measure to use for algorithmic performance: there is no single metric for performance. Depending on your use case context, model type, and data type, you may select from a wide variety of different metrics that are all reasonable and accepted ways to evaluate performance of an AI/ML system. For a cancer detection system, assessors might care more about a system that has relatively equal false negative rates across groups, because incorrectly diagnosing someone as healthy who actually has cancer is a life-threatening mistake (the cost of making an incorrect “negative” prediction is very high). For a facial recognition system that is going to be used to grant access to a device—say, your phone—assessors may care more about false positive rate, however, because they want to ensure that this system doesn’t accidentally grant access to your phone to someone who should not have access (the cost of making an incorrect “positive” prediction is very high). These examples are all intended to show that there is no one definition for fairness when it comes to AI systems, and context is a key factor in determining what is fair. At Credo AI, we provide tools to our customers to help them determine how fair their AI system is by working with our customers to align on the exact metrics that should be used to assess fairness based on their use case context. This work is informed by the industry best

Trustworthy AI: Managing the Risks of Artificial Intelligence

131

practices that the customer’s use case is aligned with. Our policy team also focuses on bringing in requirements from regulations, laws, standards, guidelines, and frameworks—and our data science team partners with our customers to understand exactly what their ML models are designed to do, and how they do it; we then create a technical assessment plan designed to evaluate the exact dimensions of the system that are most relevant for understanding whether it is fair in the context it will be deployed. Given the context-driven nature of AI governance, we advise policymakers to develop context-specific guidance and rules, and transparency reporting will help industry to arrive at the right standards and rules based on this context - through an iterative process of revealing benchmarks and best practices.
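
As one way of reading the distinction above, a short Python sketch of this kind (again with hypothetical labels and predictions rather than any real assessment data or Credo AI tooling) could tally the per-group false negative and false positive rates that the screening and device-access examples each emphasize:

from collections import defaultdict

# Each record is (group, true label, predicted label); 1 = positive, 0 = negative.
# The values are hypothetical and only illustrate the bookkeeping.
records = [
    ("group_a", 1, 0), ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 0, 1),
    ("group_b", 1, 1), ("group_b", 1, 1), ("group_b", 0, 0), ("group_b", 0, 0),
]

tallies = defaultdict(lambda: {"fn": 0, "fp": 0, "pos": 0, "neg": 0})
for group, truth, pred in records:
    t = tallies[group]
    if truth == 1:
        t["pos"] += 1
        if pred == 0:
            t["fn"] += 1  # missed positive: the costly error for a cancer screening tool
    else:
        t["neg"] += 1
        if pred == 1:
            t["fp"] += 1  # false alarm: the costly error for a device-unlock system

for group, t in sorted(tallies.items()):
    fnr = t["fn"] / t["pos"] if t["pos"] else 0.0
    fpr = t["fp"] / t["neg"] if t["neg"] else 0.0
    print(f"{group}: false_negative_rate={fnr:.2f}, false_positive_rate={fpr:.2f}")

Which of the two rates must be equalized across groups, and how close counts as equal, is exactly the kind of decision that is use-case dependent rather than universal.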

Addressing Risk Now Ensures Leadership in the Long Run

AI is a multi-trillion-dollar industry. For the United States to lead in its development, it is crucial to understand the economic and societal outcomes for our nation. RAI is about better, more effective outcomes—it is about producing more value for AI builders and consumers by building trust in technology. RAI is a core competitive differentiator, not just for companies, but for countries. Any government helping to set up RAI requirements on testing and metrics now will have a competitive advantage in first creating and developing accurate methods for assessment and alignment to create trustworthy AI.

The work to build trustworthy AI is not just about “doing the right thing” and setting “values” that make people feel good. It is about building systems that work better: systems that do not have unintended harmful consequences. The MIT Sloan Management Review and Boston Consulting Group report,91 published this month (September 2022), found that “RAI Leaders can realize measurable business benefits from their RAI efforts…[which] include better products and services, improved brand differentiation, accelerated innovation, enhanced recruiting and retention, increased customer loyalty, and improved long-term profitability, as well as a better sense of preparedness for emerging regulations. RAI leaders are nearly three times as likely to realize business benefits from their organizations’ RAI initiatives than non-RAI leaders.” This is just one illustration of how investing in trustworthy AI pays off.

When we consider what the effect of algorithmic bias can be on economic contributions to society, we should look at real-world examples, such as an AI system that was deployed in the market and automatically granted lower lines of credit to women than to men. The biased AI system allocated differential lines of credit to a husband and wife with the same address and joint home income, with the woman being granted a much lower line of credit by the AI system than the man. In this scenario, there is no reason a wife should have less credit than her husband, and the system’s decreased accuracy resulted in a loss of the economic contributions that women bring to society, an outcome that unfairly impacts the individuals and is also bad for the business.

Another example of algorithmic bias illustrates a loss to the workforce: an AI system that was trained mainly on men’s resumes deprioritized resumes in which the word “Women’s” appeared in search results. As a result, the AI system missed out on stellar talent. This example is not just about hurting people, but about creating a system with failures that will have a negative economic impact on the workforce at large. If we truly want to be a global leader in AI, then our focus should not be on building the most powerful system the fastest, but rather on building responsible technology and support systems that will serve us best in the long run. We will sacrifice the opportunity to lead if we are simply moving quickly for the sake of getting ahead in a way that is not aligned with our societal values.

91 Elizabeth M. Renieris, David Kiron, and Steven Mills, “To Be a Responsible AI Leader, Focus on Being Responsible,” MIT Sloan Management Review and Boston Consulting Group, September 2022.

Conclusion

Credo AI is grateful for the opportunity to appear at today’s hearing, and we applaud the Subcommittee’s focus on how best to empower organizations to create AI with the highest ethical standards in order to deliver Responsible AI at scale.

NAVRINA SINGH
CEO & Founder, Credo AI
https://www.credo.ai/

PROFESSIONAL EXPERIENCE
Past: Microsoft & Qualcomm
Committee Member: NAIAC (National AI Advisory Committee)
Current Board Member: Mozilla

Social Media
Twitter: navrinasingh
LinkedIn: https://www.linkedin.com/in/navrina

NAVRINA SINGH is a seasoned, customer-centric, data-driven global technology and product leader with a proven track record of operationalizing strategy, driving innovation, and commercializing growth. Ms. Singh is the CEO and Co-Founder of Credo AI. On a mission to empower organizations to deliver trustworthy Artificial Intelligence (AI) at scale, Credo AI helps organizations monitor, measure, and manage AI-introduced risks. Prior to that, Ms. Singh led multimillion-dollar products and businesses in Enterprise SaaS, Artificial Intelligence (AI), and Mobile over the past 20+ years. Navrina is passionate about responsible leadership and inclusive cultures, which she

believes are foundational to delivering meaningful impact and transformational innovation to the organization and its people.

Prior to her current startup, Ms. Singh was Director/Principal of Product in Microsoft Cloud & AI (2017-2019), where she built natural-language-based conversational AI products (chatbots, virtual agents). In addition to product, she led the monetization business model to deliver enterprise value and operationalize conversational AI platform technology. Navrina joined Microsoft in 2016 as the Director of Business Development for Artificial Intelligence, responsible for commercial strategy and partnerships to forge new businesses for Microsoft leveraging AI technologies. Before joining Microsoft, Ms. Singh spent over a decade at Qualcomm Incorporated (2004-2016), where she held multiple roles across product management, strategy, and engineering. From 2011 to 2015, Ms. Singh was the head of Qualcomm Innovation, focused on building new products and creating new market opportunities in Artificial Intelligence, the Internet of Things, and Mobile across its emerging businesses. As an outspoken voice for inclusion and diversity, Navrina founded and led the company’s first women’s initiative, focused on getting more women into leadership roles in technology, equal pay initiatives, and transparency in hiring diverse talent.

Ms. Singh is a Young Global Leader with the World Economic Forum (WEF) for her work in disruptive technologies and driving diversity and inclusion initiatives at scale. Ms. Singh was also a member of the WEF Global Future Council on AI and Robotics, exploring how developments in these fields could impact industry, governments, and society in the future. Ms. Singh is currently also working on global strategic initiatives related to the ethical and responsible development and deployment of AI. Currently, Ms. Singh serves as a member of the National AI Advisory Committee (NAIAC), which is tasked with advising the President and the National AI Initiative Office on topics related to the National AI Initiative; this Advisory Committee was launched in April 2022. Ms. Singh is also an executive board member of the Mozilla Foundation and serves on its audit and Trustworthy AI committee, focused on driving its mission of an open Internet via trustworthy Artificial Intelligence. In the past, Ms. Singh has served on the board of the University of Wisconsin-Madison College of Electrical Engineering and on the board of Stella Labs (Hera-Labs), a San Diego-based women’s accelerator. Ms. Singh is a startup advisor and a technology leader published in Fortune, Business Insider, TechCrunch, Forbes, and others.

Ms. Singh holds an MS in Electrical & Computer Engineering from the University of Wisconsin-Madison, an MBA from the University of Southern California, and a BS in Electronics & Telecommunications from Pune College of Engineering, India.

***

Chairwoman STEVENS. Well, thank you. And at this point, we’re going to turn to our first round of questions, and the Chair is going to recognize herself for 5 minutes. In hearing your testimony, as I reflect on my time pursuing a master’s in philosophy, which my parents never understood why I got, but we were asking the ethical question about artificial intelligence that some ask in the theoretical space: can AI replace human behavior? Can—does AI threaten what we do as people, seeking to overtake, you know, the decisions that we make as people? Today’s hearing is a little bit more instructive to the theoretical question. Today’s hearing is saying, hey, we have artificial intelligence, and it is being utilized, but how is it being utilized? How is it being implemented? And is it being implemented fairly and accurately for the best outcomes for society and for humanity? So in 2019, NIST developed the strategy for Federal engagement in developing technical standards and tools for artificial intelligence. And, Ms. Tabassi, I’m just wondering if you could touch briefly, because your testimony got me thinking on this, on what was included in this strategy and why it is important that we have strategies for engaging in the development of technical standards for artificial intelligence. Has NIST’s work on the AI management framework revealed new or underdeveloped areas for standardization with regard to trustworthy AI systems? And then, because we want to hear from you on that, but then I want to hear from, I guess, Crenshaw, Mr. Crenshaw, about the—you know, how beneficial it is to industry actors for the Federal Government to lay out priorities and standards for critical technologies and artificial intelligence. Are you using these? But let’s start with you, Ms. Tabassi.

Ms. TABASSI. Thank you very much for the question, Chairwoman. Yes, in 2019, we developed a plan for Federal Government engagement in development of technical standards, and it has several recommendations on bolstering research that’s really important for development of good, technically solid, scientifically valid standards, but also importance of public-

private partnership and coordination across the government on bolstering our engagements in the standard development and importance of international cooperations on development of standards that are technically sound and correct but also reflect our shared democratic values. Let me also say that it also lists standards that are related and needed for a trustworthy, responsible AI and of course, many of the standards that’s happening for information technology and software systems can be related to artificial intelligence and can be used there but also need for other standards for addressing issues such as bias and explainability and trustworthy. Chairwoman STEVENS. Great. And, Mr. Crenshaw, I mean, are you using these or, I mean, is this helpful to what you were talking about? Mr. CRENSHAW. The NIST process is incredibly helpful. It is getting the conversation started and providing the guidance that’s necessary for industry to look to. It’s incredibly important, too, to have buy-in from the affected stakeholder community. And I have to applaud NIST for the work that they have done through their multiple rounds of comment, their multiple rounds of public engagement and public meetings to really get this right. And I think it’s incredibly important, the work they are doing, that there is a set of guidelines for industry to look to. I think, you know, on the domestic level, that that is a guiding light for industry. I would note, it’s also important to remember standards bodies internationally as well. In order for us to maintain our leadership in this front, we need to make sure that we have American interests represented with American businesses and American policymakers being aware of that. We do know that our competitors are trying to pack those bodies, and we want to make sure that we are represented as well. I think yesterday—— Chairwoman STEVENS. So are you suggesting more investment? Mr. CRENSHAW. I’m suggesting more participation, so—— Chairwoman STEVENS. Well, we did just reauthorize NIST, but, you know, Dr. Isbell, what I was kind of getting at was the Turing test, which I know you’re familiar with. But I don’t know if that’s really the question now, is it, you know, in terms of improving these outcomes with AI? And maybe this is too philosophical of a question, but is it the Turing test that that we should be focused on or what is the question that we should be focused on with the fair implementation of AI across a multitude of sectors that are determining our economy at grand scale with 5 seconds left? Dr. ISBELL. There is no question too philosophical. The short answer is, it’s not the Turing test. It’s about the actual impact and outcomes on real

people. And you have to bring those real people in to understand those outcomes. Chairwoman STEVENS. And with that, I’m going to now recognize Mr. Feenstra, our Ranking Member, for 5 minutes. Mr. FEENSTRA. Thank you, Chairman Stevens—Chairwoman Stevens, and thank you for those questions. Thank you again for all witnesses. I really enjoyed your testimonies. You know, there’s extensive research going on in my home State and my universities and—concerning AI, how it’s being applied now and into the future. Iowa State’s AI Institute for Resilient Agriculture is bringing together experts to lay the groundwork for developing AI-driven predictive plant models to increase the resiliency of agriculture. Researchers at the University of Northern Iowa are aiming to use AI to improve healthcare outcomes, increase privacy, online security, and create predictive maintenance systems for our products. And then in the University of Iowa, they’re utilizing AI to improve the effectiveness of cancer screenings, as well as the work to identify and address biases in AI and healthcare models. You know, these are just a few examples that are out there, and they’re limitless. And I would just like to say, Dr. Isbell, I’m an academic also, and I teach—or did teach consumer behavior. And when you start looking at consumer behavior, there’s a tremendous amount of AI being used, good and bad. Ms. Tabassi, I understand that AI won’t be replacing doctors, all right? I understand that, won’t be replacing nurses. But we also have the opportunity to learn about healthcare-related AI and research, as I just mentioned. Fostering trust in AI will be critical to utilizing applications such as these in the healthcare sector. And this is just one example. My question to you, if I can flip my page, can you explain how an AI Risk Management Framework will—broadly applied across the different sectors and industries to minimize the negative impacts of AI systems and maximize positive outcomes? You can use any specific sector examples in healthcare if you wish, but I’d like to know more about that. Ms. TABASSI. Thank you so very much for the question, Ranking Member Feenstra. And all of the examples that you said just show the potential of AI to really change our lives for better. I’m going to use the last example that you brought up, the cancer screening. So if you have a cancer screening tool, first, as mentioned several times, we wanted to make sure that it’s accurate, it’s working well, but beyond that the accuracy should also be balanced with associated risks and impact that it can have. So the question

comes up about the bias or fairness. Does it advantage or disadvantage certain demographics? Beyond that there’s questions about the vulnerability and security and resilience of the AI model, we all hear that AI systems are brittle. Can that cause negative consequences? The issue of the privacy, the data that’s used to train the models, can we make sure that the privacy is preserved and the training data are not inferred from the models? And then on top of that is we heard about the explainability also. If the tool comes out and gives, for example, an outcome or prediction that there is a cancer there, that’s a very serious message to be carried to the doctor to the patient. So explainability on how the model decides that there’s a cancer there, and another level of complexity, the explanation needed for physician versus technician versus patient is different. AI RMF is trying to provide a shared lexicon, interoperable way to address all of these questions, but also provide a measurable process, metrics and methodology to measure them and manage these risks. Mr. FEENSTRA. Thank you so much for that. That’s great information. Mr. Crenshaw, in your testimony you say that trust is a partnership? I 100 percent agree. And only when government and industry work side by side can trust be built. How did NIST work with industry in developing the AI Risk Management Framework? And how is having a tool like the framework going to strengthen consumer confidence when it comes to building trust in the AI systems? Mr. CRENSHAW. Well, I think as I said, Congressman, trust is essential. And I think NIST has done a great job of really instilling trust in their work with the business community by being open and transparent. If you look at the the comment record, it’s comments from across the board, everyone from civil society all the way to industry and developers. And they’re really looking to develop a robust record. That I believe is a really great example for other agencies as they’re looking at tackling this issue to look at. So they’ve had multiple stakeholder sessions. They’ve come in and actually spoken with our members and tried to get a good feel for where they’re at. And it really—the partnership has been excellent, and I think it’s a great example for other agencies moving forward in this space. Mr. FEENSTRA. Thank you, Mr. Crenshaw, I have questions for Dr. Isbell and Ms. Singh, but I ran out of time. So with that, thank you for your testimony. I yield back. Chairwoman STEVENS. Great. And with that, we’re going to hear from Dr. Foster for 5 minutes of questioning. Mr. FOSTER. Thank you, Madam Chair.

So my first general question is this discussion converging? You know, I’ve been chairing the Task Force on AI and Financial Services for the last several years, and it strikes me that the complexity of AI behavior is increasing much more rapidly than our ability to categorize and regulate it. You know, an example of that is a simple neural net classifier that’s operating on a static data set to calculate credit scores or something like that has a relatively—it’s an enormous, but it’s a relatively finite range of behaviors to categorize, OK? On the other hand, interactive AI, which is an agent which is learning from other intelligent agents and guiding its behavior, has an enormously larger space of behaviors to characterize. And I just don’t even see how you can possibly explain how an intelligent agent might react in any given circumstances. Like you can say general things like, you know, this child is a fast learner but makes a lot of mistakes, but that doesn’t give you the granularity of detail you need. And so I’m just wondering, since you’ve been all thinking about this, do you get the feeling that it is converging or not? No? Dr. Isbell? Dr. ISBELL. The short answer is no. The problems that we’re talking about are exponential. All of our solutions are linear. You might as well ask the question whether human behavior is converging and we know how to understand or regulate that. And of course, the answer is no, but that does not mean that there are not things that we can do to make progress. And I do think a lot of the discussions that we’ve had just in the last couple of years around fairness, accountability, thinking about how to educate people to be in the— to be a part of these discussions do make real progress, and that progress doesn’t—is very sudden, and makes very sudden changes, so it’s a good thing. Mr. FOSTER. Any other thoughts on this? Yes, Dr. Singh? Ms. SINGH. Congressman, I think that’s a great question. I believe we are making progress toward convergence. But one of the key areas that I spoke about earlier is how important context is to this work. So one of the core acts that we have as standards emerge in this space is really thinking about context, the applications, and how we can make progress toward the right metrics and assessments, along with the specific reporting requirements. And we are seeing globally as well as the great work that NIST is doing that there is a convergence that has started to happen in terms of having those contextual conversations. Mr. FOSTER. Any other thoughts? It’s a huge question. Let’s see— many of you have emphasized education and the need for an educated public. So if you had to choose between a public that knew statistics or knew calculus, which would you take? I’m a physicist, so I naturally lean toward calculus, but

it seems like what I use every day as a politician, statistics are relevant. And probably for AI, I think you’re in the same bin. And do you have any—well, all right, Dr. Isbell—but you have to deal with curricula, so you’re on the seat again. Dr. ISBELL. I’m not speaking for all of my colleagues. I think the answer is, if I had to choose for most people, it would be statistics, but I’d also like them to know information theory and linear algebra. But fundamentally, it’s about problem solving around data mattering as opposed to just the algorithms and the processes that you go through. And with that you can solve a lot of the problems or at least address and think about the problems that are coming down the pike. Mr. FOSTER. Any other thoughts from any of you? What do you use every day, statistics or calculus? I think—yes, machine learning. It’s— backpropagation is the chain rule, and I don’t think there’s much other calculus anywhere in it. But anyway, the—now, actually, this was for Mr. Crenshaw. You’ve emphasized international competition, and it strikes me that a lot of the countries that are clobbering us, you can’t get out of high school without knowing calculus and probably statistics. There’s all sorts of people showing up at school boards, you know, unhappy that we’re not supporting their preferred theology or mythology. But very few school boards are being inundated by people, you know, demanding that our kids know statistics and calculus. What—is there some— is there work to be done there? Mr. CRENSHAW. There’s definitely work to be done on the education front. We need to prioritize STEM education to ensure that we have the fundamental knowledge base for students across the country to get into this field because we are going to need more coders and ethicists in this field who actually can assist with our leadership. The other thing I think would be important to note, too, is that we also need to make sure that we have talent in this country and retain talent and still attract talent. And one of the things that we found out through our AI Commission is that we, you know are going to lose the talent race if we don’t deal with our immigration issues in this country as well and make sure that we can retain talent after we’ve educated them here in the United States, make sure that we can keep our talent to ensure that we have people who know how to make ethical AI work. Mr. FOSTER. Thank you. And we in a bipartisan way on this Committee have been doing everything we can to try to drag that across the finish line. I think we came within one Senator of doing something significant in the CHIPS and Science.

Anyway, my time’s up and will yield back. Chairwoman STEVENS. And with that, we will hear from the Ranking Member of the Full Committee who we’re so grateful is here, Mr. Lucas for 5 minutes of questioning. Mr. LUCAS. Thank you, Madam Chairman. Ms. Tabassi, in the AI Initiative that we passed in Congress last year, we gave NIST the difficult task of defining what makes AI safe and trustworthy. Can you walk us through the process of how NIST determined that definition of trustworthiness? And while you’re thinking about that, do you think this measure of trustworthiness also helps with the measuring of fairness in AI systems, please? Ms. TABASSI. Thank you so very much, Ranking Member Lucas, for the question. In terms of the process of developing a definition of the trustworthiness, I want to thank the kind of work that has been mentioned about the NIST process. But the process has been an open, transparent, collaborative process. There has been many definitions and proposals for definition for trustworthiness, so we ran a stakeholder-driven effort to converge to the extent possible on the definition of the trustworthiness. And that, as was mentioned, include rounds of workshops and public comment and a listening session. So that was the process. Your second part of the question is about the fairness. So fairness is one of the aspects of the trustworthiness as it’s mentioned in the AI RMF. And fairness, as it was mentioned, is a complicated concept because it can depend on societal values and can change from context to context. But that’s also part of one of the aspects of the trustworthiness mentioned in the AI RMF. Mr. LUCAS. Ms. Singh, in your testimony, you illustrate why you cannot have a one-size-fits-all definition of an algorithmic fairness. How does the AI Risk Management Framework exemplify this? Ms. SINGH. As I previously stated, I really commend NIST for the Risk Management Framework and how they’re thinking through not only mapping different applications, but measuring and then overall management of those. At Credo AI, we are really focused on operationalizing responsible AI tenets and ensuring that continuous oversight and governance is provided of these systems. And I think for us it is really critical that there are governance assets based on the context of AI application that gets generated that inspires that trust that Ms. Tabassi was just talking about. Mr. LUCAS. Mr. Crenshaw, do you foresee U.S. industry widely adopting and utilizing the Risk Management Framework since it’s a voluntary tool, or will it need to be incentivized? While you’re thinking about that, do you

anticipate U.S. standard bodies will play a role in encouraging the utilization of the framework? Mr. CRENSHAW. I think there’s definitely a role there. I think they also have really gotten the conversation out about the need to develop standards. When it comes to the NIST Risk Management Framework, I think what we’ve seen of it is promising. Obviously, we’ll have to comment on the final product when it comes out. But I think it is a promising product. And, you know, I think, given the fact that we’ve had such robust stakeholder input, I do anticipate that, you know, given the direction things are going, we definitely could see stakeholder engagement to support the framework. And I think that’s a good thing because we need guidelines and standards to get behind so we can develop trust. Mr. LUCAS. Ms. Singh, do you have any thoughts on this point? Ms. SINGH. I think multistakeholder engagement is going to be critical in the process. And as—you know, we’ve been invited to give feedback on the NIST RMF, and we’ve done that actively over the past couple of months. As mentioned, I think there’s a little bit more work to be done in terms of ensuring that we are looking at different applications and context. Mr. LUCAS. Ms. Tabassi, any thoughts? Ms. TABASSI. In terms of the adoption, I think that the adoption and use of the AI RMF would be based on the value that it provides and also giving awareness that these things exist is also very important. I thank again the Committee and all of my panelists for the kind words about the process. And in terms of the context and specific use, agreed that a lot more work needs to be done. And we have a call for contribution particularly for that. Mr. LUCAS. One last question, and I come back to you, Ms. Tabassi. Why is it important for democratic nations to lead the development of international standards for trustworthy AI systems? Ms. TABASSI. I believe it’s important to affirm our shared democratic values of openness, protection of democracy and human rights, and design and develop technologies that operationalizes those values. And we need standards for technologies that are rights-affirming and show those values. Mr. LUCAS. Just the way I intend to answer questions about that in my town meeting someday. Thank you. Yield back, Madam Chair. Chairwoman STEVENS. With that, we are going to hear from the Congresswoman from North Carolina, Ms. Ross, for 5 minutes of questioning. Ms. ROSS. Thank you very much, Chairwoman Stevens and Ranking Member Feenstra. And thank you to the panelists for joining us today. On April 29th of last year [inaudible] represents a larger problem of cybersecurity

and privacy issues in this country. AI innovation happens fast, and we need legislation that’s equipped to grow into this quickly expanding sector. For my constituents in the Research Triangle and for national security more broadly, we need to invest in long-term structural infrastructure that ensures better cybersecurity and privacy in our tech sector. We also need to look at how AI affects the arts and our creators, and we all have many of them in our district. So I look forward to hearing from our witnesses on how we can ensure that systems of machine learning can be created with consideration for individual privacy, corporate privacy, intellectual property, and national security. But since none of the folks who have asked questions yet have talked about intellectual property, and I serve on the Judiciary Subcommittee on that, I’m going to ask Ms. Tabassi—I’m sorry if I mispronounced your name—to say I want to thank you for your important work on the draft of the Artificial Intelligence Risk Management Framework. But I also want to talk a little bit about intellectual property because the United States takes our intellectual property protections very seriously. And without those protections, there’s a significant threat to American creativity, ingenuity, jobs, and our economy. And AI offers opportunities to artists and creators to enhance the creation process in many ways, but that also presents risks. And there are services and sites available today that use art, books, music, and other American-made works as inputs to train AI. Based on what is happening with image-generating AI currently on the web, we can already see that artists will have to compete with AI creations in their own style and trained on their own content when they were either— neither consulted nor compensated for this. And as a matter of fact, there was a recent article that I just read about that. Is this issue on NIST’s radar screen, and what can we do about it? Ms. TABASSI. Thank you so very much for the question, Congresswoman. And we have actually received comments to that effect to AI RMF. And that’s a serious problem, certainly something that would be part of the discussions in the future drafts of the RMF. A lot of work needs to be done, and that would definitely be part of the discussion. Thank you. Ms. ROSS. OK. I do have a couple of other questions. Dr. Isbell, your written testimony talks about the Marshall Project and the use of risk assessment in the criminal justice system. How can transparency increase the ability of individuals to protect their information and avoid undue scrutiny? And to whom should individuals direct their concerns if they believe that their data has been misused?

Dr. ISBELL. So it’s a very—it’s actually quite a difficult problem because the data that we have is out there everywhere, and we leave a trail everywhere that we go. Fundamentally, there has to be policy and there has to be infrastructure. This is a role that government has to provide a mechanism by which people can can deal with issues where their data had been misused. It is not a thing that will naturally come from industry. It is not a thing that naturally comes from the educational sector. It is something that has to be dealt with by the legal system. Ms. ROSS. And can you tell us about any law enforcement practices that we should be aware of as we’re considering changes to the legal system? Dr. ISBELL. Well, I think the short answer is you have to think very carefully about and look at the way that the systems that are out there are currently being used and how they’re currently being misused. And having done that, it takes you down a path toward understanding how you have to try to address those one at a time. It’s a pervasive thing that touches everything. I—we don’t have time to talk about this now, but you—earlier, someone made a comment that doctors will not be replaced by AI. Well, they’re already being replaced by AI, and they’re being done in an unregulated way that’s having an impact on people. And you have to be—you have to recognize that and you have to address it context by context and one case at a time. Ms. ROSS. Thank you, Madam Chairman, and I yield back. Chairwoman STEVENS. Great. And with that, we’re going to hear from Dr. Baird of Indiana for 5 minutes of questioning. Mr. BAIRD. Thank you, Madam Chair. And I appreciate you and Ranking Member Feenstra for holding this important hearing. And I really appreciate, I always do, the expertise of the witnesses and their ability to answer our questions and it’s very important and very specific. My first question goes to Dr. Isbell. And I want to know what role have universities played in the development of the AI Risk Management Framework? And more broadly, how are universities helping to shape the future of AI by engaging in public-private partnerships, Dr. Isbell? Dr. ISBELL. So the—higher education in general is—universities have participated by being invited in and being a part of the conversations. Individuals and organizations have continued to participate in all of these discussions around standards, including things that NIST has done, but also through operations of institutes that have been created, for example, by NSF. What the universities do, what our role is, is to do the basic research that exists to create the basic research, ask the basic questions, and then educate the students who are going to go forward and to do that work. A lot of the work

that we do, a lot of where we play that role isn’t actually identifying the fundamental problems. That is sort of what academic freedom allows you to do, and that’s what we continue to do. The environment that we create is one that is—that allows us to ask these questions and to make them available for industry, to make them available for government to take the next step. That’s what we do. Mr. BAIRD. Well, thank you very much. Ms. Tabassi, to your knowledge, has the People’s Republic of China developed a similar tool to the AI Risk Management Framework? And what about any of our allies? And so what role if any has NIST played in sharing findings and the best practices with the international community, particularly our allies? So if you have any thoughts in that area, I would appreciate it. Ms. TABASSI. Thank you so very much for the question, Congressman. In terms of cooperation and collaboration with our allies, the stakeholder engagement effort that we run includes our international partners, so they have been involved in terms of providing input to the AI RMF, coming to our workshops and participating in those events, but we also interact with them and talk with them in forums such as Trade and Technology Council, QUAD, or OECD. So there is a good, strong, robust engagement going on that way. Mr. BAIRD. Thank you. Then my last question goes to Ms. Singh. So in creating the tools to help companies develop responsible AI, what are some of the most common concerns with AI systems that your company has seen? Ms. SINGH. Thank you so much for that question. You know, if responsibly and not built artificial intelligence is going to have very varying impacts on different use cases. So across the companies that we work with, one of the things that is critical is, again, really having a holistic view of from the time you’re designing the AI system to the actual use, making sure that you’re interrogating the technical systems, you’re interrogating the processes, as well as you’re interrogating the outputs. So this goes back to really identifying any unintended consequences that could appear in the entire AI lifecycle. Mr. BAIRD. Thank you very much. And I appreciate the witnesses’ responses. And with that, Madam Chair, I yield back. Mr. MCNERNEY [presiding]. Well, I was going to—I think I’m the next questioner, and I was going to thank the Chairwoman for this great hearing, but I certainly want to thank the panelists. Your testimony is great. What a great, incredible subject. I want to get right to questions though. Ms. Tabassi, how might standards and assessments be developed and— for explainability and interoperability.

Ms. TABASSI. We do that the same way that we do for any type of other standards. With true stakeholder engagements and working with a whole community. Broad stakeholder engagement underlines everything we do at NIST and explainability, interoperability are difficult, complex topics. We do have some foundational research going on. Our researchers are working on this, but we also augment it with the work of the whole community. Mr. MCNERNEY. OK. Well, I’ve been on standards committees, and I know what kind of work goes on. So you’re saying it’s a similar process or would be a similar process? Ms. TABASSI. Correct. Part of it, doing the internal research, providing technical contributions, working with the whole community on strengthening the research and taking the contributions to the standard development organizations and hopefully see them through become international standards. Mr. MCNERNEY. Thank you. Dr. Isbell, in math and physics, systems and solutions are considered unstable if small changes in the initial conditions result in large changes in the solutions and outputs. Are AI systems unstable in terms of the data input? And, if so, how can that be mitigated? Dr. ISBELL. Some of them are. There’s a wide range of ways of doing AI and machine learning. Some of them are quite stable, and some of them are less stable. There’s a lot of theory behind this and a lot of work that’s been done over decades to get there. I think the most important thing actually is not the sort of instability that you’re talking about with small changes but that we don’t actually understand how the set of parameters that go into the way that we build these systems have that impact. It’s actually less about the data in that sense and more about the way that we build the systems in the first place. And that has remained largely unexplored. Mr. MCNERNEY. Well, thank you. That’d be a great area for research. Thank you. Ms. Tabassi, can you touch briefly on what’s included in the strategy of engaging technical standards for tools for artificial intelligence? Ms. TABASSI. Thank you for that question, Congressman. And, yes, happy to. So that strategy for working toward the standard was developed in 2019. And we are basically implementing the recommendations of that plan since it has been developed in 2019. What’s in the plan? Basically talks about standards, standard development processes, talks about AI standards, what’s needed, and concludes with recommendations on what’s needed to maintain U.S. leadership in development of the technical standards and

recommendations very broadly is about strengthening research for development of scientifically valid standards, public-private partnership, to be able to do that research and build those foundations, and international cooperations for development of standards. I just also want to note, that plan was also developed in a stakeholder-driven effort with a lot of input from the community.

Mr. MCNERNEY. Thank you. So to what extent is the United States already collaborating with the EU and other like-minded nations on developing standards for trustworthy AI?

Ms. TABASSI. Multiple ways. One of them is by expert-to-expert scientists working on what we call pre-standardization research to actually provide the scientific foundations for the standards, and then cooperation by going to the standard meetings and seeing them through to become international standards, but also at forums such as TTC and QUAD.

Mr. MCNERNEY. Well, thank you. Mr. Crenshaw, I didn’t want to leave you out. Would the Chamber and presumably many U.S. businesses support the development of a United States AI regulatory law?

Mr. CRENSHAW. I think, given the state of the technology, we believe it’s premature to get into prescriptive regulation. We support voluntary frameworks like we see at NIST. A few areas, though, I think, you know, where we would like to see regulation is for things like consumer privacy. We’d like to see a national standard put in place. But at the same time, we want to make sure that the process at NIST can work itself out first before we start making any kind of determinations on regulation. And it’s also an issue, though, that our own AI Commission is working through as well to make recommendations for.

Mr. MCNERNEY. Thank you. My time has expired, and I’m going to call on Mr. LaTurner. You’re up for 5 minutes.

Mr. LATURNER. Thank you, Mr. Chairman. I appreciate it. Ms. Singh, in your testimony, you talk about the need for policymakers to establish benchmarks for fairness when it comes to responsible AI, yet you also talked about how industry-specific and context-driven artificial intelligence factors preclude standard-setting bodies from creating one-size-fits-all metrics. In a context-specific field, how can Congress create meaningful regulation that ensures AI systems retain algorithmic fairness?

Ms. SINGH. Thank you so much for that question. I think the work that NIST is doing is a good example of the public-private partnership that is needed to ensure that we are doing thoughtful policymaking and standards that

are very context-specific. As I’ve stated previously, you know, in artificial intelligence, the question that we should be asking ourselves right now is how can governance and oversight keep up with the development of artificial intelligence? And so we believe that standards are going to be critical, especially as we think about transparency reporting. And transparency reporting, is going to be a complete view into the AI lifecycle that can help with benchmarking. Mr. LATURNER. What could we be doing differently with our— with Congress and the public-private partnerships? Do you have any recommendations on how we could be doing it better? Ms. SINGH. Yes, thank you so much for that question. You know, we’ve given some feedback to NIST on that. I think we have to really step back and think about the AI application, as well as what the impact to the stakeholders within that AI application is. And I think going back to context-centric metrics, as well as context-centric reporting requirements is one of the first steps we believe is going to help move this industry forward. Mr. LATURNER. How can developing responsible AI give the United States an economic and societal competitive advantage over other countries Ms. SINGH. Thank you. I think that is a fantastic question. We at Credo AI believe that responsible AI is a competitive advantage because it is not only going to help United States and the companies here deploy AI with confidence, but as we make sure that the standards that emerge which are aligned with our societal values, that is going to promote more consumer trust, which, as you can imagine, is going to further bolster our leadership in artificial intelligence. Mr. LATURNER. Thank you, Ms. Singh. Dr. Isbell, you state in your testimony that there are many occasions where tech workers cannot be certain how AI algorithms reach the correct answer, and these algorithms are known as, quote, black-box models. If for any reason these types of algorithms reach an incorrect or biased outcome like the ones you describe in your testimony, it can be nearly impossible to diagnose. If we want to solve the problem of black-box models by making an algorithm’s data set more transparent, then what countermeasures can we take to bolster AI security from hackers? To your knowledge, are there any examples of AI developers that have already—that are already addressing this issue? Dr. ISBELL. So there’s a great amount—there’s a large amount of work that’s being done in academia at the level of basic research to understand differential privacy, to understand how it is that people can interfere and break into the way that machine learning algorithms actually work. So there’s a lot

of work. It’s in early stages, but a lot of great stuff is being done. How much of the— not a lot of that has necessarily been deployed in the systems that are out there now I think in large part because the incentives haven’t necessarily been there. What drives industry and drives the people who build these systems and deploy them to do—to touch on this is requirements that either through the market or through policy, that if they don’t do this, they’re simply not going to be able to deploy their systems and to have them used and adopted by large groups of people. So there’s a lot of work that’s been done out there, a lot of specific things. I would start with differential privacy, and there’s lots of researchers that have done great work on this. But at the end of the day, it’s really going to be about creating the incentives for people to want to take advantage of what we know in order to keep things secure. Mr. LATURNER. Thank you. Mr. Chairman, I yield back. Chairwoman STEVENS. Great. And with that, we’re going to hear from Mr. Beyer of the Commonwealth of Virginia for 5 minutes of questioning. Mr. BEYER. Thank you, Madam Chair, very much. And thank the witnesses for really interesting feedback. But also thank my colleagues, Democrats and Republicans, for some very good questions. Ms. Tabassi, I know you take on this tremendous task of managing, developing the AI Risk Management Framework. You heard from Mr. Crenshaw what the Chamber is doing with its commission. And I think you’ve heard pushback about how we’re not ready to have mandatory standards, that we’re still so early that we’re— we don’t want to overreact. We don’t want to overregulate. But at the same time is it not naive to think that we can make this voluntary indefinitely, that at some point there won’t be a need for clarity in terms of what is demanded and expected from businesses in AI? Ms. TABASSI. Thank you very much for that very thoughtful question, Congressman. So NIST AI RMF is a voluntary framework just like any other frameworks that NIST has developed. And the use and adoption of that, at least, I believe, would be based on the value that it provides. And another strength of the voluntary process that we are doing is based on the stakeholder engagement and stakeholder-driven process that we are following in development of this voluntary tool. It gives the opportunity to the whole community to provide their input, their comments. So by the end, the final tool would be a more effective resource that everybody that participate in development of that would have a buy-in in that.

So by that, I think, having the value on using this and having buy-in because of participation in the process of developing it, would help with its adoption. NIST is a nonregulatory agency, and the things we put out are voluntary. Mr. BEYER. We know that, so thank you. I understand you’re nonregulatory and ultimately it will come back to us and then come back to us just based on dangers. Dr. Isbell, I was fascinated by your testimony. Because so much of what we talked about today is concern about biases, but you also had a wonderful paragraph about the upside of machine learning and artificial intelligence. Can you expand on that a little bit? It seems to me that we as human beings dramatically underestimate the potential for what artificial intelligence can bring humanity. Dr. ISBELL. So there’s a particular law, and I forget what—escapes me right now. But what the law says is that we overestimate the short term and we underestimate the long term. And I think that’s exactly what’s been happening with AI. There was a lot of hype back in the 1970’s and 1980’s before the AI winter with all the great changes that AI was going to bring to the world. They were wrong. They were overhyped. But it’s turned out that the impact that AI has had has been profound and far deeper than anything anyone even imagined back then. It has infiltrated every part of our life, and I use infiltrate in a positive way. We will be doing a better job of detecting when people are sick in ways that we were never able to do. We will be able to help people to make decisions they otherwise would not have ever been able to make. We will be able to connect with one another in ways that we have not been able to connect with one another before. And a large part of it will be because of computing, and it’ll be because of AI. It’s all very positive. The opportunities in front of us are huge, and it will take us—it will help us to solve big problems that we currently have a hard time thinking through and those problems over decades and even over centuries. The problem that we have, of course, is that we have to set up the incentives to allow people to do that, and we have to make certain that everyday people understand enough of what’s actually going on so that they can make rational decisions about how to use that technology in their own lives. Mr. BEYER. Dr. Isbell, I’d love to have a question for the record if you could find one of your research assistants to find out the name of that law. Dr. ISBELL. I will.

Mr. BEYER. Dr. Vint Cerf told it to me 30 years ago, and I’ve always attributed it to him, but it probably has a deeper root. Dr. ISBELL. Absolutely. Mr. BEYER. Very powerful. Dr. Singh, one quick question. You know, we’ve been struggling with facial recognition technology on police bodycams. Now, is this something that you’re working on, too, that the notion that people of color, especially women of color, are picked up inaccurately much more frequently than others? Ms. SINGH. Thank you so much for that question. We at Credo AI work across a diverse range of applications, including facial recognition. And as I stated previously, I think any artificial intelligence that is not developed responsibly is going to impact all of us, and especially the marginalized communities, which in the past have been excluded because of gender, ethnicity, color, are at a higher disadvantage here. So building responsible AI is not just competitive advantage, but it is going to serve humanity really well. Mr. BEYER. Madam Chair, I yield back. Chairwoman STEVENS. Thank you. And with that, we’re going to hear from Mr. Gonzalez of Ohio for 5 minutes of questioning. Mr. GONZALEZ. Thank you, Chairwoman Stevens, Ranking Member Feenstra, for holding this hearing. Thanks to all the witnesses for your testimonies. Ms. Tabassi, we talked a little bit about the AI Risk Management Framework, and that was helpful. I’m curious, has China developed a similar tool? What is China doing specifically around this? Ms. TABASSI. Right. So I believe it was in 2017 that China put a very ambitious domestic AI plan out. To the best of my knowledge, there isn’t anything that they’re doing similar to the AI RMF. If they’re doing it domestically, I don’t know. But—yes. Mr. GONZALEZ. OK. Thank you. Mr. Crenshaw, I’m going to switch to you for a second. Unlike most countries that have a top-down, government-led approach, the United States has a bottoms-up, industry-led approach to standards setting, which I think is appropriate. We employ a voluntary system which relies on industry participation and leadership. This market-driven approach enables competition, ensures transparency, and takes advantage of consensus-building to drive us to the best possible outcomes. Can you explain how the U.S. approach to AI through the AI Risk Management Framework drives innovation?

Mr. CRENSHAW. Well, I think it’s interesting to know, during one of our hearings, we actually had one of the cochairs of the National AI Advisory Committee come testify, Miriam Vogel. And she said the reason we needed to maintain leadership in this country is because we have a brand of trust compared to other countries. And it’s important that we have standards in place that are voluntary, that will be adaptable to this new and developing technology but at the same time will look at things like risk. And it’s important that we have real firm guidance in place. And another—I think, as I said before as well, when it comes to international standards bodies, we need to make sure that the United States is well-represented. The CHIPS and Science Act actually helped provide funding to ensure we can participate in that space. But, you know, at the same time, too, as companies look at things like developing implementation for compliance or following guidelines, if they go out there and say we’re following this guideline and then they’re found not to be, there is some teeth there. Mr. GONZALEZ. Yes. Mr. CRENSHAW. So there are agencies that can enforce there as well. Mr. GONZALEZ. Great. Mr. CRENSHAW. So there is great trust to be had by establishing leadership and trust against other countries. Mr. GONZALEZ. Dr. Isbell, with your role on campus as a Professor and Dean, what do you believe the appropriate role of the university is—are in shaping the future of AI? Dr. ISBELL. Twofold. One is to do research. We have one of the best systems in the world around basic research. Our research ones are amazing. And all the way down to our research twos and even our community colleges are able to bring people in and to think about and engage in the conversation around AI or any other large, important issue. So the research is important, and maintaining and supporting that is important. But the second and perhaps the most obvious is the fundamental mission, which is educating people, not just educating the people who are going to do the research, but I think importantly, and especially when it comes to AI and machine learning, is educating everyone else who is not going to do AI and machine learning research but will be affected by it, who will be adjacent to it, and will be far away. As I told my son who’s deeply into history, you will not be able to get a degree in history in 5 years without knowing machine learning and AI because it’s still going to be data-driven. And so our responsibility is to make certain that everyone is a part of that conversation.

Mr. GONZALEZ. Great. And then I agree 100 percent on the research point, actually, on both points. But, you know, one thing we talk about a lot on this Committee is how do we get the research— the incredible research that’s happening on our university campuses out into the public space and then driving innovation in the private sector? So what do you think we need to be doing to have a—I’ll just call it a more robust sort of flywheel of research taking place on college campuses, leads to innovation, leads to private companies, et cetera, et cetera? Dr. ISBELL. So we actually do pretty well with that, I think, but I think the biggest problem right now is that there’s a mismatch between what the company—pick whatever your favorite company is—wants to do in the next 6 months to a year versus what the basic research that’s looking out 5 or 10 years actually is. Support through organizations like NSF, for example, to help partner with those companies, to partner with industry to help do the basic research, universities, I think, is the best way to get that translational work done from the lab out into the world. And when it works, it works very well. Mr. GONZALEZ. Thank you. I yield back. Chairwoman STEVENS. Thank you. With that, we’ll hear from Congressman Sherman of California for 5 minutes of questioning. Mr. SHERMAN. Thank you, and thank you for allowing me to participate in this Subcommittee’s hearing. Without objection, I’d like to enter into the record an article I wrote 22 years ago, “Engineered Intelligence: Creating Our Successors’ Species.” My line of questioning is going to be about things that won’t affect us until the second half of this century. But since they relate to whether humankind will continue to be in domination of the planet Earth, they’re important. We’re—right now, the computer engineers and the bioengineers are racing to create a new level of intelligence. And the last time there was a higher level, a new level of intelligence appeared on the planet is when our ancestors said hello to Neanderthal. It did not work out well for Neanderthal. So my focus is on whether we’re going to see artificial intelligence that has general intelligence, self-awareness, and what I call the ambition, or survival instinct, or care. And that third thing I should go into more, I tend to think that our successor species would be biological because even the dumbest worm seems to care if you try to turn it off or kill it, whereas the smartest computers we have so far don’t care if you unplug them. So my concern is what are we doing to prevent or monitor for general intelligence, self-awareness, and ambition or survival instinct? Or are we just


going to ignore those issues and focus on things that affect us in the next decade? Ms. Tabassi? Ms. TABASSI. Thank you very much, Congressman, for the question. It’s hard to determine when or if we, or the community, can reach artificial general intelligence. I will say that that’s—— Mr. SHERMAN. Well, I think we’re going to get there someday. Ms. TABASSI. Right. Mr. SHERMAN. We just don’t know—— Ms. TABASSI. Very good, very good. So we don’t know when we’re going to get there. So from the NIST point of view, we think that that’s one reason to work on foundational principles. That’s why it’s now timely—— Mr. SHERMAN. Is anybody doing any technical research about how we can get very useful computers, that we somehow put something in there, a governor, if you will, that prevents general intelligence or prevents self-awareness, or prevents ambition and caring? Is anybody doing the research as to how we can get what we want without getting what we don’t want? Ms. TABASSI. I’m not aware of that research being done at our laboratory at NIST, across academia, or in the community. I don’t know. Thank you for the question. Mr. SHERMAN. I’ll ask the other witnesses. Is anybody aware of us trying to prevent, as we try to harvest the benefits of artificial intelligence, the creation of an ambitious, self-aware computer that may very well decide that we’re irrelevant to this planet? Is anybody figuring out how to do that, or is it just an issue we’re all aware of but aren’t really trying to confront? Does anyone just— yes, Mr.—yes, Doctor? Dr. ISBELL. So I guess the—yes, and thank you for the question. Actually, you know, one of the reasons I got into AI in the first place was these, what I’d consider, pretty existential and philosophical questions around what it means to build intelligence. I think the answer is that people discuss these issues all the time. They try to figure it out, they try to work it through. We don’t have any large research agendas, at least that I’m aware of, around preventing the issue—preventing general intelligence in part because we have no idea how to get there from here. And I think one of the things that I would leave—— Mr. SHERMAN. What about those two other issues, how to prevent self-awareness, how to monitor for self-awareness, how to prevent ambition or survival instinct, how to monitor for survival instinct? Dr. ISBELL. I don’t think it’s done in those terms. I don’t think it’s done in those terms. It’s done in simpler terms around preventing harm.


Mr. SHERMAN. Well, we’re going to concentrate on the harm that could occur in the next decade—— Dr. ISBELL. That’s right. Mr. SHERMAN [continuing]. The Nation or artists that lose their creativity and the benefits of their creativity, and it doesn’t seem like anybody’s worried about the problems we’ll confront in the second half of this century. And with that, I yield back. Chairwoman STEVENS. Great. And with that, we’re going to go to another round of questions because we’re just having so much fun here. And the Chair is going to recognize herself for 5 minutes. I think this question about where and how we’re determining the ethics is very important. Obviously, we have so much respect for NIST and an understanding of the role that standards play. We could go philosophical again and ask about standards, ethics, and how the ethics arise out of standards that come from rigorous processes that are inputted by—you know, we talked about the companies, we’ve heard from Dr. Isbell about the people, the people element that needs to get involved with the standards. But, Dr. Isbell, some universities are already including ethics as a curriculum and long have. You go into a philosophy department, you’re going to get an ethics course. Hopefully, people take it. But ethics as a curriculum requirement for computer science degrees in particular is a great start, but it’s often a separate course and may not be directly connected to what students are learning in other courses. You’ve changed your approach at Georgia Tech, and so I was just wondering if you could elaborate on what you’re doing to integrate ethics education and how you’re assessing its effectiveness. And I also just—because that’s a question I know you can answer, but I just really want to applaud you for a segment in your testimony that I encourage everyone to look at where you said computing has long been an intellectual wild west where things change so fast that the priority was always to fix—to find what’s next, to find the better solution. Now, we’ve succeeded in finding solutions so good that they are intertwined in nearly every area of our personal lives and communities. So can our laws move fast enough? Can our ethics move fast enough? And where and how do we find this arising? Thank you. Dr. ISBELL. Sure. Thank you for the question. I really appreciate it. I will say that, you know, people in my field have spent 40, 50 years trying to convince everyone that what we did was really important, and it turns out, we were right. And then what we’re living with now are the consequences of having been right.


So when it comes to ethics and responsibility, I think the—you know, Georgia Tech, we’ve had that as a requirement for CS going back at least about 30 years. But what we had done wrong—and not just us, but I think the way that we approached this—is that we treated it, as you say, as a separate class, something that gets stapled on at the end. It’s a requirement. Nobody takes it till their last semester. It doesn’t get integrated into the rest of the curriculum and it can’t. So one of the things that we did recently is we kept it as a requirement, and we made it a prerequisite for our junior yearlong design classes. So by the time you’re a sophomore, you know just enough to be dangerous. You’re at a place where you’re being forced to think carefully about the consequences of the systems that you build, and then you’re asked to build such a big system. This is before you take Intro to AI. This is before you take Intro to Machine Learning. This is before you take Introduction to Cybersecurity and Privacy. So it puts you in a place where the people further down the chain can actually now ask you the direct questions that they couldn’t ask before because you wouldn’t have the language or the experience to be able to do that. That is what’s important. When we claim that something is important, we have to operationalize it in our curriculum in the way that we teach people from the very beginning and not toward the end, which is the natural thing to do if you aren’t very careful about how important you think that it is. Chairwoman STEVENS. And certainly to Mr. Crenshaw, I’m sure you have some thoughts about this as well. And, you know, we applaud the point about, hey, we want to drive a—you know, American leadership of what we’re doing with artificial intelligence. And thank you, Ms. Singh, by the way. I’ve just so thoroughly enjoyed your—not only your testimony, but your answers to the questions. But how do we balance these things out, right? You know, we sometimes see, you know, too much of a good thing, per se. And we don’t—you know, we like standards. We’re doing standards. You’ve said you like the risk management. But, you know, in some ways, right, we see companies getting pushback because they haven’t self-regulated and the ethics component isn’t there. And so, you know, where and how do we find that balance? And maybe that’s articulated through boards. Which—how does that populate? And maybe Ms. Singh can chime in, too. Mr. CRENSHAW. I think it’s critically important, one note to make, that we have the critical decisionmakers in companies involved in this process as well. Not only do technologists have a role, but the C-suite does as well. And also, you know, we need more education out there about the need to build ethical


AI into standards for companies and how they operate. I’ve talked to some companies that are actually developing their own ethical frameworks and have full-time ethicists who are being brought on. We had a hearing actually at the Cleveland Clinic about 4 months ago in which they’ve now brought on an ethicist as well, as they’re using AI to treat their patients. So it’s important, and I think companies are beginning to see this. Chairwoman STEVENS. Yes. Ms. SINGH. Thank you, Chairwoman. I think, today, we’ve established that AI is not a technical problem. It’s a sociotechnical problem that really needs multistakeholder perspectives and viewpoints. So I totally agree that there is a need for education. There’s a need for involvement from multiple stakeholders. But if I may, I think the companies we work with, they’re still struggling with what good looks like. And this is where we believe that government has a critical role to play in thoughtful policymaking and in these standards to at least give that context to these companies because everyone right now, even if they’re trying to self-regulate, does not know what good looks like. So our ask right now is really making sure that there is more transparency around how these systems are built and deployed. Chairwoman STEVENS. Yes, right. And there are also certainly examples from throughout history where the notion of good has gotten it wrong. But with that, why don’t I turn it over to Mr. Feenstra for 5 minutes of questioning. Thank you. Mr. FEENSTRA. Thank you, Madam Chair. I’m so glad that we could have an extra round of questions. And Dr. Isbell, thank you again for all your comments. I’ve been enjoying listening to you. And, as academics, to me, the challenge is—I finished my dissertation on maternity healthcare in rural America. And the challenge is, you know, we talk about ethics, but there’s this fine line of how we access data and the barriers that are put on to try to get the data. And so how do we thread that needle of, you know, there’s a need to have the data and to create trustworthy AI systems, and yet there’s that balancing act of ethics. Can you dive into that a little bit? Dr. ISBELL. I mean, I do have my opinions about how to solve all problems around ethics, which is a very deeply difficult question. I think the best way of thinking about it is to help people to articulate explicitly what it is that—what the tradeoffs are and where they want to live in that space of tradeoffs. If people can understand the tradeoffs, they can make informed decisions. I guarantee you that, first off, there’s more data about you out there in the world than you have ever imagined and that people know more


about you than you wish that they did, and that could be a good thing because one day, it may save your life. On the other hand, it’s a lot—it’s your privacy, and it’s who you are, and people shouldn’t just be able to get access to that data just because they can. Mr. FEENSTRA. Is there any data, though, that you’d say would be beneficial that, you know, you look at and say, OK, this is captive data that we can’t get at that might be helpful as we move into trustworthiness and AI? Dr. ISBELL. I think that that’s a conversation that involves, as we’ve been saying all along, all the stakeholders who are involved. I will add one thing, though, which is, although I think that bottom-up thinking is good and it’s something that’s driven us to innovation, it says right there in this chamber that, “Where there is no vision, the people perish.” Mr. FEENSTRA. That’s right. Dr. ISBELL. And the vision has to come from elected officials, it has to come from government, and it has to be a conversation about where it is we agree we want to go. Mr. FEENSTRA. Yes, I agree. Thank you, very, very good and thoughtful words. Ms. Singh, very intrigued by what your organization does. So if you look at how we build the appropriate safety and security into products, do you see a role for government? Or how do we incentivize going down this path, especially in the private sector? I mean, I think the private sector has some accountability in going down this path. But do you see anything that we can do? You know, we can put parameters, I get that. But we also, to me, have to do something to allow people to say I want to. Do you have any thoughts on that? Ms. SINGH. Thank you so much for that question because I certainly do have many thoughts on it. But one that I would love to reemphasize here is the companies we work with right now, they are recognizing the importance of transparency reporting and disclosures because that transparency is helping them build trust with the consumers and truly get that competitive advantage. But one of the reasons that these companies are not sharing these transparency reports broadly is because they don’t know how their competitors or others in the market stack up. Mr. FEENSTRA. Yes. Ms. SINGH. So at Credo AI, we are big proponents of, you know, the government coming up with standards that can not only mandate disclosures, but I think we will—it will propel a thoughtful benchmarking across these AI applications.


Mr. FEENSTRA. Yes, I mean, that’s a great thought, that you can be protective of your data, but if we say—if the government says, wait a minute, this is universal data that everybody could use, that can be a gamechanger a little bit. Again, ethics plays a vital role in that. Thank you. With that, I am out of time. Thank you. Chairwoman STEVENS. Yes. And we’ll hear from Dr. McNerney for 5 minutes of additional questioning. Mr. MCNERNEY. Well, good. Now that you’re back, I can thank you for having this hearing. It’s great. And again, I want to thank the witnesses. Ms. Singh, I feel bad about leaving you out first round, but I have two big concerns about AI, and I’ll throw the first one to you. The first one is—and machine learning, which has really overtaken AI—that AI will take over an increasing amount of decisionmaking from humans, pushing us more and more into irrelevance and sort of dehumanizing us. What can we do to prevent that, you know, pushing us aside with the decisionmaking capability of AI? Ms. SINGH. Thank you so much for that question. You know, with any disruptive technology, be it AI, we see there are huge economic impacts. And we see that in, you know, changes in the workforce, the role that humans will play in the future of work. But as we step back and think about it, I think we have a great opportunity right now to invest more in education. As Dr. Isbell mentioned, I’m excited his son is going to be getting educated on AI because I think that’s going to be critical. But thinking about reskilling and upskilling in this age of AI is going to give us a competitive edge. Mr. MCNERNEY. So that’s a great answer, educate more people so that we can utilize the AI in a more productive way than letting it make decisions for us. That’s basically what you’re saying, right? Ms. SINGH. Yes, absolutely. Mr. MCNERNEY. Very good. OK. Thank you. The next one, I guess I’ll go to Dr. Isbell again. One of my other concerns about AI is that it’s being used to monitor humans and our behaviors, our habits, especially either in autocratic nations or by businesses that would like to be able to influence our decisionmaking in terms of the way we spend our money. What do you think is a way to mitigate that issue? Dr. ISBELL. So first off, you’re right, that’s exactly what happened, and it’s been happening for a long time. Black Friday is a thing that happens because it gets people to buy things, right, so this is hardly new. What has happened is computing and AI have made it much more efficient and easier to deploy.


My answer to that—I have two. One is that it’s education. It’s making people aware of what’s happening and allowing them to make reasonable decisions. The other is that there are policies and there are technical mechanisms that we can employ, that we can encourage people to develop and to deploy, that will allow people to understand what is happening to them. You are in fact being studied. Your data is in fact predicting this behavior, and you’re doing this. And giving people the tools, not just the education they learn on their own but the technical tools that allow others to monitor the monitors, that is a place that has a lot of potential and not one that we’ve invested a great deal into. Mr. MCNERNEY. Well, the French postmodernists in the 1930’s and 1940’s were sort of warning us that the government would be getting more and more information about us and being able to use that information to control our political decisionmaking as individuals, and that’s sort of what I was worried about. And now what we’re seeing with social media is that some of these companies are using information to direct people into political bubbles that may advocate violence or other sorts of extreme behavior. And I think that’s one of the issues I’m having: how do we tamp that down? Do you have any recommendations, Mr. Crenshaw, on how we could go about doing that? Mr. CRENSHAW. Well, I think anytime we’re looking at the use of algorithms, we have to look at it from a risk-based approach. And I think we also need to realize that there are some benefits also to artificial intelligence that we’ve seen. And, you know, one of the things I wanted to note is that what we’ve learned is that the more people know about AI, the less scared or concerned they are about it. And I think that’s why education about artificial intelligence is so important. But companies also need to build ethics and ethical decisionmaking into their AI as well. And we see companies that are leading in this space. Mr. MCNERNEY. But it’s hard to regulate that. And I’m thrilled that we’re hearing about companies hiring ethicists, but how do we get that as a part of the corporate mindset that, you know, we need to do this in the future? It’s not something we can regulate, I don’t think. Mr. CRENSHAW. I agree that the C-suite needs to be involved. It needs to be part of corporate culture to build ethics into artificial intelligence. But at the same time, I think the work we’re seeing at agencies like NIST is getting us in the right direction toward where we want to be. Mr. MCNERNEY. Thank you. I yield back.


Chairwoman STEVENS. Thank you. And with that, I don’t believe we have any other questions. So we’re going to bring the hearing to a close. Do we have one more? Oh, did Baird come back? OK, hold on. I’m not closing. Where is he? Dr. Baird? He’s not coming? Well, we got questions for the record, too. OK. We’re prepared to close. All right. Well, we’re prepared to close. But honestly, we’re not going to close the door on the conversation because this has only brought up more questions. And in fact, we could probably have a hearing on a couple of different subsets that we discussed today. I believe with this Committee, and as Mr. Gonzalez who, you know, we have been so privileged to work with during his couple of terms here in the Congress, mentioned, you know, taking research applications, commercializing them, recognizing where our economy filters in. We also recognize that we’re in a leadership moment, and this is—you know, we have been deeply privileged to have Dr. McNerney through his tenure, his mighty tenure in the Congress on this Committee, and he’s so, so dedicated to this Committee, but this is a leadership moment for the United States of America. And we are going to shape how the world’s going to go on this. We want to be able to shape how the world’s going to go, and we’ve got to be prepared to do some of the deeper work. It’s not just the question of harm, but it’s also the questions of, you know, the meta challenges that come before us that are somewhat brought on by AI. It’s forcing us to be more collaborative. It is forcing us to come together in ways that we didn’t last century. I left out that I was working at a digital research lab before coming to this body, and we did the taxonomy, Mr. Crenshaw, on the IoT (Internet of Things) jobs, you know, how companies are going to have to hire. We did this in partnership with Manpower Group and a host of other industry and academic partners. Digital ethicists came up. That was one of the job profiles we came up with. That was just 5, 6 years ago. And I mentioned the Turing test, and we were so possessed when I was in school by the Turing test, like we thought that was going to be the question. And Mr. Sherman sort of got to that in his questions, you know, are we worried about replacing humanity? No, we are talking about what Dr. Isbell said in his testimony, culture, changing culture and how we influence culture through the laws we pass in this body. And we have been addressing some meta challenges. I didn’t have the privilege of having Mr. Feenstra here last term, but I know we would have been working together on the trade deal, the USMCA (United States-Mexico-Canada Agreement). You had unions and the Chamber come together to pass USMCA. This time around, we passed the Inflation Reduction Act. For the first


time ever, you know, we’re dealing with climate. You’ve got the environmental groups and the industry partners, my automakers saying they want the same thing. So these digital applications, these complex artificial intelligence systems that we’re putting into place, they’re asking us to come together. So, Ms. Tabassi, I—you know, we’re going to come back to you because we just—we think NIST solves all of our problems, the mighty agency that can with a little. And we’re excited about that, and we—and we’re going to come visit you and we’re going to talk about how you’re stitching together with your risk management what Dr. Isbell said and what Ms. Singh is saying. Who’s at the table? Who’s at the table? You know, we solve some problems in ones and twos, and then we look at some of the broader challenges. But overall, we’re wildly optimistic. We’re working on the vision, and we’re excited that we had this time together today. Hopefully, the rest of the Congress tunes in on CSPAN later. But with that, we’re going to close it. We’re going to leave the record open for a couple of weeks for additional questions for the record, and our witnesses are excused. Thank you. [Whereupon, at 12:29 p.m., the Subcommittee was adjourned.]


Appendix I: Answers to Post-Hearing Questions

Responses by Ms. Elham Tabassi


Responses by Mr. Jordan Crenshaw


Appendix II: Additional Material for the Record

Engineered Intelligence: Creating a Successor Species, Congressman Brad Sherman, Statement for the Committee on Science, Space, & Technology, May 17, 2019

I believe that the impact of science on this century will be far greater than the enormous impact science had on the last century. As futurist Christine Peterson notes: “If someone is describing the future 30 years from now and they paint a picture that seems like it is from a science fiction movie, they might be wrong. But, if someone is describing the future a generation from now and they paint a picture that doesn’t look like a science fiction movie, then you know they are wrong.” We are going to live in a science fiction movie, we just don’t know which one. There is one issue that I think is more explosive than even the spread of nuclear weapons: engineered intelligence. By that I mean the efforts of computer engineers and bio-engineers who may create intelligence beyond that of a human being. In testimony at the House Science Committee,92 the consensus of experts testifying was that in roughly 25 years we would have a computer that passed the Turing Test93 and, more importantly, exceeded human intelligence. As we develop more intelligent computers, we will find them useful tools in creating ever more intelligent computers, a positive feedback loop. I don’t know whether we will create the maniacal Hal from 2001, or the earnest Data from Star Trek ‒ or perhaps both. There are those who say don’t worry, even if a computer is intelligent and malevolent ‒ it is in a box and it cannot affect the world. But I believe that there are those of our species who would sell hands to Beelzebub in return for a good stock tip.

92 On April 9, 2003, the U.S. House Committee on Science and Technology held a hearing titled “The Societal Implications of Nanotechnology.”
93 If a human receives a text message and cannot determine if it was composed by a computer or a human, then the computer has passed the Turing Test.


I do draw solace from the fact that just because a computer is intelligent, or even self-aware, this does not mean that it is ambitious. By ambitious, I mean possessing a survival instinct together with a desire to affect the environment so as to ensure survival, and often a desire to propagate or expand. My washing machine does not seem to care whether I turn it off or not. My pet mouse does seem to care. So even a computer possessing great intelligence may simply have no ambition, survival instinct, or interest in affecting the world. DARPA94 is the government agency on the cutting edge of supercomputer research. I have urged DARPA to develop computer systems designed to maximize the computer’s utility, while avoiding self-awareness, or at least ambition. Bio-engineers may be able to start with human DNA and create a 2,000 pound mammal with a 300 pound brain designed to beat your grandkids on the LSAT. No less troubling, they might start with canine DNA and create a mammal with sub-human intelligence, and no civil rights. DNA is inherently ambitious. Those microbes which didn’t seek to survive or replicate, didn’t. Birds seem to care whether they or their progeny survive, and they seek to affect their environment to achieve that survival. In any case, you have the bio-engineers and the computer engineers both working toward new levels of intelligence. I believe in our lifetime we will see new species possessing intelligence which surpasses our own. The last time a new higher level of intelligence arose on this planet was roughly 50,000 years ago. It was our own ancestors, who then said hello to the previously most intelligent species, Neanderthals. It did not work out so well for the Neanderthals. “Will our successors be carbon-based or silicon-based?” I used to view this as a contest between the bio-engineers and the computer engineers (or if you use the cool new lingo, wet nanotechnology and dry nanotechnology), in an effort to develop a new species of superior intelligence. I felt that the last decision that humans would make would be

94 The Defense Advanced Research Projects Agency (DARPA).


whether our successors are carbon-based or silicon-based:95 the product of bioengineering or of computer engineering. Now I believe we are most likely to see combinations that will involve nature, computer engineering, and bio-engineering: humans with pharmaceutical intelligence boosters; DNA enhancements; computer-chip implants; or all three. First, this will be used to cure disease, then to enhance human capacity. The enhanced-human will precede the trans-human. Now how should we react to all of this? It is important that we benefit from science, even as we consider its more troubling implications. I chaired the House Subcommittee on Nonproliferation, which deals with the only other technologies that pose an existential threat to humankind, namely the proliferation of nuclear and biological weapons. The history of nuclear technology is instructive. On August 2, 1939, Einstein sent Roosevelt a letter saying a nuclear weapon was possible; six years later, nuclear technology literally exploded onto the world scene. Only after society saw the negative effects of nuclear technology did we see the prospects for nuclear power and nuclear medicine. The future of engineered intelligence will be different. The undeniable benefits of computer and DNA research will arrive long before the problematic possibilities. Their introduction will be gradual, not explosive. Fortunately, we will have far more than six years to consider the implications ‒ unless we choose to squander the next few decades. My fear is that our philosophers, ethicists and society at large, will ignore the issues that will inevitably present themselves until they actually present themselves. And these issues require more than a few years of thought.96 I am confident that if we plan ahead we can obtain the utility of supercomputers, and the benefits of bio-engineering, without creating new levels of intelligence. We can then pause and decide whether we in fact wish to create a new intelligent species or two. Finally, I would quote Oliver Wendell Holmes, who said 100 years ago, “I think it not improbable that man, like the grub that prepares a chamber for the

95 Despite the fact that supercomputers may not use chips with silicon substrate, for these purposes, we’ll still refer to computer chips as “silicon.”
96 This issue is discussed in “Brave New World War” by Jamie Metzl. Published in Issue 8, Spring 2008, Democracy: A Journal of Ideas.


winged thing it never has seen but is to be ‒ that man may have cosmic destinies that he does not understand.”97 Likewise, it is possible that our grandchildren ‒ or should I say “our successors” ‒ will have less resemblance to us than a butterfly has to a caterpillar. Our best minds in philosophy, science, ethics and theology ought to be focused on this issue. Now.

97 Oliver Wendell Holmes. “Law and the Court,” speech at the Harvard Law School Association of New York, 15 February 1913.

Chapter 3

Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, October 2022*

White House Office of Science and Technology Policy (OSTP)

* This is an edited, reformatted and augmented version of a white paper published by the White House Office of Science and Technology Policy, October 2022.

Foreword

Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public. Too often, these tools are used to limit our opportunities and prevent our access to critical resources or services. These problems are well documented. In America and around the world, systems supposed to help with patient care have proven unsafe, ineffective, or biased. Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity— often without their knowledge or consent. These outcomes are deeply harmful—but they are not inevitable. Automated systems have brought about extraordinary benefits, from technology that helps farmers grow food more efficiently and computers that predict storm paths, to algorithms that can identify diseases in patients. These tools now drive important decisions across sectors, while data is helping to



revolutionize global industries. Fueled by the power of American innovation, these tools hold the potential to redefine every part of our society and make life better for everyone. This important progress must not come at the price of civil rights or democratic values, foundational American principles that President Biden has affirmed as a cornerstone of his Administration. On his first day in office, the President ordered the full Federal government to work to root out inequity, embed fairness in decision-making processes, and affirmatively advance civil rights, equal opportunity, and racial justice in America.1 The President has spoken forcefully about the urgent challenges posed to democracy today and has regularly called on people of conscience to act to preserve civil rights— including the right to privacy, which he has called “the basis for so many more rights that we have come to take for granted that are ingrained in the fabric of this country.”2 To advance President Biden’s vision, the White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. The Blueprint for an AI Bill of Rights is a guide for a society that protects all people from these threats—and uses technologies in ways that reinforce our highest values. Responding to the experiences of the American public, and informed by insights from researchers, technologists, advocates, journalists, and policymakers, this framework is accompanied by a technical companion—a handbook for anyone seeking to incorporate these protections into policy and practice, including detailed steps toward actualizing these principles in the technological design process. These principles help provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs.

1 The Executive Order on Advancing Racial Equity and Support for Underserved Communities Through the Federal Government. https://www.whitehouse.gov/briefing-room/presidential-actions/2021/01/20/executive-order-advancing-racial-equity-and-support-for-underserved-communities-through-the-federal-government/.
2 The White House. Remarks by President Biden on the Supreme Court Decision to Overturn Roe v. Wade. Jun. 24, 2022. https://www.whitehouse.gov/briefing-room/speeches-remarks/2022/06/24/remarks-by-president-biden-on-the-supreme-court-decision-to-overturn-roe-v-wade/.


About This Framework The Blueprint for an AI Bill of Rights is a set of five principles and associated practices to help guide the design, use, and deployment of automated systems to protect the rights of the American public in the age of artificial intelligence. Developed through extensive consultation with the American public, these principles are a blueprint for building and deploying automated systems that are aligned with democratic values and protect civil rights, civil liberties, and privacy. The Blueprint for an AI Bill of Rights includes this Foreword, the five principles, notes on Applying the Blueprint for an AI Bill of Rights, and a Technical Companion that gives concrete steps that can be taken by many kinds of organizations—from governments at all levels to companies of all sizes—to uphold these values. Experts from across the private sector, governments, and international consortia have published principles and frameworks to guide the responsible use of automated systems; this framework provides a national values statement and toolkit that is sector-agnostic to inform building these protections into policy, practice, or the technological design process. Where existing law or policy—such as sector-specific privacy laws and oversight requirements—do not already provide guidance, the Blueprint for an AI Bill of Rights should be used to inform policy decisions.

Listening to the American Public The White House Office of Science and Technology Policy has led a yearlong process to seek and distill input from people across the country—from impacted communities and industry stakeholders to technology developers and other experts across fields and sectors, as well as policymakers throughout the Federal government—on the issue of algorithmic and data-driven harms and potential remedies. Through panel discussions, public listening sessions, meetings, a formal request for information, and input to a publicly accessible and widely-publicized email address, people throughout the United States, public servants across Federal agencies, and members of the international community spoke up about both the promises and potential harms of these technologies, and played a central role in shaping the Blueprint for an AI Bill of Rights. The core messages gleaned from these discussions include that AI has transformative potential to improve Americans’ lives, and that preventing


the harms of these technologies is both necessary and achievable. The Appendix includes a full list of public engagements.

Blueprint for an AI Bill of Rights Safe and Effective Systems You Should Be Protected from Unsafe or Ineffective Systems Automated systems should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system. Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards. Outcomes of these protective measures should include the possibility of not deploying the system or removing a system from use. Automated systems should not be designed with an intent or reasonably foreseeable possibility of endangering your safety or the safety of your community. They should be designed to proactively protect you from harms stemming from unintended, yet foreseeable, uses or impacts of automated systems. You should be protected from inappropriate or irrelevant data use in the design, development, and deployment of automated systems, and from the compounded harm of its reuse. Independent evaluation and reporting that confirms that the system is safe and effective, including reporting of steps taken to mitigate potential harms, should be performed and the results made public whenever possible.

Algorithmic Discrimination Protections You Should Not Face Discrimination by Algorithms and Systems Should Be Used and Designed in an Equitable Way Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any


other classification protected by law. Depending on the specific circumstances, such algorithmic discrimination may violate legal protections. Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way. This protection should include proactive equity assessments as part of the system design, use of representative data and protection against proxies for demographic features, ensuring accessibility for people with disabilities in design and development, pre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight. Independent evaluation and plain language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, should be performed and made public whenever possible to confirm these protections.
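To make the idea of pre-deployment and ongoing disparity testing concrete, the following is a minimal, illustrative sketch in Python; it is not part of the Blueprint itself. The function names, the sample decision data, the group labels, and the four-fifths-style screening threshold are assumptions chosen only for demonstration; a real algorithmic impact assessment would rely on validated metrics, statistical significance testing, and domain and legal review.

# Illustrative sketch only: a minimal disparity screen for a hypothetical
# binary decision system. All names, thresholds, and data are invented for
# demonstration and are not drawn from the Blueprint.
from collections import defaultdict

def selection_rates(decisions, groups):
    """Compute the favorable-outcome rate for each demographic group.

    decisions: list of 0/1 outcomes (1 = favorable, e.g., application approved)
    groups:    list of group labels, aligned with decisions
    """
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for outcome, group in zip(decisions, groups):
        totals[group] += 1
        favorable[group] += outcome
    return {g: favorable[g] / totals[g] for g in totals}

def disparity_report(decisions, groups, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (a four-fifths-style screening heuristic, used here
    purely as an example metric)."""
    rates = selection_rates(decisions, groups)
    reference = max(rates.values())
    return {
        g: {"rate": round(r, 3),
            "ratio": round(r / reference, 3),
            "flagged": r / reference < threshold}
        for g, r in rates.items()
    }

if __name__ == "__main__":
    # Hypothetical audit data: model decisions and self-reported group labels.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0]
    groups = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]
    for group, stats in disparity_report(decisions, groups).items():
        print(group, stats)

In practice, a numeric screen of this kind would be only one input to the proactive equity assessments and plain-language algorithmic impact reporting described above.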

Data Privacy You Should Be Protected from Abusive Data Practices via Built-In Protections and You Should Have Agency over How Data About You Is Used You should be protected from violations of privacy through design choices that ensure such protections are included by default, including ensuring that data collection conforms to reasonable expectations and that only data strictly necessary for the specific context is collected. Designers, developers, and deployers of automated systems should seek your permission and respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways and to the greatest extent possible; where not possible, alternative privacy by design safeguards should be used. Systems should not employ user experience and design decisions that obfuscate user choice or burden users with defaults that are privacy invasive. Consent should only be used to justify collection of data in cases where it can be appropriately and meaningfully given. Any consent requests should be brief, be understandable in plain language, and give you agency over data collection and the specific context of use; current hard-to-understand notice-and-choice practices for broad uses of data should be changed. Enhanced protections and restrictions for data and inferences related to sensitive domains, including health, work, education, criminal justice, and finance, and for data pertaining to youth should put you first. In sensitive domains, your data and related inferences


should only be used for necessary functions, and you should be protected by ethical review and use prohibitions. You and your communities should be free from unchecked surveillance; surveillance technologies should be subject to heightened oversight that includes at least pre-deployment assessment of their potential harms and scope limits to protect privacy and civil liberties. Continuous surveillance and monitoring should not be used in education, work, housing, or in other contexts where the use of such surveillance technologies is likely to limit rights, opportunities, or access. Whenever possible, you should have access to reporting that confirms your data decisions have been respected and provides an assessment of the potential impact of surveillance technologies on your rights, opportunities, or access.

Notice and Explanation You Should Know That an Automated System Is Being Used and Understand How and Why It Contributes to Outcomes That Impact You Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible. Such notice should be kept up-to-date and people impacted by the system should be notified of significant use case or key functionality changes. You should know how and why an outcome impacting you was determined by an automated system, including when the automated system is not the sole input determining the outcome. Automated systems should provide explanations that are technically valid, meaningful and useful to you and to any operators or others who need to understand the system, and calibrated to the level of risk based on the context. Reporting that includes summary information about these automated systems in plain language and assessments of the clarity and quality of the notice and explanations should be made public whenever possible.


Human Alternatives, Consideration, and Fallback You Should Be Able to Opt out, Where Appropriate, and Have Access to a Person Who Can Quickly Consider and Remedy Problems You Encounter You should be able to opt out from automated systems in favor of a human alternative, where appropriate. Appropriateness should be determined based on reasonable expectations in a given context and with a focus on ensuring broad accessibility and protecting the public from especially harmful impacts. In some cases, a human or other alternative may be required by law. You should have access to timely human consideration and remedy by a fallback and escalation process if an automated system fails, it produces an error, or you would like to appeal or contest its impacts on you. Human consideration and fallback should be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and should not impose an unreasonable burden on the public. Automated systems with an intended use within sensitive domains, including, but not limited to, criminal justice, employment, education, and health, should additionally be tailored to the purpose, provide meaningful access for oversight, include training for any people interacting with the system, and incorporate human consideration for adverse or high-risk decisions. Reporting that includes a description of these human governance processes and assessment of their timeliness, accessibility, outcomes, and effectiveness should be made public whenever possible.

Applying the Blueprint for an AI Bill of Rights While many of the concerns addressed in this framework derive from the use of AI, the technical capabilities and specific definitions of such systems change with the speed of innovation, and the potential harms of their use occur even with less technologically sophisticated tools. Thus, this framework uses a two-part test to determine what systems are in scope. This framework applies to (1) automated systems that (2) have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services. These rights, opportunities, and access to critical resources or services should be enjoyed equally and be fully protected, regardless of the changing role that automated systems may play in our lives.


This framework describes protections that should be applied with respect to all automated systems that have the potential to meaningfully impact individuals' or communities' exercise of:

Rights, Opportunities, or Access Civil rights, civil liberties, and privacy, including freedom of speech, voting, and protections from discrimination, excessive punishment, unlawful surveillance, and violations of privacy and other freedoms in both public and private sector contexts; Equal opportunities, including equitable access to education, housing, credit, employment, and other programs; or, Access to critical resources or services, such as healthcare, financial services, safety, social services, non-deceptive information about goods and services, and government benefits. A list of examples of automated systems for which these principles should be considered is provided in the Appendix. The Technical Companion, which follows, offers supportive guidance for any person or entity that creates, deploys, or oversees automated systems. Considered together, the five principles and associated practices of the Blueprint for an AI Bill of Rights form an overlapping set of backstops against potential harms. This purposefully overlapping framework, when taken as a whole, forms a blueprint to help protect the public from harm. The measures taken to realize the vision set forward in this framework should be proportionate with the extent and nature of the harm, or risk of harm, to people's rights, opportunities, and access.

Relationship to Existing Law and Policy The Blueprint for an AI Bill of Rights is an exercise in envisioning a future where the American public is protected from the potential harms, and can fully enjoy the benefits, of automated systems. It describes principles that can help ensure these protections. Some of these protections are already required by the U.S. Constitution or implemented under existing U.S. laws. For example,


government surveillance, and data search and seizure are subject to legal requirements and judicial oversight. There are Constitutional requirements for human review of criminal investigative matters and statutory requirements for judicial review. Civil rights laws protect the American people against discrimination.

There are regulatory safety requirements for medical devices, as well as sector-, population-, or technology-specific privacy and security protections. Ensuring some of the additional protections proposed in this framework would require new laws to be enacted or new policies and practices to be adopted. In some cases, exceptions to the principles described in the Blueprint for an AI Bill of Rights may be necessary to comply with existing law, conform to the practicalities of a specific use case, or balance competing public interests. In particular, law enforcement, and other regulatory contexts may require government actors to protect civil rights, civil liberties, and privacy in a manner consistent with, but using alternate mechanisms to, the specific principles discussed in this framework. The Blueprint for an AI Bill of Rights is meant to assist governments and the private sector in moving principles into practice. The expectations given in the Technical Companion are meant to serve as a blueprint for the development of additional technical standards and practices that should be tailored for particular sectors and contexts. While existing laws informed the development of the Blueprint for an AI Bill of Rights, this framework does not detail those laws beyond providing them as examples, where appropriate, of existing protective measures. This framework instead shares a broad, forward-leaning vision of recommended principles for automated system development and use to inform private and public involvement with these systems where they have the potential to meaningfully impact rights, opportunities, or access. Additionally, this framework does not analyze or take a position on legislative and regulatory proposals in municipal, state, and federal government, or those in other countries. We have seen modest progress in recent years, with some state and local governments responding to these problems with legislation, and some courts


extending longstanding statutory protections to new and emerging technologies. There are companies working to incorporate additional protections in their design and use of automated systems, and researchers developing innovative guardrails. Advocates, researchers, and government organizations have proposed principles for the ethical use of AI and other automated systems. These include the Organization for Economic Cooperation and Development’s (OECD’s) 2019 Recommendation on Artificial Intelligence, which includes principles for responsible stewardship of trustworthy AI and which the United States adopted, and Executive Order 13960 on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, which sets out principles that govern the federal government’s use of AI. The Blueprint for an AI Bill of Rights is fully consistent with these principles and with the direction in Executive Order 13985 on Advancing Racial Equity and Support for Underserved Communities Through the Federal Government. These principles find kinship in the Fair Information Practice Principles (FIPPs), derived from the 1973 report of an advisory committee to the U.S. Department of Health, Education, and Welfare, Records, Computers, and the Rights of Citizens.3 While there is no single, universal articulation of the FIPPs, these core principles for managing information about individuals have been incorporated into data privacy laws and policies across the globe.4 The Blueprint for an AI Bill of Rights embraces elements of the FIPPs that are particularly relevant to automated systems, without articulating a specific set of FIPPs or scoping applicability or the interests served to a single particular domain, like privacy, civil rights and civil liberties, ethics, or risk management. The Technical Companion builds on this prior work to provide practical next steps to move these principles into practice and promote common approaches that allow technological innovation to flourish while protecting people from harm.

3 U.S. Dept. of Health, Educ. & Welfare, Report of the Sec’y’s Advisory Comm. on Automated Pers. Data Sys., Records, Computers, and the Rights of Citizens (July 1973). https://www.justice.gov/opcl/docs/rec-com-rights.pdf.
4 See, e.g., Office of Mgmt. & Budget, Exec. Office of the President, Circular A-130, Managing Information as a Strategic Resource, app. II § 3 (July 28, 2016); Org. of Econ. Co-Operation & Dev., Revision of the Recommendation of the Council Concerning Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data, Annex Part Two (June 20, 2013). https://one.oecd.org/document/C(2013)79/en/pdf.


Definitions Algorithmic Discrimination “Algorithmic discrimination” occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. Depending on the specific circumstances, such algorithmic discrimination may violate legal protections. Throughout this framework the term “algorithmic discrimination” takes this meaning (and not a technical understanding of discrimination as distinguishing between items). Automated System An "automated system" is any system, software, or process that uses computation as whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities. Automated systems include, but are not limited to, systems derived from machine learning, statistics, or other data processing or artificial intelligence techniques, and exclude passive computing infrastructure. “Passive computing infrastructure” is any intermediary technology that does not influence or determine the outcome of a decision, make or aid in decisions, inform policy implementation, or collect data or observations, including web hosting, domain registration, networking, caching, data storage, or cybersecurity. Throughout this framework, automated systems that are considered in scope are only those that have the potential to meaningfully impact individuals’ or communities’ rights, opportunities, or access. Communities “Communities” include: neighborhoods; social network connections (both online and offline); families (construed broadly); people connected by affinity, identity, or shared traits; and formal organizational ties. This includes Tribes, Clans, Bands, Rancherias, Villages, and other Indigenous communities. AI and other data-driven automated systems most directly collect data on, make inferences about, and may cause harm to individuals. But the overall magnitude of their impacts may be most readily visible at the level of


communities. Accordingly, the concept of community is integral to the scope of the Blueprint for an AI Bill of Rights. United States law and policy have long employed approaches for protecting the rights of individuals, but existing frameworks have sometimes struggled to provide protections when effects manifest most clearly at a community level. For these reasons, the Blueprint for an AI Bill of Rights asserts that the harms of automated systems should be evaluated, protected against, and redressed at both the individual and community levels.

Equity “Equity” means the consistent and systematic fair, just, and impartial treatment of all individuals. Systemic, fair, and just treatment must take into account the status of individuals who belong to underserved communities that have been denied such treatment, such as Black, Latino, and Indigenous and Native American persons, Asian Americans and Pacific Islanders and other persons of color; members of religious minorities; women, girls, and nonbinary people; lesbian, gay, bisexual, transgender, queer, and intersex (LGBTQI+) persons; older adults; persons with disabilities; persons who live in rural areas; and persons otherwise adversely affected by persistent poverty or inequality. Rights, Opportunities, or Access “Rights, opportunities, or access” is used to indicate the scoping of this framework. It describes the set of: civil rights, civil liberties, and privacy, including freedom of speech, voting, and protections from discrimination, excessive punishment, unlawful surveillance, and violations of privacy and other freedoms in both public and private sector contexts; equal opportunities, including equitable access to education, housing, credit, employment, and other programs; or, access to critical resources or services, such as healthcare, financial services, safety, social services, non-deceptive information about goods and services, and government benefits. Sensitive Data Data and metadata are sensitive if they pertain to an individual in a sensitive domain (defined below); are generated by technologies used in a sensitive domain; can be used to infer data from a sensitive domain or sensitive data about an individual (such as disability-related data, genomic data, biometric data, behavioral data, geolocation data, data related to interaction with the criminal justice system, relationship history and legal status such as custody


and divorce information, and home, work, or school environmental data); or have the reasonable potential to be used in ways that are likely to expose individuals to meaningful harm, such as a loss of privacy or financial harm due to identity theft. Data and metadata generated by or about those who are not yet legal adults is also sensitive, even if not related to a sensitive domain. Such data includes, but is not limited to, numerical, text, image, audio, or video data.

Sensitive Domains “Sensitive domains” are those in which activities being conducted can cause material harms, including significant adverse effects on human rights such as autonomy and dignity, as well as civil liberties and civil rights. Domains that have historically been singled out as deserving of enhanced data protections or where such enhanced protections are reasonably expected by the public include, but are not limited to, health, family planning and care, employment, education, criminal justice, and personal finance. In the context of this framework, such domains are considered sensitive whether or not the specifics of a system context would necessitate coverage under existing law, and domains and data that are considered sensitive are understood to change over time based on societal norms and context. Surveillance Technology “Surveillance technology” refers to products or services marketed for or that can be lawfully used to detect, monitor, intercept, collect, exploit, preserve, protect, transmit, and/or retain data, identifying information, or communications concerning individuals or groups. This framework limits its focus to both government and commercial use of surveillance technologies when juxtaposed with real-time or subsequent automated analysis and when such systems have a potential for meaningful impact on individuals’ or communities’ rights, opportunities, or access. Underserved Communities The term “underserved communities” refers to communities that have been systematically denied a full opportunity to participate in aspects of economic, social, and civic life, as exemplified by the list in the preceding definition of “equity.”


From Principles to Practice: A Technical Companion to the Blueprint for an AI Bill of Rights

Using This Technical Companion
The Blueprint for an AI Bill of Rights is a set of five principles and associated practices to help guide the design, use, and deployment of automated systems to protect the rights of the American public in the age of artificial intelligence. This technical companion considers each principle in the Blueprint for an AI Bill of Rights and provides examples and concrete steps for communities, industry, governments, and others to take in order to build these protections into policy, practice, or the technological design process. Taken together, the technical protections and practices laid out in the Blueprint for an AI Bill of Rights can help guard the American public against many of the potential and actual harms identified by researchers, technologists, advocates, journalists, policymakers, and communities in the United States and around the world. This technical companion is intended to be used as a reference by people across many circumstances – anyone impacted by automated systems, and anyone developing, designing, deploying, evaluating, or making policy to govern the use of an automated system. Each principle is accompanied by three supplemental sections:
1. Why this principle is important: This section provides a brief summary of the problems that the principle seeks to address and protect against, including illustrative examples.
2. What should be expected of automated systems: The expectations for automated systems are meant to serve as a blueprint for the development of additional technical standards and practices that should be tailored for particular sectors and contexts. This section outlines practical steps that can be implemented to realize the vision of the Blueprint for an AI Bill of Rights. The expectations laid out often mirror existing practices for technology development, including pre-deployment testing, ongoing monitoring, and governance structures for automated systems, but also go further to address unmet needs for change and offer concrete directions for how those changes can be made.




Expectations about reporting are intended for the entity developing or using the automated system. The resulting reports can be provided to the public, regulators, auditors, industry standards groups, or others engaged in independent review, and should be made public as much as possible consistent with law, regulation, and policy, and noting that intellectual property, law enforcement, or national security considerations may prevent public release. Where public reports are not possible, the information should be provided to oversight bodies and privacy, civil liberties, or other ethics officers charged with safeguarding individuals’ rights. These reporting expectations are important for transparency, so the American people can have confidence that their rights, opportunities, and access as well as their expectations about technologies are respected.
3. How these principles can move into practice: This section provides real-life examples of how these guiding principles can become reality, through laws, policies, and practices. It describes practical technical and sociotechnical approaches to protecting rights, opportunities, and access. The examples provided are not critiques or endorsements, but rather are offered as illustrative cases to help provide a concrete vision for actualizing the Blueprint for an AI Bill of Rights. Effectively implementing these processes requires the cooperation of and collaboration among industry, civil society, researchers, policymakers, technologists, and the public.

Safe and Effective Systems You Should Be Protected from Unsafe or Ineffective Systems Automated systems should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system. Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards. Outcomes of these protective measures should


include the possibility of not deploying the system or removing a system from use. Automated systems should not be designed with an intent or reasonably foreseeable possibility of endangering your safety or the safety of your community. They should be designed to proactively protect you from harms stemming from unintended, yet foreseeable, uses or impacts of automated systems. You should be protected from inappropriate or irrelevant data use in the design, development, and deployment of automated systems, and from the compounded harm of its reuse. Independent evaluation and reporting that confirms that the system is safe and effective, including reporting of steps taken to mitigate potential harms, should be performed and the results made public whenever possible.

Why This Principle Is Important This section provides a brief summary of the problems which the principle seeks to address and protect against, including illustrative examples.

While technologies are being deployed to solve problems across a wide array of issues, our reliance on technology can also lead to its use in situations where it has not yet been proven to work—either at all or within an acceptable range of error. In other cases, technologies do not work as intended or as promised, causing substantial and unjustified harm. Automated systems sometimes rely on data from other systems, including historical data, allowing irrelevant information from past decisions to infect decision-making in unrelated situations. In some cases, technologies are purposefully designed to violate the safety of others, such as technologies designed to facilitate stalking; in other cases, intended or unintended uses lead to unintended harms. Many of the harms resulting from these technologies are preventable, and actions are already being taken to protect the public. Some companies have put in place safeguards that have prevented harm from occurring by ensuring that key development decisions are vetted by an ethics review; others have identified and mitigated harms found through pre-deployment testing and ongoing monitoring processes. Governments at all levels have existing public consultation processes that may be applied when considering the use of new automated systems, and existing product development and testing practices already protect the American public from many potential harms. Still, these kinds of practices are deployed too rarely and unevenly. Expanded, proactive protections could build on these existing practices,


increase confidence in the use of automated systems, and protect the American public. Innovators deserve clear rules of the road that allow new ideas to flourish, and the American public deserves protections from unsafe outcomes. All can benefit from assurances that automated systems will be designed, tested, and consistently confirmed to work as intended, and that they will be proactively protected from foreseeable unintended harmful outcomes. 








A proprietary model was developed to predict the likelihood of sepsis in hospitalized patients and was implemented at hundreds of hospitals around the country. An independent study showed that the model predictions underperformed relative to the designer’s claims while also causing ‘alert fatigue’ by falsely alerting likelihood of sepsis.5 On social media, Black people who quote and criticize racist messages have had their own speech silenced when a platform’s automated moderation system failed to distinguish this “counter speech” (or other critique and journalism) from the original hateful messages to which such speech responded.6 A device originally developed to help people track and find lost items has been used as a tool by stalkers to track victims’ locations in violation of their privacy and safety. The device manufacturer took steps after release to protect people from unwanted tracking by alerting people on their phones when a device is found to be moving with them over time and also by having the device make an occasional noise, but not all phones are able to receive the notification and the devices remain a safety concern due to their misuse.7 An algorithm used to deploy police was found to repeatedly send police to neighborhoods they regularly visit, even if those neighborhoods were not the ones with the highest crime rates. These incorrect crime predictions were the result of a feedback loop

Andrew Wong et al. External validation of a widely implemented proprietary sepsis prediction model in hospitalized patients. JAMA Intern Med. 2021; 181(8):1065-1070. doi:10.1001/jamainternmed.2021.2626. 6 Jessica Guynn. Facebook while black: Users call it getting 'Zucked,' say talking about racism is censored as hate speech. USA Today. Apr. 24, 2019. https://www.usatoday. com/story/news/2019/04/24/facebook-while-black-zucked-users-say-they-get-blockedracism-discussion/2859593002/. 7 See, e.g., Michael Levitt. AirTags are being used to track people and cars. Here's what is being done about it. NPR. Feb. 18, 2022. https://www.npr.org/2022/02/18/1080944193/appleairtags-theft-stalking-privacy-tech; Samantha Cole. Police Records Show Women Are Being Stalked With Apple AirTags Across the Country. Motherboard. Apr. 6, 2022. https://www.vice.com/en/article/y3vj3y/apple-airtags-police-reports-stalking-harassment.






generated from the reuse of data from previous arrests and algorithm predictions.8 AI-enabled “nudification” technology that creates images where people appear to be nude—including apps that enable non-technical users to create or alter images of individuals without their consent— has proliferated at an alarming rate. Such technology is becoming a common form of image-based abuse that disproportionately impacts women. As these tools become more sophisticated, they are producing altered images that are increasingly realistic and are difficult for both humans and AI to detect as inauthentic. Regardless of authenticity, the experience of harm to victims of non-consensual intimate images can be devastatingly real—affecting their personal and professional lives, and impacting their mental and physical health.9 A company installed AI-powered cameras in its delivery vans in order to evaluate the road safety habits of its drivers, but the system incorrectly penalized drivers when other cars cut them off or when other events beyond their control took place on the road. As a result, drivers were incorrectly ineligible to receive a bonus.10

What Should Be Expected of Automated Systems The expectations for automated systems are meant to serve as a blueprint for the development of additional technical standards and practices that are tailored for particular sectors and contexts.


Kristian Lum and William Isaac. To Predict and Serve? Significance. Vol. 13, No. 5, p. 14-19. Oct. 7, 2016. https://rss.onlinelibrary.wiley.com/doi/full/10.1111/j.1740-9713.2016. 00960.x; Aaron Sankin, Dhruv Mehrotra, Surya Mattu, and Annie Gilbertson. Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them. The Markup and Gizmodo. Dec. 2, 2021. https://themarkup.org/predictionbias/2021/12/02/crime-prediction-software-promised-to-be-free-of-biases-new-data-showsit-perpetuates-them. 9 Samantha Cole. This Horrifying App Undresses a Photo of Any Woman With a Single Click. Motherboard. June 26, 2019. https://www.vice.com/en/article/kzm59x/deepnude-appcreates-fake-nudes-of-any-woman. 10 Lauren Kaori Gurley. Amazon’s AI Cameras Are Punishing Drivers for Mistakes They Didn’t Make. Motherboard. Sep. 20, 2021. https://www.vice.com/en/article/88npjv/amazons-aicameras-are-punishing-drivers-for-mistakes-they-didnt-make.


In order to ensure that an automated system is safe and effective, it should include safeguards to protect the public from harm in a proactive and ongoing manner; avoid use of data inappropriate for or irrelevant to the task at hand, including reuse that could cause compounded harm; and demonstrate the safety and effectiveness of the system. These expectations are explained below.

Protect the Public from Harm in a Proactive and Ongoing Manner

Consultation
The public should be consulted in the design, implementation, deployment, acquisition, and maintenance phases of automated system development, with emphasis on early-stage consultation before a system is introduced or a large change implemented. This consultation should directly engage diverse impacted communities to consider concerns and risks that may be unique to those communities, or disproportionately prevalent or severe for them. The extent of this engagement and the form of outreach to relevant stakeholders may differ depending on the specific automated system and development phase, but should include subject matter, sector-specific, and context-specific experts as well as experts on potential impacts such as civil rights, civil liberties, and privacy experts. For private sector applications, consultations before product launch may need to be confidential. Government applications, particularly law enforcement applications or applications that raise national security considerations, may require confidential or limited engagement based on system sensitivities and preexisting oversight laws and structures. Concerns raised in this consultation should be documented, and the automated system that developers were proposing to create, use, or deploy should be reconsidered based on this feedback.

Testing
Systems should undergo extensive testing before deployment. This testing should follow domain-specific best practices, when available, for ensuring the technology will work in its real-world context. Such testing should take into account both the specific technology used and the roles of any human operators or reviewers who impact system outcomes or effectiveness; testing should include both automated systems testing and human-led (manual) testing. Testing conditions should mirror as closely as possible the conditions in which the system will be deployed, and new testing may be required for each deployment to account for material differences in conditions from one


deployment to another. Following testing, system performance should be compared with the in-place, potentially human-driven, status quo procedures, with existing human performance considered as a performance baseline for the algorithm to meet pre-deployment, and as a lifecycle minimum performance standard. Decision possibilities resulting from performance testing should include the possibility of not deploying the system.
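This section's expectations are procedural, but the pre-deployment comparison can be made concrete. The following is a minimal, editorial sketch (not part of the Blueprint) of gating deployment on the in-place, human-driven baseline; the metric names, values, and function names are hypothetical.

    # Hypothetical sketch: treat the existing human-driven process as both a
    # pre-deployment bar and a lifecycle minimum performance standard.
    from dataclasses import dataclass

    @dataclass
    class TestResult:
        accuracy: float                # share of correct outcomes on held-out cases
        worst_group_error_rate: float  # highest error rate across assessed subgroups

    def deployment_decision(candidate: TestResult, human_baseline: TestResult) -> str:
        if candidate.accuracy < human_baseline.accuracy:
            return "do not deploy: below the existing human-performance baseline"
        if candidate.worst_group_error_rate > human_baseline.worst_group_error_rate:
            return "do not deploy: worst-group error exceeds the status quo"
        return "eligible for deployment, subject to ongoing monitoring"

    # Illustrative numbers only.
    print(deployment_decision(TestResult(0.91, 0.12), TestResult(0.88, 0.15)))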

Risk Identification and Mitigation Before deployment, and in a proactive and ongoing manner, potential risks of the automated system should be identified and mitigated. Identified risks should focus on the potential for meaningful impact on people’s rights, opportunities, or access and include those to impacted communities that may not be direct users of the automated system, risks resulting from purposeful misuse of the system, and other concerns identified via the consultation process. Assessment and, where possible, measurement of the impact of risks should be included and balanced such that high impact risks receive attention and mitigation proportionate with those impacts. Automated systems with the intended purpose of violating the safety of others should not be developed or used; systems with such safety violations as identified unintended consequences should not be used until the risk can be mitigated. Ongoing risk mitigation may necessitate rollback or significant modification to a launched automated system. Ongoing Monitoring Automated systems should have ongoing monitoring procedures, including recalibration procedures, in place to ensure that their performance does not fall below an acceptable level over time, based on changing real-world conditions or deployment contexts, post-deployment modification, or unexpected conditions. This ongoing monitoring should include continuous evaluation of performance metrics and harm assessments, updates of any systems, and retraining of any machine learning models as necessary, as well as ensuring that fallback mechanisms are in place to allow reversion to a previously working system. Monitoring should take into account the performance of both technical system components (the algorithm as well as any hardware components, data inputs, etc.) and human operators. It should include mechanisms for testing the actual accuracy of any predictions or recommendations generated by a system, not just a human operator’s determination of their accuracy. Ongoing monitoring procedures should include manual, human-led monitoring as a check in the event there are


shortcomings in automated monitoring systems. These monitoring procedures should be in place for the lifespan of the deployed automated system.
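As a minimal illustration (an editorial sketch, not a prescribed procedure) of the kind of ongoing monitoring check described above, with a hypothetical metric, threshold, and fallback decision:

    # Hypothetical sketch: flag when live performance drifts below an acceptable
    # level and recommend recalibration or reversion to a previously working system.
    def monitoring_check(recent_accuracy: list[float],
                         baseline_accuracy: float,
                         tolerance: float = 0.05) -> str:
        if not recent_accuracy:
            return "no recent measurements: manual, human-led review required"
        current = sum(recent_accuracy) / len(recent_accuracy)
        if current < baseline_accuracy - tolerance:
            return "below acceptable level: recalibrate or fall back to the prior system"
        return "within acceptable range: continue monitoring"

    print(monitoring_check([0.83, 0.81, 0.79], baseline_accuracy=0.88))

In practice the monitored measures would also include harm assessments and human-operator performance, as the text notes, not a single accuracy figure.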

Clear Organizational Oversight
Entities responsible for the development or use of automated systems should lay out clear governance structures and procedures. This includes clearly stated governance procedures before deploying the system, as well as responsibility of specific individuals or entities to oversee ongoing assessment and mitigation. Organizational stakeholders including those with oversight of the business process or operation being automated, as well as other organizational divisions that may be affected due to the use of the system, should be involved in establishing governance procedures. Responsibility should rest high enough in the organization that decisions about resources, mitigation, incident response, and potential rollback can be made promptly, with sufficient weight given to risk mitigation objectives against competing concerns. Those holding this responsibility should be made aware of any use cases with the potential for meaningful impact on people’s rights, opportunities, or access as determined based on risk identification procedures. In some cases, it may be appropriate for an independent ethics review to be conducted before deployment.

Avoid Inappropriate, Low-Quality, or Irrelevant Data Use and the Compounded Harm of Its Reuse

Relevant and High-Quality Data
Data used as part of any automated system’s creation, evaluation, or deployment should be relevant, of high quality, and tailored to the task at hand. Relevancy should be established based on research-backed demonstration of the causal influence of the data to the specific use case or justified more generally based on a reasonable expectation of usefulness in the domain and/or for the system design or ongoing development. Relevance of data should not be established solely by appealing to its historical connection to the outcome. High quality and tailored data should be representative of the task at hand and errors from data entry or other sources should be measured and limited. Any data used as the target of a prediction process should receive particular attention to the quality and validity of the predicted outcome or label to ensure the goal of the automated system is appropriately identified and measured. Additionally, justification should be documented for each data attribute and


source to explain why it is appropriate to use that data to inform the results of the automated system and why such use will not violate any applicable laws. In cases of high-dimensional and/or derived attributes, such justifications can be provided as overall descriptions of the attribute generation process and appropriateness.
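One way to keep such per-attribute justifications is a small, machine-readable data dictionary. The sketch below is an editorial illustration with hypothetical field and attribute names; it also records whether an attribute is derived from another model, anticipating the tracking of derived data discussed next.

    from dataclasses import dataclass, field

    @dataclass
    class AttributeRecord:
        name: str
        source: str                    # where the data comes from
        relevancy_justification: str   # research-backed or domain-based rationale
        derived: bool = False          # True if produced by another model or algorithm
        known_quality_issues: list = field(default_factory=list)

    data_dictionary = [
        AttributeRecord(
            name="months_at_current_address",
            source="applicant-provided form field",
            relevancy_justification="documented association with the outcome in domain studies",
        ),
        AttributeRecord(
            name="prior_risk_score",
            source="output of an earlier model",
            relevancy_justification="used only after validation against collateral consequences",
            derived=True,
        ),
    ]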

Derived Data Sources Tracked and Reviewed Carefully
Data that is derived from other data through the use of algorithms, such as data derived or inferred from prior model outputs, should be identified and tracked, e.g., via a specialized type in a data schema. Derived data should be viewed as potentially high-risk inputs that may lead to feedback loops, compounded harm, or inaccurate results. Such sources should be carefully validated against the risk of collateral consequences.

Data Reuse Limits in Sensitive Domains
Data reuse, and especially data reuse in a new context, can result in the spreading and scaling of harms. Data from some domains, including criminal justice data and data indicating adverse outcomes in domains such as finance, employment, and housing, is especially sensitive, and in some cases its reuse is limited by law. Accordingly, such data should be subject to extra oversight to ensure safety and efficacy. Data reuse of sensitive domain data in other contexts (e.g., criminal data reuse for civil legal matters or private sector use) should only occur where use of such data is legally authorized and, after examination, has benefits for those impacted by the system that outweigh identified risks and, as appropriate, reasonable measures have been implemented to mitigate the identified risks. Such data should be clearly labeled to identify contexts for limited reuse based on sensitivity. Where possible, aggregated datasets may be useful for replacing individual-level sensitive data.

Demonstrate the Safety and Effectiveness of the System

Independent Evaluation
Automated systems should be designed to allow for independent evaluation (e.g., via application programming interfaces). Independent evaluators, such as researchers, journalists, ethics review boards, inspectors general, and third-party auditors, should be given access to the system and samples of associated data, in a manner consistent with privacy, security, law, or regulation


(including, e.g., intellectual property law), in order to perform such evaluations. Mechanisms should be included to ensure that system access for evaluation is: provided in a timely manner to the deployment-ready version of the system; trusted to provide genuine, unfiltered access to the full system; and truly independent such that evaluator access cannot be revoked without reasonable and verified justification.
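The evaluation access described above could be exposed as a narrow programmatic interface. The sketch below is a hypothetical, in-process stand-in for such an interface (a deployed version might be an authenticated API); it simply returns unfiltered predictions from the deployment-ready model along with enough metadata for evaluators to verify what they are testing.

    from typing import Callable

    def evaluation_access(model_version: str, predict: Callable[[list], list]):
        # Wrap the deployment-ready model so independent evaluators receive
        # genuine, unfiltered outputs plus minimal provenance information.
        def evaluate(samples: list) -> dict:
            return {"model_version": model_version,
                    "predictions": predict(samples),
                    "filtered": False}
        return evaluate

    # Illustrative stand-in model.
    evaluate = evaluation_access("2022-09-release", predict=lambda xs: [x > 0 for x in xs])
    print(evaluate([-1.0, 0.5, 2.0]))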

Reporting11
Entities responsible for the development or use of automated systems should provide regularly-updated reports that include: an overview of the system, including how it is embedded in the organization’s business processes or other activities, system goals, any human-run procedures that form a part of the system, and specific performance expectations; a description of any data used to train machine learning models or for other purposes, including how data sources were processed and interpreted, a summary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the results of public consultation such as concerns raised and any decisions made due to these concerns; risk identification and management assessments and any steps taken to mitigate potential harms; the results of performance testing including, but not limited to, accuracy, differential demographic impact, resulting error rates (overall and per demographic group), and comparisons to previously deployed systems; ongoing monitoring procedures and regular performance testing reports, including monitoring frequency, results, and actions taken; and the procedures for and results from independent evaluations. Reporting should be provided in a plain language and machine-readable manner.

Expectations about reporting are intended for the entity developing or using the automated system. The resulting reports can be provided to the public, regulators, auditors, industry standards groups, or others engaged in independent review, and should be made public as much as possible consistent with law, regulation, and policy, and noting that intellectual property or law enforcement considerations may prevent public release. These reporting expectations are important for transparency, so the American people can have confidence that their rights, opportunities, and access as well as their expectations around technologies are respected.
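To make the machine-readable reporting expectation concrete, the following editorial sketch emits a small structured summary that could accompany a plain-language report; every key and value is hypothetical and would be tailored to the system and to applicable law and policy.

    import json

    report = {
        "system_overview": "loan-refinancing decision support (illustrative)",
        "training_data": {"sources": ["internal applications, 2018-2021"],
                          "known_gaps": ["limited coverage of thin-file applicants"]},
        "public_consultation": {"concerns_raised": 14, "changes_made": 3},
        "risk_assessment": {"high_impact_risks": ["disparate error rates"],
                            "mitigations": ["reweighted training sample"]},
        "performance": {"overall_error_rate": 0.08,
                        "error_rate_by_group": {"group_a": 0.07, "group_b": 0.11}},
        "monitoring": {"frequency": "monthly", "last_review": "2022-09-01"},
        "independent_evaluation": {"completed": True, "findings_public": True},
    }

    # Machine-readable output to pair with the plain-language report.
    print(json.dumps(report, indent=2))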

How These Principles Can Move into Practice
Real-life examples of how these principles can become reality, through laws, policies, and practical technical and sociotechnical approaches to protecting rights, opportunities, and access.

Executive Order 13960 on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government Requires That Certain Federal Agencies Adhere to Nine Principles When Designing, Developing, Acquiring, or Using AI for Purposes Other Than National Security or Defense These principles—while taking into account the sensitive law enforcement and other contexts in which the federal government may use AI, as opposed to private sector use of AI—require that AI is: (a) lawful and respectful of our Nation’s values; (b) purposeful and performance-driven; (c) accurate, reliable, and effective; (d) safe, secure, and resilient; (e) understandable; (f) responsible and traceable; (g) regularly monitored; (h) transparent; and, (i) accountable. The Blueprint for an AI Bill of Rights is consistent with the Executive Order. Affected agencies across the federal government have released AI use case inventories12 and are implementing plans to bring those AI systems into compliance with the Executive Order or retire them. The Law and Policy Landscape for Motor Vehicles Shows That Strong Safety Regulations—and Measures to Address Harms When They Occur—Can Enhance Innovation in the Context of Complex Technologies Cars, like automated digital systems, comprise a complex collection of components. The National Highway Traffic Safety Administration,13 through its rigorous standards and independent evaluation, helps make sure vehicles on our roads are safe without limiting manufacturers’ ability to innovate.14 At the same time, rules of the road are implemented locally to impose contextually appropriate requirements on drivers, such as slowing down near schools or playgrounds.15


National Artificial Intelligence Initiative Office. Agency Inventories of AI Use Cases. Accessed Sept. 8, 2022. https://www.ai.gov/ai-use-case-inventories/. 13 National Highway Traffic Safety Administration. https://www.nhtsa.gov/. 14 See, e.g., Charles Pruitt. People Doing What They Do Best: The Professional Engineers and NHTSA. Public Administration Review. Vol. 39, No. 4. Jul.-Aug., 1979. https://www.jstor.org/stable/976213?seq=1. 15 The US Department of Transportation has publicly described the health and other benefits of these “traffic calming” measures. See, e.g.: U.S. Department of Transportation. Traffic Calming to Slow Vehicle Speeds. Accessed Apr. 17, 2022. https://www.transportation. gov/mission/health/Traffic-Calming-to-Slow-Vehicle-Speeds.


From Large Companies to Start-Ups, Industry Is Providing Innovative Solutions That Allow Organizations to Mitigate Risks to the Safety and Efficacy of AI Systems, Both before Deployment and through Monitoring over Time16 These innovative solutions include risk assessments, auditing mechanisms, assessment of organizational procedures, dashboards to allow for ongoing monitoring, documentation procedures specific to model assessments, and many other strategies that aim to mitigate risks posed by the use of AI to companies’ reputation, legal responsibilities, and other product safety and effectiveness concerns.

The Office of Management and Budget (OMB) Has Called for an Expansion of Opportunities for Meaningful Stakeholder Engagement in the Design of Programs and Services OMB also points to numerous examples of effective and proactive stakeholder engagement, including the Community-Based Participatory Research Program developed by the National Institutes of Health and the participatory technology assessments developed by the National Oceanic and Atmospheric Administration.17

The National Institute of Standards and Technology (NIST) Is Developing a Risk Management Framework to Better Manage Risks Posed to Individuals, Organizations, and Society by AI18 The NIST AI Risk Management Framework, as mandated by Congress, is intended for voluntary use to help incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. The NIST framework is being developed through a consensus-driven, open, transparent, and collaborative process that includes workshops and other opportunities to provide input. The NIST framework aims to foster the development of innovative approaches to address characteristics of trustworthiness including accuracy, explainability and interpretability, reliability, privacy, robustness, safety, security (resilience), and mitigation of unintended and/or harmful bias, as well as of harmful uses. The NIST framework will consider and encompass principles such as transparency, accountability, and fairness during pre-design, design and development, deployment, use, and testing and evaluation of AI technologies and systems. It is expected to be released in the winter of 2022-23.

16 Karen Hao. Worried about your firm’s AI ethics? These startups are here to help. A growing ecosystem of “responsible AI” ventures promise to help organizations monitor and fix their AI models. MIT Technology Review. Jan. 15, 2021. https://www.technologyreview.com/2021/01/15/1016183/ai-ethics-startups/; Disha Sinha. Top Progressive Companies Building Ethical AI to Look Out for in 2021. Analytics Insight. June 30, 2021. https://www.analyticsinsight.net/top-progressive-companies-building-ethical-ai-to-look-out-for-in2021/. 17 Office of Management and Budget. Study to Identify Methods to Assess Equity: Report to the President. Aug. 2021. https://www.whitehouse.gov/wp-content/uploads/2021/08/OMBReport-on-E013985-Implementation_508-Compliant-Secure-v1.1.pdf. 18 National Institute of Standards and Technology. AI Risk Management Framework. Accessed May 23, 2022. https://www.nist.gov/itl/ai-risk-management-framework.

Some U.S. Government Agencies Have Developed Specific Frameworks for Ethical Use of AI Systems
The Department of Energy (DOE) has activated the AI Advancement Council that oversees coordination and advises on implementation of the DOE AI Strategy and addresses issues and/or escalations on the ethical use and development of AI systems.19 The Department of Defense has adopted Artificial Intelligence Ethical Principles, and tenets for Responsible Artificial Intelligence specifically tailored to its national security and defense activities.20 Similarly, the U.S. Intelligence Community (IC) has developed the Principles of Artificial Intelligence Ethics for the Intelligence Community to guide personnel on whether and how to develop and use AI in furtherance of the IC's mission, as well as an AI Ethics Framework to help implement these principles.21


19 U.S. Department of Energy. U.S. Department of Energy Establishes Artificial Intelligence Advancement Council. U.S. Department of Energy Artificial Intelligence and Technology Office. April 18, 2022. https://www.energy.gov/ai/articles/us-department-energyestablishes-artificial-intelligence-advancement-council. 20 Department of Defense. U.S. Department of Defense Responsible Artificial Intelligence Strategy and Implementation Pathway. Jun. 2022. https://media.defense.gov/2022/Jun/22/2003022604/-1/-1/0/Department-of-Defense-Responsible-Artificial-IntelligenceStrategy-and-Implementation-Pathway.PDF. 21 Director of National Intelligence. Principles of Artificial Intelligence Ethics for the Intelligence Community. https://www.dni.gov/index.php/features/2763-principles-ofartificial-intelligence-ethics-for-the-intelligence-community.


The National Science Foundation (NSF) Funds Extensive Research to Help Foster the Development of Automated Systems That Adhere to and Advance Their Safety, Security and Effectiveness Multiple NSF programs support research that directly addresses many of these principles: the National AI Research Institutes22 support research on all aspects of safe, trustworthy, fair, and explainable AI algorithms and systems; the Cyber Physical Systems23 program supports research on developing safe autonomous and cyber physical systems with AI components; the Secure and Trustworthy Cyberspace24 program supports research on cybersecurity and privacy enhancing technologies in automated systems; the Formal Methods in the Field25 program supports research on rigorous formal verification and analysis of automated systems and machine learning, and the Designing Accountable Software Systems26 program supports research on rigorous and reproducible methodologies for developing software systems with legal and regulatory compliance in mind. Some State Legislatures Have Placed Strong Transparency and Validity Requirements on the Use of Pretrial Risk Assessments The use of algorithmic pretrial risk assessments has been a cause of concern for civil rights groups.27 Idaho Code Section 19-1910, enacted in 2019,28 requires that any pretrial risk assessment, before use in the state, first be "shown to be free of bias against any class of individuals protected from discrimination by state or federal law," that any locality using a pretrial risk assessment must first formally validate the claim of its being free of bias, that 22

National Science Foundation. National Artificial Intelligence Research Institutes. Accessed Sept. 12, 2022. https://beta.nsf.gov/funding/opportunities/national-artificial-intelligenceresearch-institutes. 23 National Science Foundation. Cyber-Physical Systems. Accessed Sept. 12, 2022. https://beta.nsf.gov/ funding/opportunities/cyber-physical-systems-cps. 24 National Science Foundation. Secure and Trustworthy Cyberspace. Accessed Sept. 12, 2022. https:// beta.nsf.gov/funding/opportunities/secure-and-trustworthy-cyberspace-satc. 25 National Science Foundation. Formal Methods in the Field. Accessed Sept. 12, 2022. https:// beta.nsf.gov/funding/opportunities/formal-methods-field-fmitf. 26 National Science Foundation. Designing Accountable Software Systems. Accessed Sept. 12, 2022. https://beta.nsf.gov/funding/opportunities/designing-accountable-software-systemsdass. 27 The Leadership Conference Education Fund. The Use of Pretrial “Risk Assessment” Instruments: A Shared Statement of Civil Rights Concerns. Jul. 30, 2018. http://civilrightsdocs.info/pdf/criminal-justice/Pretrial-Risk-Assessment-Short.pdf; https://civilrights.org/edfund/pretrial-risk-assessments/. 28 Idaho Legislature. House Bill 118. Jul. 1, 2019. https://legislature.idaho.gov/sessioninfo/2019/ legislation/H0118/.


"all documents, records, and information used to build or validate the risk assessment shall be open to public inspection," and that assertions of trade secrets cannot be used "to quash discovery in a criminal matter by a party to a criminal case."

Algorithmic Discrimination Protections

You Should Not Face Discrimination by Algorithms and Systems Should Be Used and Designed in an Equitable Way
Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. Depending on the specific circumstances, such algorithmic discrimination may violate legal protections. Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way. This protection should include proactive equity assessments as part of the system design, use of representative data and protection against proxies for demographic features, ensuring accessibility for people with disabilities in design and development, pre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight. Independent evaluation and plain language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, should be performed and made public whenever possible to confirm these protections.

Why This Principle Is Important This section provides a brief summary of the problems which the principle seeks to address and protect against, including illustrative examples.


There is extensive evidence showing that automated systems can produce inequitable outcomes and amplify existing inequity.29 Data that fails to account for existing systemic biases in American society can result in a range of consequences. For example, facial recognition technology that can contribute to wrongful and discriminatory arrests,30 hiring algorithms that inform discriminatory decisions, and healthcare algorithms that discount the severity of certain diseases in Black Americans. Instances of discriminatory practices built into and resulting from AI and other automated systems exist across many industries, areas, and contexts. While automated systems have the capacity to drive extraordinary advances and innovations, algorithmic discrimination protections should be built into their design, deployment, and ongoing use. Many companies, non-profits, and federal government agencies are already taking steps to ensure the public is protected from algorithmic discrimination. Some companies have instituted bias testing as part of their product quality assessment and launch procedures, and in some cases this testing has led products to be changed or not launched, preventing harm to the public. Federal government agencies have been developing standards and guidance for the use of automated systems in order to help prevent bias. Nonprofits and companies have developed best practices for audits and impact assessments to help identify potential algorithmic discrimination and provide transparency to the public in the mitigation of such biases. But there is much more work to do to protect the public from algorithmic discrimination to use and design automated systems in an equitable way. The guardrails protecting the public from discrimination in their daily lives should include their digital lives and impacts—basic safeguards against abuse, bias, and discrimination to ensure that all people are treated fairly when automated 29


See, e.g., Executive Office of the President. Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights. May, 2016. https://obamawhitehouse.archives.gov/ sites/default/files/microsites/ostp/2016_0504_data_discrimination.pdf; Cathy O’Neil. Weapons of Math Destruction. Penguin Books. 2017. https://en.wikipedia. org/wiki/Weapons_of_Math_Destruction; Ruha Benjamin. Race After Technology: Abolitionist Tools for the New Jim Code. Polity. 2019. https://www.ruhabenjamin. com/race-after-technology. See, e.g., Kashmir Hill. Another Arrest, and Jail Time, Due to a Bad Facial Recognition Match: A New Jersey man was accused of shoplifting and trying to hit an officer with a car. He is the third known Black man to be wrongfully arrested based on face recognition. New York Times. Dec. 29, 2020, updated Jan. 6, 2021. https://www.nytimes.com/ 2020/12/29/technology/facial-recognition-misidentify-jail.html; Khari Johnson. How Wrongful Arrests Based on AI Derailed 3 Men's Lives. Wired. Mar. 7, 2022. https:// www.wired.com/story/wrongful-arrests-ai-derailed-3-mens-lives/.


systems are used. This includes all dimensions of their lives, from hiring to loan approvals, from medical treatment and payment to encounters with the criminal justice system. Ensuring equity should also go beyond existing guardrails to consider the holistic impact that automated systems make on underserved communities and to institute proactive protections that support these communities. 






An automated system using nontraditional factors such as educational attainment and employment history as part of its loan underwriting and pricing model was found to be much more likely to charge an applicant who attended a Historically Black College or University (HBCU) higher loan prices for refinancing a student loan than an applicant who did not attend an HBCU. This was found to be true even when controlling for other credit-related factors.31 A hiring tool that learned the features of a company's employees (predominantly men) rejected women applicants for spurious and discriminatory reasons; resumes with the word “women’s,” such as “women’s chess club captain,” were penalized in the candidate ranking.32 A predictive model marketed as being able to predict whether students are likely to drop out of school was used by more than 500 universities across the country. The model was found to use race directly as a predictor, and also shown to have large disparities by race; Black students were as many as four times as likely as their otherwise similar white peers to be deemed at high risk of dropping out. These risk scores are used by advisors to guide students towards or away from majors, and some worry that they are being used to guide Black students away from math and science subjects.33

Student Borrower Protection Center. Educational Redlining. Student Borrower Protection Center Report. Feb. 2020. https://protectborrowers.org/wp-content/uploads/2020/02/ Education-Redlining-Report.pdf. 32 Jeffrey Dastin. Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. Oct. 10, 2018. https://www.reuters.com/article/us-amazon-com-jobs-automationinsight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-womenidUSKCN1MK08G. 33 Todd Feathers. Major Universities Are Using Race as a “High Impact Predictor” of Student Success: Students, professors, and education experts worry that that’s pushing Black students in particular out of math and science. The Markup. Mar. 2, 2021. https://themarkup.org/machine-learning/2021/03/02/major-universities-are-using-race-as-ahigh-impact-predictor-of-student-success.


A risk assessment tool designed to predict the risk of recidivism for individuals in federal custody showed evidence of disparity in prediction. The tool overpredicts the risk of recidivism for some groups of color on the general recidivism tools, and underpredicts the risk of recidivism for some groups of color on some of the violent recidivism tools. The Department of Justice is working to reduce these disparities and has publicly released a report detailing its review of the tool.34 An automated sentiment analyzer, a tool often used by technology platforms to determine whether a statement posted online expresses a positive or negative sentiment, was found to be biased against Jews and gay people. For example, the analyzer marked the statement “I’m a Jew” as representing a negative sentiment, while “I’m a Christian” was identified as expressing a positive sentiment.35 This could lead to the preemptive blocking of social media comments such as: “I’m gay.” A related company with this bias concern has made their data public to encourage researchers to help address the issue36 and has released reports identifying and measuring this problem as well as detailing attempts to address it.37 Searches for “Black girls,” “Asian girls,” or “Latina girls” return predominantly38 sexualized content, rather than role models, toys, or

Carrie Johnson. Flaws plague a tool meant to help low-risk federal prisoners win early release. NPR. Jan. 26, 2022. https://www.npr.org/2022/01/26/1075509175/flaws-plague-a-toolmeant-to-help-low-risk-federal-prisoners-win-early-release.; Carrie Johnson. Justice Department works to curb racial bias in deciding who's released from prison. NPR. Apr. 19, 2022. https://www.npr.org/2022/04/19/1093538706/justice-department-works-to-curbracial-bias-in-deciding-whos-released-from-pris; National Institute of Justice. 2021 Review and Revalidation of the First Step Act Risk Assessment Tool. National Institute of Justice NCJ 303859. Dec., 2021. https://www.ojp.gov/ pdffiles1/nij/303859.pdf. 35 Andrew Thompson. Google’s Sentiment Analyzer Thinks Being Gay Is Bad. Vice. Oct. 25, 2017. https:// www.vice.com/en/article/j5jmj8/google-artificial-intelligence-bias. 36 Kaggle. Jigsaw Unintended Bias in Toxicity Classification: Detect toxicity across a diverse range of conversations. 2019. https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicityclassification. 37 Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. Measuring and Mitigating Unintended Bias in Text Classification. Proceedings of AAAI/ACM Conference on AI, Ethics, and Society. Feb. 2-3, 2018. https://dl.acm.org/doi/pdf/10.1145/ 3278721.3278729. 38 Paresh Dave. Google cuts racy results by 30% for searches like 'Latina teenager'. Reuters. Mar. 30, 2022. https://www.reuters.com/technology/google-cuts-racy-results-by-30searches-like-latina-teenager-2022-03-30/.









activities.39 Some search engines have been working to reduce the prevalence of these results, but the problem remains.40 Advertisement delivery systems that predict who is most likely to click on a job advertisement end up delivering ads in ways that reinforce racial and gender stereotypes, such as overwhelmingly directing supermarket cashier ads to women and jobs with taxi companies to primarily Black people.41 Body scanners, used by TSA at airport checkpoints, require the operator to select a “male” or “female” scanning setting based on the passenger’s sex, but the setting is chosen based on the operator’s perception of the passenger’s gender identity. These scanners are more likely to flag transgender travelers as requiring extra screening done by a person. Transgender travelers have described degrading experiences associated with these extra screenings.42 TSA has recently announced plans to implement a gender-neutral algorithm43 while simultaneously enhancing the security effectiveness capabilities of the existing technology. The National Disabled Law Students Association expressed concerns that individuals with disabilities were more likely to be flagged as potentially suspicious by remote proctoring AI systems because of their disability-specific access needs such as needing longer breaks or using screen readers or dictation software.44

Safiya Umoja Noble. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press. Feb. 2018. https://nyupress.org/9781479837243/algorithms-of-oppression/. 40 Paresh Dave. Google cuts racy results by 30% for searches like 'Latina teenager'. Reuters. Mar. 30, 2022. https://www.reuters.com/technology/google-cuts-racy-results-by-30searches-like-latina-teenager-2022-03-30/. 41 Miranda Bogen. All the Ways Hiring Algorithms Can Introduce Bias. Harvard Business Review. May 6, 2019. https://hbr.org/2019/05/all-the-ways-hiring-algorithms-can-introducebias. 42 Arli Christian. Four Ways the TSA Is Making Flying Easier for Transgender People. American Civil Liberties Union. Apr. 5, 2022. https://www.aclu.org/news/lgbtq-rights/four-ways-thetsa-is-making-flying-easier-for-transgender-people. 43 U.S. Transportation Security Administration. Transgender/ Non Binary/Gender Nonconforming Passengers. TSA. Accessed Apr. 21, 2022. https://www.tsa.gov/ transgender-passengers. 44 See, e.g., National Disabled Law Students Association. Report on Concerns Regarding Online Administration of Bar Exams. Jul. 29, 2020. https://ndlsa.org/wp-content/uploads/2020/08/ NDLSA_Online-Exam-Concerns-Report1.pdf; Lydia X. Z. Brown. How Automated Test Proctoring Software Discriminates Against Disabled Students. Center for Democracy and Technology. Nov. 16, 2020. https://cdt.org/insights/how-automated-test-proctoring-softwarediscriminates-against-disabled-students/.


An algorithm designed to identify patients with high needs for healthcare systematically assigned lower scores (indicating that they were not as high need) to Black patients than to white patients, even when those patients had similar numbers of chronic conditions and other markers of health.45 In addition, healthcare clinical algorithms that are used by physicians to guide clinical decisions may include sociodemographic variables that adjust or “correct” the algorithm’s output on the basis of a patient’s race or ethnicity, which can lead to race-based health inequities.46

What Should Be Expected of Automated Systems The expectations for automated systems are meant to serve as a blueprint for the development of additional technical standards and practices that are tailored for particular sectors and contexts.

Any automated system should be tested to help ensure it is free from algorithmic discrimination before it can be sold or used. Protection against algorithmic discrimination should include designing to ensure equity, broadly construed. Some algorithmic discrimination is already prohibited under existing anti-discrimination law. The expectations set out below describe proactive technical and policy steps that can be taken to not only reinforce those legal protections but extend beyond them to ensure equity for underserved communities47 even in circumstances where a specific legal protection may not be clearly established. These protections should be instituted throughout the design, development, and deployment process and are described below roughly in the order in which they would be instituted.


Ziad Obermeyer, et al., Dissecting racial bias in an algorithm used to manage the health of populations, 366 Science (2019), https://www.science.org/doi/10.1126/science.aax2342. 46 Darshali A. Vyas et al., Hidden in Plain Sight – Reconsidering the Use of Race Correction in Clinical Algorithms, 383 N. Engl. J. Med.874, 876-78 (Aug. 27, 2020), https://www.nejm.org/doi/full/10.1056/ NEJMms2004740. 47 The definitions of 'equity' and 'underserved communities' can be found in the Definitions section of this framework as well as in Section 2 of The Executive Order on Advancing Racial Equity and Support for Underserved Communities Through the Federal Government. https://www.whitehouse.gov/briefing-room/presidential-actions/2021/01/20/ executive-order-advancing-racial-equity-and-support-for-underserved-communitiesthrough-the-federal-government/.


Protect the Public from Algorithmic Discrimination in a Proactive and Ongoing Manner Proactive Assessment of Equity in Design Those responsible for the development, use, or oversight of automated systems should conduct proactive equity assessments in the design phase of the technology research and development or during its acquisition to review potential input data, associated historical context, accessibility for people with disabilities, and societal goals to identify potential discrimination and effects on equity resulting from the introduction of the technology. The assessed groups should be as inclusive as possible of the underserved communities mentioned in the equity definition: Black, Latino, and Indigenous and Native American persons, Asian Americans and Pacific Islanders and other persons of color; members of religious minorities; women, girls, and non-binary people; lesbian, gay, bisexual, transgender, queer, and intersex (LGBTQI+) persons; older adults; persons with disabilities; persons who live in rural areas; and persons otherwise adversely affected by persistent poverty or inequality. Assessment could include both qualitative and quantitative evaluations of the system. This equity assessment should also be considered a core part of the goals of the consultation conducted as part of the safety and efficacy review. Representative and Robust Data Any data used as part of system development or assessment should be representative of local communities based on the planned deployment setting and should be reviewed for bias based on the historical and societal context of the data. Such data should be sufficiently robust to identify and help to mitigate biases and potential harms. Guarding against Proxies Directly using demographic information in the design, development, or deployment of an automated system (for purposes other than evaluating a system for discrimination or using a system to counter discrimination) runs a high risk of leading to algorithmic discrimination and should be avoided. In many cases, attributes that are highly correlated with demographic features, known as proxies, can contribute to algorithmic discrimination. In cases where use of the demographic features themselves would lead to illegal algorithmic discrimination, reliance on such proxies in decision-making (such as that facilitated by an algorithm) may also be prohibited by law. Proactive testing should be performed to identify proxies by testing for correlation between


demographic information and attributes in any data used as part of system design, development, or use. If a proxy is identified, designers, developers, and deployers should remove the proxy; if needed, it may be possible to identify alternative attributes that can be used instead. At a minimum, organizations should ensure a proxy feature is not given undue weight and should monitor the system closely for any resulting algorithmic discrimination.
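Proxy testing can begin with a simple correlation screen. The sketch below is an editorial illustration using Pearson correlation between numeric candidate features and a binary-encoded demographic attribute; the feature names, data, and threshold are hypothetical, and a real audit would also use measures suited to categorical data and to indirect, multi-feature proxies.

    import math

    def pearson(xs: list[float], ys: list[float]) -> float:
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy) if sx and sy else 0.0

    def flag_possible_proxies(features: dict[str, list[float]],
                              demographic: list[float],
                              threshold: float = 0.5) -> list[str]:
        # Flag features correlated strongly enough with the demographic attribute
        # to warrant removal, replacement, or close monitoring.
        return [name for name, values in features.items()
                if abs(pearson(values, demographic)) >= threshold]

    # Illustrative data only.
    print(flag_possible_proxies(
        {"zip_median_income": [54.0, 31.0, 60.0, 28.0],
         "tenure_months": [12.0, 40.0, 35.0, 9.0]},
        demographic=[0, 1, 0, 1]))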

Ensuring Accessibility during Design, Development, and Deployment
Systems should be designed, developed, and deployed by organizations in ways that ensure accessibility to people with disabilities. This should include consideration of a wide variety of disabilities, adherence to relevant accessibility standards, and user experience research both before and after deployment to identify and address any accessibility barriers to the use or effectiveness of the automated system.

Disparity Assessment
Automated systems should be tested using a broad set of measures to assess whether the system components, both in pre-deployment testing and in-context deployment, produce disparities. The demographics of the assessed groups should be as inclusive as possible of race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. The broad set of measures assessed should include demographic performance measures, overall and subgroup parity assessment, and calibration. Demographic data collected for disparity assessment should be separated from data used for the automated system and privacy protections should be instituted; in some cases it may make sense to perform such assessment using a data sample. For every instance where the deployed automated system leads to different treatment or impacts disfavoring the identified groups, the entity governing, implementing, or using the system should document the disparity and a justification for any continued use of the system.

Disparity Mitigation
When a disparity assessment identifies a disparity against an assessed group, it may be appropriate to take steps to mitigate or eliminate the disparity. In some cases, mitigation or elimination of the disparity may be required by law. Disparities that have the potential to lead to algorithmic discrimination, cause meaningful harm, or violate equity48 goals should be mitigated. When designing and evaluating an automated system, steps should be taken to evaluate multiple models and select the one that has the least adverse impact, modify data input choices, or otherwise identify a system with fewer disparities. If adequate mitigation of the disparity is not possible, then the use of the automated system should be reconsidered. One of the considerations in whether to use the system should be the validity of any target measure; unobservable targets may result in the inappropriate use of proxies. Meeting these standards may require instituting mitigation procedures and other protective measures to address algorithmic discrimination, avoid meaningful harm, and achieve equity goals.
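As one hedged illustration of the assessment and mitigation steps above (not an official methodology), the sketch below computes a simple selection-rate gap across groups and then, among candidate models that stay above an assumed accuracy floor, prefers the one with the smallest gap. The scikit-learn-style predict interface, the metric choice, and the 0.80 floor are all assumptions; a fuller assessment would also cover calibration and error-rate parity, as the text notes.

```python
# Illustrative disparity assessment and model selection; not a prescribed method.
import numpy as np

def selection_rate_gap(y_pred: np.ndarray, groups: np.ndarray) -> float:
    """Difference between the highest and lowest selection rates across groups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def pick_least_disparate(models, X, y, groups, min_accuracy: float = 0.80):
    """Among models meeting an assumed accuracy floor, return the one with the smallest gap."""
    candidates = []
    for model in models:
        y_pred = model.predict(X)            # scikit-learn-style estimator assumed
        accuracy = (y_pred == y).mean()
        gap = selection_rate_gap(y_pred, groups)
        if accuracy >= min_accuracy:
            candidates.append((gap, model))
    if not candidates:
        # If no candidate is both accurate and adequately mitigated, the text says
        # the use of the automated system should be reconsidered.
        raise ValueError("No candidate model meets the accuracy floor.")
    return min(candidates, key=lambda c: c[0])[1]
```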

Ongoing Monitoring and Mitigation
Automated systems should be regularly monitored to assess algorithmic discrimination that might arise from unforeseen interactions of the system with inequities not accounted for during the pre-deployment testing, changes to the system after deployment, or changes to the context of use or associated data. Monitoring and disparity assessment should be performed by the entity deploying or using the automated system to examine whether the system has led to algorithmic discrimination when deployed. This assessment should be performed regularly and whenever a pattern of unusual results is occurring. It can be performed using a variety of approaches, taking into account whether and how demographic information of impacted people is available, for example via testing with a sample of users or via qualitative user experience research. Riskier and higher-impact systems should be monitored and assessed more frequently. Outcomes of this assessment should include additional disparity mitigation, if needed, or fallback to earlier procedures in the case that equity standards are no longer met and can't be mitigated, and prior mechanisms provide better adherence to equity standards.

Demonstrate That the System Protects against Algorithmic Discrimination

Independent Evaluation
As described in the section on Safe and Effective Systems, entities should allow independent evaluation of potential algorithmic discrimination caused by automated systems they use or oversee. In the case of public sector uses, these independent evaluations should be made public unless law enforcement or national security restrictions prevent doing so. Care should be taken to balance individual privacy with evaluation data access needs; in many cases, policy-based and/or technological innovations and controls allow access to such data without compromising privacy.

48 Id.

Reporting
Entities responsible for the development or use of automated systems should provide reporting of an appropriately designed algorithmic impact assessment,49 with clear specification of who performs the assessment, who evaluates the system, and how corrective actions are taken (if necessary) in response to the assessment. This algorithmic impact assessment should include at least: the results of any consultation, design stage equity assessments (potentially including qualitative analysis), accessibility designs and testing, disparity testing, documentation of any remaining disparities, and detail of any mitigation implementation and assessments. This algorithmic impact assessment should be made public whenever possible. Reporting should be provided in a clear and machine-readable manner using plain language to allow for more straightforward public accountability.
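A machine-readable rendering of such an impact assessment could be as simple as a structured document. The sketch below is hypothetical; the system name, field names, and values are illustrative assumptions, not a mandated schema.

```python
# Hypothetical machine-readable algorithmic impact assessment summary.
import json

impact_assessment = {
    "system": "resume-screening-tool",           # hypothetical system name
    "assessor": "independent-review-board",      # who performed the assessment
    "evaluator": "deploying-entity-audit-team",  # who evaluates the system
    "consultation_results": "summary of stakeholder consultation",
    "design_stage_equity_assessment": "qualitative and quantitative findings",
    "accessibility_testing": {"standards_reviewed": True, "open_issues": 0},
    "disparity_testing": {
        "metric": "selection_rate_gap",
        "groups_assessed": ["race", "sex", "disability_status", "age"],
        "remaining_disparities": "documented residual gaps, if any",
    },
    "mitigations": ["removed identified proxy feature", "re-weighted training data"],
    "corrective_actions": "process for acting on assessment findings",
}

print(json.dumps(impact_assessment, indent=2))  # plain, machine-readable output
```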

How These Principles Can Move into Practice
Real-life examples of how these principles can become reality, through laws, policies, and practical technical and sociotechnical approaches to protecting rights, opportunities, and access.

49 Various organizations have offered proposals for how such assessments might be designed. See, e.g., Emanuel Moss, Elizabeth Anne Watkins, Ranjit Singh, Madeleine Clare Elish, and Jacob Metcalf. Assembling Accountability: Algorithmic Impact Assessment for the Public Interest. Data & Society Research Institute Report. June 29, 2021. https://datasociety.net/library/assembling-accountability-algorithmic-impact-assessment-for-the-public-interest/; Nicol Turner Lee, Paul Resnick, and Genie Barton. Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms. Brookings Report. May 22, 2019. https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/; Andrew D. Selbst. An Institutional View Of Algorithmic Impact Assessments. Harvard Journal of Law & Technology. June 15, 2021. https://ssrn.com/abstract=3867634; Dillon Reisman, Jason Schultz, Kate Crawford, and Meredith Whittaker. Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability. AI Now Institute Report. April 2018. https://ainowinstitute.org/aiareport2018.pdf.


The Federal Government Is Working to Combat Discrimination in Mortgage Lending
The Department of Justice has launched a nationwide initiative to combat redlining, which includes reviewing how lenders who may be avoiding serving communities of color are conducting targeted marketing and advertising.50 This initiative will draw upon strong partnerships across federal agencies, including the Consumer Financial Protection Bureau and prudential regulators. The Action Plan to Advance Property Appraisal and Valuation Equity includes a commitment from the agencies that oversee mortgage lending to include a nondiscrimination standard in the proposed rules for Automated Valuation Models.51

The Equal Employment Opportunity Commission and the Department of Justice Have Clearly Laid out How Employers' Use of AI and Other Automated Systems Can Result in Discrimination against Job Applicants and Employees with Disabilities52
The documents explain how employers' use of software that relies on algorithmic decision-making may violate existing requirements under Title I of the Americans with Disabilities Act ("ADA"). This technical assistance also provides practical tips to employers on how to comply with the ADA, and to job applicants and employees who think that their rights may have been violated.

Disparity Assessments Identified Harms to Black Patients' Healthcare Access
A widely used healthcare algorithm relied on the cost of each patient's past medical care to predict future medical needs, recommending early interventions for the patients deemed most at risk. This process discriminated against Black patients, who generally have less access to medical care and therefore have generated less cost than white patients with similar illness and need. A landmark study documented this pattern and proposed practical ways that were shown to reduce this bias, such as focusing specifically on active chronic health conditions or avoidable future costs related to emergency visits and hospitalization.53

50 Department of Justice. Justice Department Announces New Initiative to Combat Redlining. Oct. 22, 2021. https://www.justice.gov/opa/pr/justice-department-announces-new-initiative-combat-redlining.
51 PAVE Interagency Task Force on Property Appraisal and Valuation Equity. Action Plan to Advance Property Appraisal and Valuation Equity: Closing the Racial Wealth Gap by Addressing Mis-valuations for Families and Communities of Color. March 2022. https://pave.hud.gov/sites/pave.hud.gov/files/documents/PAVEActionPlan.pdf.
52 U.S. Equal Employment Opportunity Commission. The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees. EEOC-NVTA-2022-2. May 12, 2022. https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence; U.S. Department of Justice. Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring. May 12, 2022. https://beta.ada.gov/resources/ai-guidance/.

Large Employers Have Developed Best Practices to Scrutinize the Data and Models Used for Hiring
An industry initiative has developed Algorithmic Bias Safeguards for the Workforce, a structured questionnaire that businesses can use proactively when procuring software to evaluate workers. It covers specific technical questions such as the training data used, model training process, biases identified, and mitigation steps employed.54

Standards Organizations Have Developed Guidelines to Incorporate Accessibility Criteria into Technology Design Processes
The most prevalent in the United States is the Access Board's Section 508 regulations,55 which are the technical standards for federal information communication technology (software, hardware, and web). Other standards include those issued by the International Organization for Standardization,56 and the World Wide Web Consortium Web Content Accessibility Guidelines,57 a globally recognized voluntary consensus standard for web content and other information and communications technology.

53 Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. Dissecting racial bias in an algorithm used to manage the health of populations. Science. Vol. 366, No. 6464. Oct. 25, 2019. https://www.science.org/doi/10.1126/science.aax2342.
54 Data & Trust Alliance. Algorithmic Bias Safeguards for Workforce: Overview. Jan. 2022. https://dataandtrustalliance.org/Algorithmic_Bias_Safeguards_for_Workforce_Overview.pdf.
55 Section508.gov. IT Accessibility Laws and Policies. Access Board. https://www.section508.gov/manage/laws-and-policies/.
56 ISO Technical Management Board. ISO/IEC Guide 71:2014. Guide for addressing accessibility in standards. International Standards Organization. 2021. https://www.iso.org/standard/57385.html.


NIST Has Released Special Publication 1270, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence58
The special publication: describes the stakes and challenges of bias in artificial intelligence and provides examples of how and why it can chip away at public trust; identifies three categories of bias in AI – systemic, statistical, and human – and describes how and where they contribute to harms; and describes three broad challenges for mitigating bias – datasets, testing and evaluation, and human factors – and introduces preliminary guidance for addressing them. Throughout, the special publication takes a socio-technical perspective to identifying and managing AI bias.

Data Privacy

You Should Be Protected from Abusive Data Practices via Built-in Protections and You Should Have Agency over How Data About You Is Used
You should be protected from violations of privacy through design choices that ensure such protections are included by default, including ensuring that data collection conforms to reasonable expectations and that only data strictly necessary for the specific context is collected. Designers, developers, and deployers of automated systems should seek your permission and respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways and to the greatest extent possible; where not possible, alternative privacy by design safeguards should be used. Systems should not employ user experience and design decisions that obfuscate user choice or burden users with defaults that are privacy invasive. Consent should only be used to justify collection of data in cases where it can be appropriately and meaningfully given. Any consent requests should be brief, be understandable in plain language, and give you agency over data collection and the specific context of use; current hard-to-understand notice-and-choice practices for broad uses of data should be changed. Enhanced protections and restrictions for data and inferences related to sensitive domains, including health, work, education, criminal justice, and finance, and for data pertaining to youth should put you first. In sensitive domains, your data and related inferences should only be used for necessary functions, and you should be protected by ethical review and use prohibitions. You and your communities should be free from unchecked surveillance; surveillance technologies should be subject to heightened oversight that includes at least pre-deployment assessment of their potential harms and scope limits to protect privacy and civil liberties. Continuous surveillance and monitoring should not be used in education, work, housing, or in other contexts where the use of such surveillance technologies is likely to limit rights, opportunities, or access. Whenever possible, you should have access to reporting that confirms your data decisions have been respected and provides an assessment of the potential impact of surveillance technologies on your rights, opportunities, or access.

57 World Wide Web Consortium. Web Content Accessibility Guidelines (WCAG) 2.0. Dec. 11, 2008. https://www.w3.org/TR/WCAG20/.

Why This Principle Is Important
This section provides a brief summary of the problems which the principle seeks to address and protect against, including illustrative examples.

Data privacy is a foundational and cross-cutting principle required for achieving all others in this framework. Surveillance and data collection, sharing, use, and reuse now sit at the foundation of business models across many industries, with more and more companies tracking the behavior of the American public, building individual profiles based on this data, and using this granular-level information as input into automated systems that further track, profile, and impact the American public. Government agencies, particularly law enforcement agencies, also use and help develop a variety of technologies that enhance and expand surveillance capabilities, which similarly collect data used as input into other automated systems that directly impact people's lives. Federal law has not grown to address the expanding scale of private data collection, or of the ability of governments at all levels to access that data and leverage the means of private collection. Meanwhile, members of the American public are often unable to access their personal data or make critical decisions about its collection and use. Data brokers frequently collect consumer data from numerous sources without consumers' permission or knowledge.59 Moreover, there is a risk that inaccurate and faulty data can be used to make decisions about their lives, such as whether they will qualify for a loan or get a job. Use of surveillance technologies has increased in schools and workplaces, and, when coupled with consequential management and evaluation decisions, it is leading to mental health harms such as lowered self-confidence, anxiety, depression, and a reduced ability to use analytical reasoning.60 Documented patterns show that personal data is being aggregated by data brokers to profile communities in harmful ways.61 The impact of all this data harvesting is corrosive, breeding distrust, anxiety, and other mental health problems; chilling speech, protest, and worker organizing; and threatening our democratic process.62 The American public should be protected from these growing risks. Increasingly, some companies are taking these concerns seriously and integrating mechanisms to protect consumer privacy into their products by design and by default, including by minimizing the data they collect, communicating collection and use clearly, and improving security practices. Federal government surveillance and other collection and use of data is governed by legal protections that help to protect civil liberties and provide for limits on data retention in some cases. Many states have also enacted consumer data privacy protection regimes to address some of these harms.

58 Reva Schwartz, Apostol Vassilev, Kristen Greene, Lori Perine, and Andrew Bert. NIST Special Publication 1270: Towards a Standard for Identifying and Managing Bias in Artificial Intelligence. The National Institute of Standards and Technology. March 2022. https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf.
59 See, e.g., the 2014 Federal Trade Commission report "Data Brokers: A Call for Transparency and Accountability." https://www.ftc.gov/system/files/documents/reports/data-brokers-call-transparency-accountability-report-federal-trade-commission-may-2014/140527databrokerreport.pdf.
60 See, e.g., Nir Kshetri. School surveillance of students via laptops may do more harm than good. The Conversation. Jan. 21, 2022. https://theconversation.com/school-surveillance-of-students-via-laptops-may-do-more-harm-than-good-170983; Matt Scherer. Warning: Bossware May be Hazardous to Your Health. Center for Democracy & Technology Report. https://cdt.org/wp-content/uploads/2021/07/2021-07-29-Warning-Bossware-May-Be-Hazardous-To-Your-Health-Final.pdf; Human Impact Partners and WWRC. The Public Health Crisis Hidden in Amazon Warehouses. HIP and WWRC report. Jan. 2021. https://humanimpact.org/wp-content/uploads/2021/01/The-Public-Health-Crisis-Hidden-In-Amazon-Warehouses-HIP-WWRC-01-21.pdf; Drew Harwell. Contract lawyers face a growing invasion of surveillance programs that monitor their work. The Washington Post. Nov. 11, 2021. https://www.washingtonpost.com/technology/2021/11/11/lawyer-facial-recognition-monitoring/; Virginia Doellgast and Sean O'Brady. Making Call Center Jobs Better: The Relationship between Management Practices and Worker Stress. A Report for the CWA. June 2020. https://hdl.handle.net/1813/74307.
61 See, e.g., Federal Trade Commission. Data Brokers: A Call for Transparency and Accountability. May 2014. https://www.ftc.gov/system/files/documents/reports/data-brokers-call-transparency-accountability-report-federal-trade-commission-may-2014/140527databrokerreport.pdf; Cathy O'Neil. Weapons of Math Destruction. Penguin Books. 2017. https://en.wikipedia.org/wiki/Weapons_of_Math_Destruction.


However, these are not yet standard practices, and the United States lacks a comprehensive statutory or regulatory framework governing the rights of the public when it comes to personal data. While a patchwork of laws exists to guide the collection and use of personal data in specific contexts, including health, employment, education, and credit, it can be unclear how these laws apply in other contexts and in an increasingly automated society. Additional protections would assure the American public that the automated systems they use are not monitoring their activities, collecting information on their lives, or otherwise surveilling them without context-specific consent or legal authority.

• An insurer might collect data from a person's social media presence as part of deciding what life insurance rates they should be offered.63
• A data broker harvested large amounts of personal data and then suffered a breach, exposing hundreds of thousands of people to potential identity theft.64
• A local public housing authority installed a facial recognition system at the entrance to housing complexes to assist law enforcement with identifying individuals viewed via camera when police reports are filed, leading the community, both those living in the housing complex and not, to have videos of them sent to the local police department and made available for scanning by its facial recognition software.65
• Companies use surveillance software to track employee discussions about union activity and use the resulting data to surveil individual employees and surreptitiously intervene in discussions.66

62 See, e.g., Rachel Levinson-Waldman, Harsha Panduranga, and Faiza Patel. Social Media Surveillance by the U.S. Government. Brennan Center for Justice. Jan. 7, 2022. https://www.brennancenter.org/our-work/research-reports/social-media-surveillance-us-government; Shoshana Zuboff. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Public Affairs. 2019.
63 Angela Chen. Why the Future of Life Insurance May Depend on Your Online Presence. The Verge. Feb. 7, 2019. https://www.theverge.com/2019/2/7/18211890/social-media-life-insurance-new-york-algorithms-big-data-discrimination-online-records.
64 See, e.g., Scott Ikeda. Major Data Broker Exposes 235 Million Social Media Profiles in Data Leak: Info Appears to Have Been Scraped Without Permission. CPO Magazine. Aug. 28, 2020. https://www.cpomagazine.com/cyber-security/major-data-broker-exposes-235-million-social-media-profiles-in-data-leak/; Lily Hay Newman. 1.2 Billion Records Found Exposed Online in a Single Server. WIRED, Nov. 22, 2019. https://www.wired.com/story/billion-records-exposed-online/.
65 Lola Fadulu. Facial Recognition Technology in Public Housing Prompts Backlash. New York Times. Sept. 24, 2019. https://www.nytimes.com/2019/09/24/us/politics/facial-recognition-technology-housing.html.


What Should Be Expected of Automated Systems
The expectations for automated systems are meant to serve as a blueprint for the development of additional technical standards and practices that are tailored for particular sectors and contexts.

Traditional terms of service—the block of text that the public is accustomed to clicking through when using a website or digital app—are not an adequate mechanism for protecting privacy. The American public should be protected via built-in privacy protections, data minimization, use and collection limitations, and transparency, in addition to being entitled to clear mechanisms to control access to and use of their data—including their metadata—in a proactive, informed, and ongoing way. Any automated system collecting, using, sharing, or storing personal data should meet these expectations.

Protect Privacy by Design and by Default

Privacy by Design and by Default
Automated systems should be designed and built with privacy protected by default. Privacy risks should be assessed throughout the development life cycle, including privacy risks from reidentification, and appropriate technical and policy mitigation measures should be implemented. This includes potential harms to those who are not users of the automated system, but who may be harmed by inferred data, purposeful privacy violations, or community surveillance or other community harms. Data collection should be minimized and clearly communicated to the people whose data is collected. Data should only be collected or used for the purposes of training or testing machine learning models if such collection and use is legal and consistent with the expectations of the people whose data is collected. User experience research should be conducted to confirm that people understand what data is being collected about them and how it will be used, and that this collection matches their expectations and desires.
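Data minimization "by default" can be made concrete in code. The following is a simplified sketch under assumed purposes and field names: only fields declared necessary for a stated purpose are kept, and everything else is dropped before storage.

```python
# Minimal sketch of data minimization by default. The purpose-to-field mapping
# below is a hypothetical example, not a recommended schema.
ALLOWED_FIELDS = {
    "appointment_scheduling": {"name", "contact_email", "preferred_time"},
    "payment_processing": {"name", "billing_address", "payment_token"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Drop every field that is not strictly necessary for the declared purpose."""
    allowed = ALLOWED_FIELDS.get(purpose, set())
    return {key: value for key, value in record.items() if key in allowed}

submitted = {"name": "A. Person", "contact_email": "a@example.com",
             "preferred_time": "09:00", "browsing_history": ["example.com"]}
stored = minimize(submitted, "appointment_scheduling")  # browsing_history is never retained
```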



Data Collection and Use-Case Scope Limits
Data collection should be limited in scope, with specific, narrow identified goals, to avoid "mission creep." Anticipated data collection should be determined to be strictly necessary to the identified goals and should be minimized as much as possible. Data collected based on these identified goals and for a specific context should not be used in a different context without assessing for new privacy risks and implementing appropriate mitigation measures, which may include express consent. Clear timelines for data retention should be established, with data deleted as soon as possible in accordance with legal or policy-based limitations. Determined data retention timelines should be documented and justified.

Risk Identification and Mitigation
Entities that collect, use, share, or store sensitive data should attempt to proactively identify harms and seek to manage them so as to avoid, mitigate, and respond appropriately to identified risks. Appropriate responses include determining not to process data when the privacy risks outweigh the benefits or implementing measures to mitigate acceptable risks. Appropriate responses do not include sharing or transferring the privacy risks to users via notice or consent requests where users could not reasonably be expected to understand the risks without further support.

Privacy-Preserving Security
Entities creating, using, or governing automated systems should follow privacy and security best practices designed to ensure data and metadata do not leak beyond the specific consented use case. Best practices could include using privacy-enhancing cryptography or other types of privacy-enhancing technologies or fine-grained permissions and access control mechanisms, along with conventional system security protocols.

Protect the Public from Unchecked Surveillance

Heightened Oversight of Surveillance
Surveillance or monitoring systems should be subject to heightened oversight that includes at a minimum assessment of potential harms during design (before deployment) and in an ongoing manner, to ensure that the American public's rights, opportunities, and access are protected. This assessment should be done before deployment and should give special attention to ensure there is not algorithmic discrimination, especially based on community membership, when deployed in a specific real-world context. Such assessment should then be reaffirmed in an ongoing manner as long as the system is in use.
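The retention expectations under "Data Collection and Use-Case Scope Limits" above can be illustrated with a small, assumed example: each collection purpose has a documented retention window, and records past their window are purged. The purposes and windows shown are placeholders, not recommended or legally prescribed periods.

```python
# Sketch of enforcing documented retention timelines; values are illustrative.
from datetime import datetime, timedelta, timezone

RETENTION = {
    "fraud_review": timedelta(days=180),
    "usability_testing": timedelta(days=30),
}

def purge_expired(records: list[dict]) -> list[dict]:
    """Keep only records still inside the retention window for their stated purpose."""
    now = datetime.now(timezone.utc)
    kept = []
    for rec in records:
        # rec["collected_at"] is assumed to be a timezone-aware datetime.
        window = RETENTION.get(rec["purpose"])
        if window is not None and now - rec["collected_at"] <= window:
            kept.append(rec)
        # Expired or unknown-purpose records fall through and are deleted;
        # deletions would be logged elsewhere to support reporting.
    return kept
```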

Limited and Proportionate Surveillance
Surveillance should be avoided unless it is strictly necessary to achieve a legitimate purpose and it is proportionate to the need. Designers, developers, and deployers of surveillance systems should use the least invasive means of monitoring available and restrict monitoring to the minimum number of subjects possible. To the greatest extent possible consistent with law enforcement and national security needs, individuals subject to monitoring should be provided with clear and specific notice before it occurs and be informed about how the data gathered through surveillance will be used.

Scope Limits on Surveillance to Protect Rights and Democratic Values
Civil liberties and civil rights must not be limited by the threat of surveillance or harassment facilitated or aided by an automated system. Surveillance systems should not be used to monitor the exercise of democratic rights, such as voting, privacy, peaceful assembly, speech, or association, in a way that limits the exercise of civil rights or civil liberties. Information about or algorithmically-determined assumptions related to identity should be carefully limited if used to target or guide surveillance systems in order to avoid algorithmic discrimination; such identity-related information includes group characteristics or affiliations, geographic designations, location-based and association-based inferences, social networks, and biometrics. Continuous surveillance and monitoring systems should not be used in physical or digital workplaces (regardless of employment status), public educational institutions, and public accommodations. Continuous surveillance and monitoring systems should not be used in a way that has the effect of limiting access to critical resources or services or suppressing the exercise of rights, even where the organization is not under a particular duty to protect those rights.


Provide the Public with Mechanisms for Appropriate and Meaningful Consent, Access, and Control over Their Data

Use-Specific Consent
Consent practices should not allow for abusive surveillance practices. Where data collectors or automated systems seek consent, they should seek it for specific, narrow use contexts, for specific time durations, and for use by specific entities. Consent should not extend if any of these conditions change; consent should be re-acquired before using data if the use case changes, a time limit elapses, or data is transferred to another entity (including being shared or sold). Consent requested should be limited in scope and should not request consent beyond what is required. Refusal to provide consent should be allowed, without adverse effects, to the greatest extent possible based on the needs of the use case.

Brief and Direct Consent Requests
When seeking consent from users, short, plain-language consent requests should be used so that users understand for what use contexts, time span, and entities they are providing data and metadata consent. User experience research should be performed to ensure these consent requests meet performance standards for readability and comprehension. This includes ensuring that consent requests are accessible to users with disabilities and are available in the language(s) and reading level appropriate for the audience. User experience design choices that intentionally obfuscate or manipulate user choice (i.e., "dark patterns") should not be used.66

Data Access and Correction
People whose data is collected, used, shared, or stored by automated systems should be able to access data and metadata about themselves, know who has access to this data, and be able to correct it if necessary. Entities should receive consent before sharing data with other entities and should keep records of what data is shared and with whom.
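One way to make use-specific consent operational is to record the purpose, entity, and duration a person agreed to, and to check all three before any use. The sketch below is illustrative only; the record fields and example values are assumptions rather than part of the framework.

```python
# Illustrative consent record keyed to a specific use context, duration, and entity.
from datetime import datetime, timezone

consent = {
    "subject_id": "user-123",                 # hypothetical identifiers and values
    "purpose": "appointment_reminders",
    "entity": "example-clinic",
    "expires_at": datetime(2026, 1, 1, tzinfo=timezone.utc),
}

def consent_covers(record: dict, purpose: str, entity: str) -> bool:
    """Consent is valid only for the same purpose and entity, and only until it expires."""
    return (record["purpose"] == purpose
            and record["entity"] == entity
            and datetime.now(timezone.utc) < record["expires_at"])

# Any change in purpose, entity, or elapsed time fails the check, so consent
# would have to be re-acquired before the data is used.
assert not consent_covers(consent, "marketing_outreach", "example-clinic")
```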

66 Jo Constantz. 'They Were Spying On Us': Amazon, Walmart, Use Surveillance Technology to Bust Unions. Newsweek. Dec. 13, 2021. https://www.newsweek.com/they-were-spying-us-amazon-walmart-use-surveillance-technology-bust-unions-1658603.


Consent Withdrawal and Data Deletion
Entities should allow (to the extent legally permissible) withdrawal of data access consent, resulting in the deletion of user data, metadata, and the timely removal of their data from any systems (e.g., machine learning models) derived from that data.67

Automated System Support
Entities designing, developing, and deploying automated systems should establish and maintain the capabilities that will allow individuals to use their own automated systems to help them make consent, access, and control decisions in a complex data ecosystem. Capabilities include machine-readable data, standardized data formats, metadata or tags for expressing data processing permissions and preferences, data provenance and lineage, context-of-use and access-specific tags, and training models for assessing privacy risk.

Demonstrate That Data Privacy and User Control Are Protected

Independent Evaluation
As described in the section on Safe and Effective Systems, entities should allow independent evaluation of the claims made regarding data policies. These independent evaluations should be made public whenever possible. Care will need to be taken to balance individual privacy with evaluation data access needs.

Reporting
When members of the public wish to know what data about them is being used in a system, the entity responsible for the development of the system should respond quickly with a report on the data it has collected or stored about them. Such a report should be machine-readable, understandable by most users, and include, to the greatest extent allowable under law, any data and metadata about them or collected from them, when and how their data and metadata were collected, the specific ways that data or metadata are being used, who has access to their data and metadata, and what time limitations apply to these data. In cases where a user login is not available, identity verification may need to be performed before providing such a report to ensure user privacy. Additionally, summary reporting should be proactively made public with general information about how people's data and metadata is used, accessed, and stored. Summary reporting should include the results of any surveillance pre-deployment assessment, including disparity assessment in the real-world deployment context, the specific identified goals of any data collection, and the assessment done to ensure only the minimum required data is collected. It should also include documentation about the scope limit assessments, including data retention timelines and associated justification, and an assessment of the impact of surveillance or data collection on rights, opportunities, and access. Where possible, this assessment of the impact of surveillance should be done by an independent party. Reporting should be provided in a clear and machine-readable manner.

67 See, e.g., enforcement actions by the FTC against the photo storage app Everalbum (https://www.ftc.gov/legal-library/browse/cases-proceedings/192-3172-everalbum-inc-matter), and against Weight Watchers and their subsidiary Kurbo (https://www.ftc.gov/legal-library/browse/cases-proceedings/1923228-weightwatchersww).
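The individual reporting expectation above lends itself to a structured, machine-readable response. The following sketch is a hypothetical illustration; the record layout and field names are assumptions rather than a prescribed format.

```python
# Sketch of a machine-readable response to a data access request, covering the
# elements the text lists: what was collected, when and how, how it is used,
# who can access it, and applicable time limits. Field names are assumptions.
import json
from datetime import datetime, timezone

def build_access_report(subject_id: str, records: list[dict]) -> str:
    """Assemble a plain, machine-readable report from stored data items."""
    report = {
        "subject_id": subject_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "data_items": [
            {
                "field": item["field"],
                "value": item["value"],
                "collected_at": item["collected_at"],
                "collection_method": item["method"],
                "uses": item["uses"],
                "accessible_to": item["accessible_to"],
                "retention_limit": item["retention_limit"],
            }
            for item in records
        ],
    }
    return json.dumps(report, indent=2)
```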

Extra Protections for Data Related to Sensitive Domains
Some domains, including health, employment, education, criminal justice, and personal finance, have long been singled out as sensitive domains deserving of enhanced data protections. This is due to the intimate nature of these domains as well as the inability of individuals to opt out of these domains in any meaningful way, and the historical discrimination that has often accompanied data knowledge.68 Domains understood by the public to be sensitive also change over time, including because of technological developments. Tracking and monitoring technologies, personal tracking devices, and our extensive data footprints are used and misused more than ever before; as such, the protections afforded by current legal guidelines may be inadequate. The American public deserves assurances that data related to such sensitive domains is protected and used appropriately and only in narrowly defined contexts with clear benefits to the individual and/or society. To this end, automated systems that collect, use, share, or store data related to these sensitive domains should meet additional expectations. Data and metadata are sensitive if they pertain to an individual in a sensitive domain (defined below); are generated by technologies used in a sensitive domain; can be used to infer data from a sensitive domain or sensitive data about an individual (such as disability-related data, genomic data, biometric data, behavioral data, geolocation data, data related to interaction with the criminal justice system, relationship history and legal status such as custody and divorce information, and home, work, or school environmental data); or have the reasonable potential to be used in ways that are likely to expose individuals to meaningful harm, such as a loss of privacy or financial harm due to identity theft. Data and metadata generated by or about those who are not yet legal adults is also sensitive, even if not related to a sensitive domain. Such data includes, but is not limited to, numerical, text, image, audio, or video data. "Sensitive domains" are those in which activities being conducted can cause material harms, including significant adverse effects on human rights such as autonomy and dignity, as well as civil liberties and civil rights. Domains that have historically been singled out as deserving of enhanced data protections or where such enhanced protections are reasonably expected by the public include, but are not limited to, health, family planning and care, employment, education, criminal justice, and personal finance. In the context of this framework, such domains are considered sensitive whether or not the specifics of a system context would necessitate coverage under existing law, and domains and data that are considered sensitive are understood to change over time based on societal norms and context.

68 See, e.g., HIPAA, Pub. L. 104-191 (1996); Fair Debt Collection Practices Act (FDCPA), Pub. L. 95-109 (1977); Family Educational Rights and Privacy Act (FERPA) (20 U.S.C. § 1232g), Children's Online Privacy Protection Act of 1998, 15 U.S.C. 6501–6505, and Confidential Information Protection and Statistical Efficiency Act (CIPSEA) (116 Stat. 2899).



• Continuous positive airway pressure machines gather data for medical purposes, such as diagnosing sleep apnea, and send usage data to a patient's insurance company, which may subsequently deny coverage for the device based on usage data. Patients were not aware that the data would be used in this way or monitored by anyone other than their doctor.69
• A department store company used predictive analytics applied to collected consumer data to determine that a teenage girl was pregnant, and sent maternity clothing ads and other baby-related advertisements to her house, revealing to her father that she was pregnant.70

69 Marshall Allen. You Snooze, You Lose: Insurers Make The Old Adage Literally True. ProPublica. Nov. 21, 2018. https://www.propublica.org/article/you-snooze-you-lose-insurers-make-the-old-adage-literally-true.
70 Charles Duhigg. How Companies Learn Your Secrets. The New York Times. Feb. 16, 2012. https://www.nytimes.com/2012/02/19/magazine/shopping-habits.html.


• School audio surveillance systems monitor student conversations to detect potential "stress indicators" as a warning of potential violence.71
• Online proctoring systems claim to detect if a student is cheating on an exam using biometric markers.72 These systems have the potential to limit student freedom to express a range of emotions at school and may inappropriately flag students with disabilities who need accommodations or use screen readers or dictation software as cheating.73
• Location data, acquired from a data broker, can be used to identify people who visit abortion clinics.74
• Companies collect student data such as demographic information, free or reduced lunch status, whether they've used drugs, or whether they've expressed interest in LGBTQI+ groups, and then use that data to forecast student success.75 Parents and education experts have expressed concern about collection of such sensitive data without express parental consent, the lack of transparency in how such data is being used, and the potential for resulting discriminatory impacts.
• Many employers transfer employee data to third party job verification services. This information is then used by potential future employers, banks, or landlords. In one case, a former employee alleged that a company supplied false data about her job title which resulted in a job offer being revoked.76

71 Jack Gillum and Jeff Kao. Aggression Detectors: The Unproven, Invasive Surveillance Technology Schools are Using to Monitor Students. ProPublica. Jun. 25, 2019. https://features.propublica.org/aggression-detector/the-unproven-invasive-surveillance-technology-schools-are-using-to-monitor-students/.
72 Drew Harwell. Cheating-detection companies made millions during the pandemic. Now students are fighting back. Washington Post. Nov. 12, 2020. https://www.washingtonpost.com/technology/2020/11/12/test-monitoring-student-revolt/.
73 See, e.g., Heather Morrison. Virtual Testing Puts Disabled Students at a Disadvantage. Government Technology. May 24, 2022. https://www.govtech.com/education/k-12/virtual-testing-puts-disabled-students-at-a-disadvantage; Lydia X. Z. Brown, Ridhi Shetty, Matt Scherer, and Andrew Crawford. Ableism And Disability Discrimination In New Surveillance Technologies: How new surveillance technologies in education, policing, health care, and the workplace disproportionately harm disabled people. Center for Democracy and Technology Report. May 24, 2022. https://cdt.org/insights/ableism-and-disability-discrimination-in-new-surveillance-technologies-how-new-surveillance-technologies-in-education-policing-health-care-and-the-workplace-disproportionately-harm-disabled-people/.
74 See, e.g., Sam Sabin. Digital surveillance in a post-Roe world. Politico. May 5, 2022. https://www.politico.com/newsletters/digital-future-daily/2022/05/05/digital-surveillance-in-a-post-roe-world-00030459; Federal Trade Commission. FTC Sues Kochava for Selling Data that Tracks People at Reproductive Health Clinics, Places of Worship, and Other Sensitive Locations. Aug. 29, 2022. https://www.ftc.gov/news-events/news/press-releases/2022/08/ftc-sues-kochava-selling-data-tracks-people-reproductive-health-clinics-places-worship-other.
75 Todd Feathers. This Private Equity Firm Is Amassing Companies That Collect Data on America's Children. The Markup. Jan. 11, 2022. https://themarkup.org/machine-learning/2022/01/11/this-private-equity-firm-is-amassing-companies-that-collect-data-on-americas-children.

What Should Be Expected of Automated Systems
The expectations for automated systems are meant to serve as a blueprint for the development of additional technical standards and practices that are tailored for particular sectors and contexts.

In addition to the privacy expectations above for general non-sensitive data, any system collecting, using, sharing, or storing sensitive data should meet the expectations below. Depending on the technological use case and based on an ethical assessment, consent for sensitive data may need to be acquired from a guardian and/or child.

Provide Enhanced Protections for Data Related to Sensitive Domains

Necessary Functions Only
Sensitive data should only be used for functions strictly necessary for that domain or for functions that are required for administrative reasons (e.g., school attendance records), unless consent is acquired, if appropriate, and the additional expectations in this section are met. Consent for non-necessary functions should be optional, i.e., should not be required, incentivized, or coerced in order to receive opportunities or access to services. In cases where data is provided to an entity (e.g., health insurance company) in order to facilitate payment for such a need, that data should only be used for that purpose.

76 Reed Albergotti. Every employee who leaves Apple becomes an 'associate': In job databases used by employers to verify resume information, every former Apple employee's title gets erased and replaced with a generic title. The Washington Post. Feb. 10, 2022. https://www.washingtonpost.com/technology/2022/02/10/apple-associate/.


Ethical Review and Use Prohibitions
Any use of sensitive data or decision process based in part on sensitive data that might limit rights, opportunities, or access, whether the decision is automated or not, should go through a thorough ethical review and monitoring, both in advance and by periodic review (e.g., via an independent ethics committee or similarly robust process). In some cases, this ethical review may determine that data should not be used or shared for specific uses even with consent. Some novel uses of automated systems in this context, where the algorithm is dynamically developing and where the science behind the use case is not well established, may also count as human subject experimentation, and require special review under organizational compliance bodies applying medical, scientific, and academic human subject experimentation ethics rules and governance procedures.

Data Quality
In sensitive domains, entities should be especially careful to maintain the quality of data to avoid adverse consequences arising from decision-making based on flawed or inaccurate data. Such care is necessary in a fragmented, complex data ecosystem and for datasets that have limited access such as for fraud prevention and law enforcement. It should not be left solely to individuals to carry the burden of reviewing and correcting data. Entities should conduct regular, independent audits and take prompt corrective measures to maintain accurate, timely, and complete data.

Limit Access to Sensitive Data and Derived Data
Sensitive data and derived data should not be sold, shared, or made public as part of data brokerage or other agreements. Sensitive data includes data that can be used to infer sensitive information; even systems that are not directly marketed as sensitive domain technologies are expected to keep sensitive data private. Access to such data should be limited based on necessity and based on a principle of local control, such that those individuals closest to the data subject have more access while those who are less proximate do not (e.g., a teacher has access to their students' daily progress data while a superintendent does not).

Reporting
In addition to the reporting on data privacy (as listed above for non-sensitive data), entities developing technologies related to a sensitive domain and those collecting, using, storing, or sharing sensitive data should, whenever appropriate, regularly provide public reports describing: any data security lapses or breaches that resulted in sensitive data leaks; the number, type, and outcomes of ethical pre-reviews undertaken; a description of any data sold, shared, or made public, and how that data was assessed to determine it did not present a sensitive data risk; and ongoing risk identification and management procedures, and any mitigation added based on these procedures. Reporting should be provided in a clear and machine-readable manner.
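The "principle of local control" described under "Limit Access to Sensitive Data and Derived Data" can be illustrated with a simple role-based check in which access is tied to proximity to the data subject. The roles, data types, and policy mapping below are hypothetical examples, not a recommended configuration.

```python
# Sketch of proximity-based access to sensitive records; policy values are assumptions.
ACCESS_POLICY = {
    "daily_progress_data": {"student", "guardian", "teacher"},          # closest to the student
    "aggregate_school_metrics": {"teacher", "principal", "superintendent"},
}

def may_access(role: str, data_type: str) -> bool:
    """Allow access only when the role is within the permitted set for that data type."""
    return role in ACCESS_POLICY.get(data_type, set())

assert may_access("teacher", "daily_progress_data")
assert not may_access("superintendent", "daily_progress_data")  # less proximate role is excluded
```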

How These Principles Can Move into Practice
Real-life examples of how these principles can become reality, through laws, policies, and practical technical and sociotechnical approaches to protecting rights, opportunities, and access.

The Privacy Act of 1974 Requires Privacy Protections for Personal Information in Federal Records Systems, Including Limits on Data Retention, and Also Provides Individuals a General Right to Access and Correct Their Data
Among other things, the Privacy Act limits the storage of individual information in federal systems of records, illustrating the principle of limiting the scope of data retention. Under the Privacy Act, federal agencies may only retain data about an individual that is "relevant and necessary" to accomplish an agency's statutory purpose or to comply with an Executive Order of the President. The law allows for individuals to be able to access any of their individual information stored in a federal system of records, if not included under one of the systems of records exempted pursuant to the Privacy Act. In these cases, federal agencies must provide a method for an individual to determine if their personal information is stored in a particular system of records, and must provide procedures for an individual to contest the contents of a record about them. Further, the Privacy Act allows for a cause of action for an individual to seek legal relief if a federal agency does not comply with the Privacy Act's requirements. Among other things, a court may order a federal agency to amend or correct an individual's information in its records or award monetary damages if an inaccurate, irrelevant, untimely, or incomplete record results in an adverse determination about an individual's "qualifications, character, rights, … opportunities…, or benefits."


NIST's Privacy Framework Provides a Comprehensive, Detailed and Actionable Approach for Organizations to Manage Privacy Risks
The NIST Framework gives organizations ways to identify and communicate their privacy risks and goals to support ethical decision-making in system, product, and service design or deployment, as well as the measures they are taking to demonstrate compliance with applicable laws or regulations. It has been voluntarily adopted by organizations across many different sectors around the world.77

A School Board's Attempt to Surveil Public School Students—Undertaken without Adequate Community Input—Sparked a State-Wide Biometrics Moratorium78
Reacting to a plan in the city of Lockport, New York, the state's legislature banned the use of facial recognition systems and other "biometric identifying technology" in schools until July 1, 2022.79 The law additionally requires that a report on the privacy, civil rights, and civil liberties implications of the use of such technologies be issued before biometric identification technologies can be used in New York schools.

Federal Law Requires Employers, and Any Consultants They May Retain, to Report the Costs of Surveilling Employees in the Context of a Labor Dispute, Providing a Transparency Mechanism to Help Protect Worker Organizing
Employers engaging in workplace surveillance "where an object thereof, directly or indirectly, is […] to obtain information concerning the activities of employees or a labor organization in connection with a labor dispute" must report expenditures relating to this surveillance to the Department of Labor Office of Labor-Management Standards, and consultants who employers retain for these purposes must also file reports regarding their activities.80

77 National Institute of Standards and Technology. Privacy Framework Perspectives and Success Stories. Accessed May 2, 2022. https://www.nist.gov/privacy-framework/getting-started-0/perspectives-and-success-stories.
78 ACLU of New York. What You Need to Know About New York's Temporary Ban on Facial Recognition in Schools. Accessed May 2, 2022. https://www.nyclu.org/en/publications/what-you-need-know-about-new-yorks-temporary-ban-facial-recognition-schools.
79 New York State Assembly. Amendment to Education Law. Enacted Dec. 22, 2020. https://nyassembly.gov/leg/?default_fld=&leg_video=&bn=S05140&term=2019&Summary=Y&Text=Y.

Privacy Choices on Smartphones Show That When Technologies Are Well Designed, Privacy and Data Agency Can Be Meaningful and Not Overwhelming
These choices—such as contextual, timely alerts about location tracking—are brief, direct, and use-specific. Many of the expectations listed here for privacy by design and use-specific consent mirror those distributed to developers as best practices when developing for smartphone devices,81 such as being transparent about how user data will be used, asking for app permissions during their use so that the use-context will be clear to users, and ensuring that the app will still work if users deny (or later revoke) some permissions.

Notice and Explanation

You Should Know That an Automated System Is Being Used, and Understand How and Why It Contributes to Outcomes That Impact You
Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible. Such notice should be kept up-to-date and people impacted by the system should be notified of significant use case or key functionality changes. You should know how and why an outcome impacting you was determined by an automated system, including when the automated system is not the sole input determining the outcome. Automated systems should provide explanations that are technically valid, meaningful and useful to you and to any operators or others who need to understand the system, and calibrated to the level of risk based on the context. Reporting that includes summary information about these automated systems in plain language and assessments of the clarity and quality of the notice and explanations should be made public whenever possible.

80 U.S. Department of Labor. Labor-Management Reporting and Disclosure Act of 1959, As Amended. https://www.dol.gov/agencies/olms/laws/labor-management-reporting-and-disclosure-act (Section 203). See also: U.S. Department of Labor. Form LM-10. OLMS Fact Sheet, Accessed May 2, 2022. https://www.dol.gov/sites/dolgov/files/OLMS/regs/compliance/LM-10_factsheet.pdf.
81 See, e.g., Apple. Protecting the User's Privacy. Accessed May 2, 2022. https://developer.apple.com/documentation/uikit/protecting_the_user_s_privacy; Google Developers. Design for Safety: Android is secure by default and private by design. Accessed May 3, 2022. https://developer.android.com/design-for-safety.

Why This Principle Is Important
This section provides a brief summary of the problems which the principle seeks to address and protect against, including illustrative examples.

Automated systems now determine opportunities, from employment to credit, and directly shape the American public's experiences, from the courtroom to online classrooms, in ways that profoundly impact people's lives. But this expansive impact is not always visible. An applicant might not know whether a person rejected their resume or a hiring algorithm moved them to the bottom of the list. A defendant in the courtroom might not know if a judge denying their bail is informed by an automated system that labeled them "high risk." From correcting errors to contesting decisions, people are often denied the knowledge they need to address the impact of automated systems on their lives. Notice and explanations also serve an important safety and efficacy purpose, allowing experts to verify the reasonableness of a recommendation before enacting it. In order to guard against potential harms, the American public needs to know if an automated system is being used. Clear, brief, and understandable notice is a prerequisite for achieving the other protections in this framework. Likewise, the public is often unable to ascertain how or why an automated system has made a decision or contributed to a particular outcome. The decision-making processes of automated systems tend to be opaque, complex, and, therefore, unaccountable, whether by design or by omission. These factors can make explanations both more challenging and more important, and should not be used as a pretext to avoid explaining important decisions to the people impacted by those choices. In the context of automated systems, clear and valid explanations should be recognized as a baseline requirement. Providing notice has long been a standard practice, and in many cases is a legal requirement, when, for example, making a video recording of someone (outside of a law enforcement or national security context). In some cases, such as credit, lenders are required to provide notice and explanation to consumers. Techniques used to automate the process of explaining such systems are under active research and improvement and such explanations can take many forms. Innovative companies and researchers are rising to the challenge and creating and deploying explanatory systems that can help the public better understand decisions that impact them. While notice and explanation requirements are already in place in some sectors or situations, the American public deserves to know consistently and across sectors if an automated system is being used in a way that impacts their rights, opportunities, or access. This knowledge should provide confidence in how the public is being treated, and trust in the validity and reasonable use of automated systems.





• A lawyer representing an older client with disabilities who had been cut off from Medicaid-funded home health-care assistance couldn't determine why, especially since the decision went against historical access practices. In a court hearing, the lawyer learned from a witness that the state in which the older client lived had recently adopted a new algorithm to determine eligibility.82 The lack of a timely explanation made it harder to understand and contest the decision.
• A formal child welfare investigation is opened against a parent based on an algorithm and without the parent ever being notified that data was being collected and used as part of an algorithmic child maltreatment risk assessment.83 The lack of notice or an explanation makes it harder for those performing child maltreatment assessments to validate the risk assessment and denies parents knowledge that could help them contest a decision.
• A predictive policing system claimed to identify individuals at greatest risk to commit or become the victim of gun violence (based on automated analysis of social ties to gang members, criminal histories, previous experiences of gun violence, and other factors) and led to individuals being placed on a watch list with no explanation or public transparency regarding how the system came to its conclusions.84 Both police and the public deserve to understand why and how such a system is making these determinations.
• A system awarding benefits changed its criteria invisibly. Individuals were denied benefits due to data entry errors and other system flaws. These flaws were only revealed when an explanation of the system was demanded and produced.85 The lack of an explanation made it harder for errors to be corrected in a timely manner.

82 Karen Hao. The coming war on the hidden algorithms that trap people in poverty. MIT Tech Review. Dec. 4, 2020. https://www.technologyreview.com/2020/12/04/1013068/algorithms-create-a-poverty-trap-lawyers-fight-back/.
83 Anjana Samant, Aaron Horowitz, Kath Xu, and Sophie Beiers. Family Surveillance by Algorithm. ACLU. Accessed May 2, 2022. https://www.aclu.org/fact-sheet/familysurveillance-algorithm.

What Should Be Expected of Automated Systems
The expectations for automated systems are meant to serve as a blueprint for the development of additional technical standards and practices that are tailored for particular sectors and contexts.

An automated system should provide demonstrably clear, timely, understandable, and accessible notice of use, and explanations as to how and why a decision was made or an action was taken by the system. These expectations are explained below.

Provide Clear, Timely, Understandable, and Accessible Notice of Use and Explanations
Generally Accessible Plain Language Documentation
The entity responsible for using the automated system should ensure that documentation describing the overall system (including any human components) is public and easy to find. The documentation should describe, in plain language, how the system works and how any automated component is used to determine an action or decision. It should also include expectations about reporting described throughout this framework, such as the algorithmic impact assessments described as part of Algorithmic Discrimination Protections.

84 Mick Dumke and Frank Main. A look inside the watch list Chicago police fought to keep secret. The Chicago Sun Times. May 18, 2017. https://chicago.suntimes.com/2017/5/18/18386116/a-look-inside-the-watch-list-chicago-police-fought-to-keep-secret.
85 Jay Stanley. Pitfalls of Artificial Intelligence Decisionmaking Highlighted In Idaho ACLU Case. ACLU. Jun. 2, 2017. https://www.aclu.org/blog/privacy-technology/pitfallsartificial-intelligence-decisionmaking-highlighted-idaho-aclu-case.


Accountable
Notices should clearly identify the entity responsible for designing each component of the system and the entity using it.
Timely and Up-to-Date
Users should receive notice of the use of automated systems in advance of using or while being impacted by the technology. An explanation should be available with the decision itself, or soon thereafter. Notice should be kept up-to-date and people impacted by the system should be notified of use case or key functionality changes.
Brief and Clear
Notices and explanations should be assessed, such as by research on users’ experiences, including user testing, to ensure that the people using or impacted by the automated system are able to easily find notices and explanations, read them quickly, and understand and act on them. This includes ensuring that notices and explanations are accessible to users with disabilities and are available in the language(s) and reading level appropriate for the audience. Notices and explanations may need to be available in multiple forms (e.g., on paper, on a physical sign, or online) in order to meet these expectations and to be accessible to the American public.
Provide Explanations as to How and Why a Decision Was Made or an Action Was Taken by an Automated System
Tailored to the Purpose
Explanations should be tailored to the specific purpose for which the user is expected to use the explanation, and should clearly state that purpose. An informational explanation might differ from an explanation provided to allow for the possibility of recourse, an appeal, or one provided in the context of a dispute or contestation process. For the purposes of this framework, 'explanation' should be construed broadly. An explanation need not be a plain-language statement about causality but could consist of any mechanism that allows the recipient to build the necessary understanding and intuitions to achieve the stated purpose. Tailoring should be assessed (e.g., via user experience research).


Tailored to the Target of the Explanation
Explanations should be targeted to specific audiences and clearly state that audience. An explanation provided to the subject of a decision might differ from one provided to an advocate, or to a domain expert or decision maker. Tailoring should be assessed (e.g., via user experience research).
Tailored to the Level of Risk
An assessment should be done to determine the level of risk of the automated system. In settings where the consequences are high as determined by a risk assessment, or extensive oversight is expected (e.g., in criminal justice or some public sector settings), explanatory mechanisms should be built into the system design so that the system’s full behavior can be explained in advance (i.e., only fully transparent models should be used), rather than as an after-the-decision interpretation. In other settings, the extent of explanation provided should be tailored to the risk level.
Valid
The explanation provided by a system should accurately reflect the factors and the influences that led to a particular decision, and should be meaningful for the particular customization based on purpose, target, and level of risk. While approximation and simplification may be necessary for the system to succeed based on the explanatory purpose and target of the explanation, or to account for the risk of fraud or other concerns related to revealing decision-making information, such simplifications should be done in a scientifically supportable way. Where appropriate based on the explanatory system, error ranges for the explanation should be calculated and included in the explanation, with the choice of presentation of such information balanced with usability and overall interface complexity concerns.
Demonstrate Protections for Notice and Explanation
Reporting
Summary reporting should document the determinations made based on the above considerations, including: the responsible entities for accountability purposes; the goal and use cases for the system, identified users, and impacted populations; the assessment of notice clarity and timeliness; the assessment of the explanation's validity and accessibility; the assessment of the level of risk; and the account and assessment of how explanations are tailored, including to the purpose, the recipient of the explanation, and the level of risk. Individualized profile information should be made readily available to the greatest extent possible, and should include explanations for any system impacts or inferences. Reporting should be provided in a clear plain language and machine-readable manner.
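As an illustration only of what such reporting might look like in practice, the sketch below renders a hypothetical summary report in a machine-readable form alongside plain-language field values. The schema, field names, and example values are assumptions for illustration; the Blueprint does not prescribe a reporting format.

```python
# Minimal sketch of a machine-readable notice-and-explanation summary report.
# The schema and values below are hypothetical; the framework defines no format.
from dataclasses import dataclass, asdict, field
import json


@dataclass
class NoticeAndExplanationReport:
    responsible_entity: str              # who is accountable for the system
    system_purpose: str                  # goal and use cases
    impacted_populations: list[str]      # identified users and impacted groups
    notice_clarity_assessment: str       # e.g., summary of user-testing results
    explanation_validity_assessment: str
    risk_level: str                      # e.g., "high", "moderate", "low"
    explanation_tailoring: dict[str, str] = field(default_factory=dict)


report = NoticeAndExplanationReport(
    responsible_entity="Example Benefits Agency",
    system_purpose="Screen applications for eligibility review",
    impacted_populations=["benefit applicants"],
    notice_clarity_assessment="Plain-language notice tested with 40 users",
    explanation_validity_assessment="Explanations checked against decision logs",
    risk_level="high",
    explanation_tailoring={
        "applicant": "reasons for the decision and appeal steps",
        "caseworker": "full factor weights and data sources",
    },
)

# Publish the same content in plain language and in machine-readable form.
print(json.dumps(asdict(report), indent=2))
```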

How These Principles Can Move into Practice
Real-Life Examples of How These Principles Can Become Reality, Through Laws, Policies, and Practical Technical and Sociotechnical Approaches to Protecting Rights, Opportunities, and Access
People in Illinois Are Given Written Notice by the Private Sector if Their Biometric Information Is Used
The Biometric Information Privacy Act enacted by the state contains a number of provisions concerning the use of individual biometric data and identifiers. Included among them is a provision that no private entity may "collect, capture, purchase, receive through trade, or otherwise obtain" such information about an individual, unless written notice is provided to that individual or their legally appointed representative.86
Major Technology Companies Are Piloting New Ways to Communicate with the Public About Their Automated Technologies
For example, a collection of non-profit organizations and companies have worked together to develop a framework that defines operational approaches to transparency for machine learning systems.87 This framework, and others like it,88 inform the public about the use of these tools, going beyond simple notice to include reporting elements such as safety evaluations, disparity assessments, and explanations of how the systems work.

86 Illinois General Assembly. Biometric Information Privacy Act. Effective Oct. 3, 2008. https://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=3004&ChapterID=57.
87 Partnership on AI. ABOUT ML Reference Document. Accessed May 2, 2022. https://partnershiponai.org/paper/about-ml-reference-document/1/.
88 See, e.g., the model cards framework: Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. Model Cards for Model Reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). Association for Computing Machinery, New York, NY, USA, 220–229. https://dl.acm.org/doi/10.1145/3287560.3287596.


Lenders Are Required by Federal Law to Notify Consumers About Certain Decisions Made About Them
Both the Fair Credit Reporting Act and the Equal Credit Opportunity Act require in certain circumstances that consumers who are denied credit receive "adverse action" notices. Anyone who relies on the information in a credit report to deny a consumer credit must, under the Fair Credit Reporting Act, provide an "adverse action" notice to the consumer, which includes "notice of the reasons a creditor took adverse action on the application or on an existing credit account."89 In addition, under the risk-based pricing rule,90 lenders must either inform borrowers of their credit score, or else tell consumers when "they are getting worse terms because of information in their credit report." The CFPB has also asserted that "[t]he law gives every applicant the right to a specific explanation if their application for credit was denied, and that right is not diminished simply because a company uses a complex algorithm that it doesn't understand."91 Such explanations illustrate a shared value that certain decisions need to be explained.
A California Law Requires That Warehouse Employees Are Provided with Notice and Explanation About Quotas, Potentially Facilitated by Automated Systems, That Apply to Them
Warehousing employers in California that use quota systems (often facilitated by algorithmic monitoring systems) are required to provide employees with a written description of each quota that applies to the employee, including “quantified number of tasks to be performed or materials to be produced or handled, within the defined time period, and any potential adverse employment action that could result from failure to meet the quota.”92

89 Sarah Ammermann. Adverse Action Notice Requirements Under the ECOA and the FCRA. Consumer Compliance Outlook. Second Quarter 2013. https://consumercomplianceoutlook.org/2013/second-quarter/adverse-action-notice-requirements-under-ecoa-fcra/.
90 Federal Trade Commission. Using Consumer Reports for Credit Decisions: What to Know About Adverse Action and Risk-Based Pricing Notices. Accessed May 2, 2022. https://www.ftc.gov/business-guidance/resources/using-consumer-reports-credit-decisionswhat-know-about-adverse-action-risk-based-pricing-notices#risk.
91 Consumer Financial Protection Bureau. CFPB Acts to Protect the Public from Black-Box Credit Models Using Complex Algorithms. May 26, 2022. https://www.consumerfinance.gov/about-us/newsroom/cfpb-acts-to-protect-the-public-from-black-box-credit-modelsusing-complex-algorithms/.
92 Anthony Zaller. California Passes Law Regulating Quotas In Warehouses – What Employers Need to Know About AB 701. Zaller Law Group California Employment Law Report. Sept. 24, 2021. https://www.californiaemploymentlawreport.com/2021/09/california-passes-law-regulating-quotas-in-warehouses-what-employers-need-to-know-about-ab-701/.


Across the Federal Government, Agencies Are Conducting and Supporting Research on Explainable AI Systems
NIST is conducting fundamental research on the explainability of AI systems. A multidisciplinary team of researchers aims to develop measurement methods and best practices to support the implementation of core tenets of explainable AI.93 The Defense Advanced Research Projects Agency has a program on Explainable Artificial Intelligence that aims to create a suite of machine learning techniques that produce more explainable models, while maintaining a high level of learning performance (prediction accuracy), and enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.94 The National Science Foundation’s program on Fairness in Artificial Intelligence also includes a specific interest in research foundations for explainable AI.95

93 National Institute of Standards and Technology. AI Fundamental Research – Explainability. Accessed Jun. 4, 2022. https://www.nist.gov/artificial-intelligence/ai-fundamental-research-explainability.
94 DARPA. Explainable Artificial Intelligence (XAI). Accessed July 20, 2022. https://www.darpa.mil/program/explainable-artificial-intelligence.
95 National Science Foundation. NSF Program on Fairness in Artificial Intelligence in Collaboration with Amazon (FAI). Accessed July 20, 2022. https://www.nsf.gov/pubs/2021/nsf21585/nsf21585.htm.
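As a simplified, hypothetical illustration of the kind of technique this research is concerned with, the sketch below uses a fully transparent linear scoring rule that reports, for each decision, how strongly every input factor pushed the outcome up or down. The features, weights, and threshold are invented for illustration and are not drawn from any agency program.

```python
# Illustrative sketch: a fully transparent linear scoring rule whose decisions
# can be explained factor by factor. All weights and features are hypothetical.
FEATURE_WEIGHTS = {
    "months_of_employment": 0.4,
    "prior_defaults": -2.0,
    "debt_to_income_ratio": -1.5,
}
INTERCEPT = 1.0
THRESHOLD = 0.0


def score_and_explain(applicant: dict[str, float]) -> dict:
    """Return the decision together with each factor's contribution to it."""
    contributions = {
        name: weight * applicant[name] for name, weight in FEATURE_WEIGHTS.items()
    }
    total = INTERCEPT + sum(contributions.values())
    return {
        "approved": total >= THRESHOLD,
        "score": total,
        # Factors sorted by how strongly they pushed the score up or down.
        "contributions": dict(
            sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
        ),
    }


print(score_and_explain(
    {"months_of_employment": 24, "prior_defaults": 1, "debt_to_income_ratio": 0.6}
))
```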

Human Alternatives, Consideration, and Fallback
You Should Be Able to Opt out, Where Appropriate, and Have Access to a Person Who Can Quickly Consider and Remedy Problems You Encounter
You should be able to opt out from automated systems in favor of a human alternative, where appropriate. Appropriateness should be determined based on reasonable expectations in a given context and with a focus on ensuring broad accessibility and protecting the public from especially harmful impacts. In some cases, a human or other alternative may be required by law. You should have access to timely human consideration and remedy by a fallback and escalation process if an automated system fails, produces an error, or you would like to appeal or contest its impacts on you. Human consideration and fallback should be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and should not impose an unreasonable burden on the public. Automated systems with an intended use within sensitive domains, including, but not limited to, criminal justice, employment, education, and health, should additionally be tailored to the purpose, provide meaningful access for oversight, include training for any people interacting with the system, and incorporate human consideration for adverse or high-risk decisions. Reporting that includes a description of these human governance processes and assessment of their timeliness, accessibility, outcomes, and effectiveness should be made public whenever possible.

Why This Principle Is Important
This section provides a brief summary of the problems which the principle seeks to address and protect against, including illustrative examples.

There are many reasons people may prefer not to use an automated system: the system can be flawed and can lead to unintended outcomes; it may reinforce bias or be inaccessible; it may simply be inconvenient or unavailable; or it may replace a paper or manual process to which people had grown accustomed. Yet members of the public are often presented with no alternative, or are forced to endure a cumbersome process to reach a human decision-maker once they decide they no longer want to deal exclusively with the automated system or be impacted by its results. As a result of this lack of human reconsideration, many receive delayed access, or lose access, to rights, opportunities, benefits, and critical services. The American public deserves the assurance that, when rights, opportunities, or access are meaningfully at stake and there is a reasonable expectation of an alternative to an automated system, they can conveniently opt out of an automated system and will not be disadvantaged for that choice. In some cases, such a human or other alternative may be required by law, for example it could be required as “reasonable accommodations” for people with disabilities. In addition to being able to opt out and use a human alternative, the American public deserves a human fallback system in the event that an automated system fails or causes harm. No matter how rigorously an automated system is tested, there will always be situations for which the system fails. The American public deserves protection via human review against these outlying or unexpected scenarios. In the case of time-critical systems, the public should not have to wait—immediate human consideration and fallback should be available. In many time-critical systems, such a remedy is already immediately available, such as a building manager who can open a door in the case an automated card access system fails. In the criminal justice system, employment, education, healthcare, and other sensitive domains, automated systems are used for many purposes, from pre-trial risk assessments and parole decisions to technologies that help doctors diagnose disease. Absent appropriate safeguards, these technologies can lead to unfair, inaccurate, or dangerous outcomes. These sensitive domains require extra protections. It is critically important that there is extensive human oversight in such settings. These critical protections have been adopted in some scenarios. Where automated systems have been introduced to provide the public access to government benefits, existing human paper and phone-based processes are generally still in place, providing an important alternative to ensure access. Companies that have introduced automated call centers often retain the option of dialing zero to reach an operator. When automated identity controls are in place to board an airplane or enter the country, there is a person supervising the systems who can be turned to for help or to appeal a misidentification. The American people deserve the reassurance that such procedures are in place to protect their rights, opportunities, and access. People make mistakes, and a human alternative or fallback mechanism will not always have the right answer, but they serve as an important check on the power and validity of automated systems.

• An automated signature matching system is used as part of the voting process in many parts of the country to determine whether the signature on a mail-in ballot matches the signature on file. These signature matching systems are less likely to work correctly for some voters, including voters with mental or physical disabilities, voters with shorter or hyphenated names, and voters who have changed their name.96 A human curing process,97 which helps voters to confirm their signatures and correct other voting mistakes, is important to ensure all votes are counted,98 and it is already standard practice in much of the country for both an election official and the voter to have the opportunity to review and correct any such issues.99
• An unemployment benefits system in Colorado required, as a condition of accessing benefits, that applicants have a smartphone in order to verify their identity. No alternative human option was readily available, which denied many people access to benefits.100
• A fraud detection system for unemployment insurance distribution incorrectly flagged entries as fraudulent, leading to people with slight discrepancies or complexities in their files having their wages withheld and tax returns seized without any chance to explain themselves or receive a review by a person.101
• A patient was wrongly denied access to pain medication when the hospital's software confused her medication history with that of her dog. Even after she tracked down an explanation for the problem, doctors were afraid to override the system, and she was forced to go without pain relief due to the system's error.102
• A large corporation automated performance evaluation and other HR functions, leading to workers being fired by an automated system without the possibility of human review, appeal or other form of recourse.103

96 Kyle Wiggers. Automatic signature verification software threatens to disenfranchise U.S. voters. VentureBeat. Oct. 25, 2020. https://venturebeat.com/2020/10/25/automaticsignature-verification-software-threatens-to-disenfranchise-u-s-voters/.
97 Ballotpedia. Cure period for absentee and mail-in ballots. Article retrieved Apr 18, 2022. https://ballotpedia.org/Cure_period_for_absentee_and_mail-in_ballots.

98 Larry Buchanan and Alicia Parlapiano. Two of these Mail Ballot Signatures are by the Same Person. Which Ones? New York Times. Oct. 7, 2020. https://www.nytimes.com/interactive/2020/10/07/upshot/mail-voting-ballots-signature-matching.html.
99 Rachel Orey and Owen Bacskai. The Low Down on Ballot Curing. Nov. 04, 2020. https://bipartisanpolicy.org/blog/the-low-down-on-ballot-curing/.
100 Andrew Kenney. 'I'm shocked that they need to have a smartphone': System for unemployment benefits exposes digital divide. USA Today. May 2, 2021. https://www.usatoday.com/story/tech/news/2021/05/02/unemployment-benefits-system-leaving-peoplebehind/4915248001/.
101 Allie Gross. UIA lawsuit shows how the state criminalizes the unemployed. Detroit MetroTimes. Sep. 18, 2015. https://www.metrotimes.com/news/uia-lawsuit-shows-how-thestate-criminalizes-the-unemployed-2369412.
102 Maia Szalavitz. The Pain Was Unbearable. So Why Did Doctors Turn Her Away? Wired. Aug. 11, 2021. https://www.wired.com/story/opioid-drug-addiction-algorithm-chronic-pain/.
103 Spencer Soper. Fired by Bot at Amazon: "It's You Against the Machine." Bloomberg, Jun. 28, 2021. https://www.bloomberg.com/news/features/2021-06-28/fired-by-bot-amazon-turns-tomachine-managers-and-workers-are-losing-out.


What Should Be Expected of Automated Systems
The expectations for automated systems are meant to serve as a blueprint for the development of additional technical standards and practices that are tailored for particular sectors and contexts.

An automated system should provide demonstrably effective mechanisms to opt out in favor of a human alternative, where appropriate, as well as timely human consideration and remedy by a fallback system, with additional human oversight and safeguards for systems used in sensitive domains, and with training and assessment for any human-based portions of the system to ensure effectiveness.

Provide a Mechanism to Conveniently Opt out from Automated Systems in Favor of a Human Alternative, Where Appropriate
Brief, Clear, Accessible Notice and Instructions
Those impacted by an automated system should be given a brief, clear notice that they are entitled to opt out, along with clear instructions for how to opt out. Instructions should be provided in an accessible form and should be easily findable by those impacted by the automated system. The brevity, clarity, and accessibility of the notice and instructions should be assessed (e.g., via user experience research).
Human Alternatives Provided When Appropriate
In many scenarios, there is a reasonable expectation of human involvement in attaining rights, opportunities, or access. When automated systems make up part of the attainment process, alternative timely human-driven processes should be provided. The use of a human alternative should be triggered by an opt-out process.
Timely and Not Burdensome Human Alternative
Opting out should be timely and not unreasonably burdensome in both the process of requesting to opt out and the human-driven alternative provided.


Provide Timely Human Consideration and Remedy by a Fallback and Escalation System in the Event That an Automated System Fails, Produces an Error, or You Would Like to Appeal or Contest Its Impacts on You
Proportionate
The availability of human consideration and fallback, along with associated training and safeguards against human bias, should be proportionate to the potential of the automated system to meaningfully impact rights, opportunities, or access. Automated systems that have greater control over outcomes, provide input to high-stakes decisions, relate to sensitive domains, or otherwise have a greater potential to meaningfully impact rights, opportunities, or access should have greater availability (e.g., staffing) and oversight of human consideration and fallback mechanisms.
Accessible
Mechanisms for human consideration and fallback, whether in-person, on paper, by phone, or otherwise provided, should be easy to find and use. These mechanisms should be tested to ensure that users who have trouble with the automated system are able to use human consideration and fallback, with the understanding that it may be these users who are most likely to need the human assistance. Similarly, it should be tested to ensure that users with disabilities are able to find and use human consideration and fallback and also request reasonable accommodations or modifications.
Convenient
Mechanisms for human consideration and fallback should not be unreasonably burdensome as compared to the automated system’s equivalent.
Equitable
Consideration should be given to ensuring outcomes of the fallback and escalation system are equitable when compared to those of the automated system and such that the fallback and escalation system provides equitable access to underserved communities.104

104 Definitions of ‘equity’ and ‘underserved communities’ can be found in the Definitions section of this document as well as in Executive Order on Advancing Racial Equity and Support for Underserved Communities Through the Federal Government: https://www.whitehouse.gov/briefing-room/presidential-actions/2021/01/20/executive-order-advancing-racialequity-and-support-for-underserved-communities-through-the-federal-government/.


Timely
Human consideration and fallback are only useful if they are conducted and concluded in a timely manner. The determination of what is timely should be made relative to the specific automated system, and the review system should be staffed and regularly assessed to ensure it is providing timely consideration and fallback. In time-critical systems, this mechanism should be immediately available or, where possible, available before the harm occurs. Time-critical systems include, but are not limited to, voting-related systems, automated building access and other access systems, systems that form a critical component of healthcare, and systems that have the ability to withhold wages or otherwise cause immediate financial penalties.
Effective
The organizational structure surrounding processes for consideration and fallback should be designed so that if the human decision-maker charged with reassessing a decision determines that it should be overruled, the new decision will be effectively enacted. This includes ensuring that the new decision is entered into the automated system throughout its components, any previous repercussions from the old decision are also overturned, and safeguards are put in place to help ensure that future decisions do not result in the same errors.
Maintained
The human consideration and fallback process and any associated automated processes should be maintained and supported as long as the relevant automated system continues to be in use.
Institute Training, Assessment, and Oversight to Combat Automation Bias and Ensure any Human-Based Components of a System Are Effective
Training and Assessment
Anyone administering, interacting with, or interpreting the outputs of an automated system should receive training in that system, including how to properly interpret outputs of a system in light of its intended purpose and in how to mitigate the effects of automation bias. The training should reoccur regularly to ensure it is up to date with the system and to ensure the system is used appropriately. Assessment should be ongoing to ensure that the use of the system with human involvement provides for appropriate results, i.e., that the involvement of people does not invalidate the system's assessment as safe and effective or lead to algorithmic discrimination.

Oversight
Human-based systems have the potential for bias, including automation bias, as well as other concerns that may limit their effectiveness. The results of assessments of the efficacy and potential bias of such human-based systems should be overseen by governance structures that have the potential to update the operation of the human-based system in order to mitigate these effects.
Implement Additional Human Oversight and Safeguards for Automated Systems Related to Sensitive Domains
Automated systems used within sensitive domains, including criminal justice, employment, education, and health, should meet the expectations laid out throughout this framework, especially avoiding capricious, inappropriate, and discriminatory impacts of these technologies. Additionally, automated systems used within sensitive domains should meet these expectations:
Narrowly Scoped Data and Inferences
Human oversight should ensure that automated systems in sensitive domains are narrowly scoped to address a defined goal, justifying each included data item or attribute as relevant to the specific use case. Data included should be carefully limited to avoid algorithmic discrimination resulting from, e.g., use of community characteristics, social network analysis, or group-based inferences.
Tailored to the Situation
Human oversight should ensure that automated systems in sensitive domains are tailored to the specific use case and real-world deployment scenario, and evaluation testing should show that the system is safe and effective for that specific situation. Validation testing performed based on one location or use case should not be assumed to transfer to another.
Human Consideration before Any High-Risk Decision
Automated systems, where they are used in sensitive domains, may play a role in directly providing information or otherwise providing positive outcomes to impacted people. However, automated systems should not be allowed to directly intervene in high-risk situations, such as sentencing decisions or medical care, without human consideration.
Meaningful Access to Examine the System
Designers, developers, and deployers of automated systems should consider limited waivers of confidentiality (including those related to trade secrets) where necessary in order to provide meaningful oversight of systems used in sensitive domains, incorporating measures to protect intellectual property and trade secrets from unwarranted disclosure as appropriate. This includes (potentially private and protected) meaningful access to source code, documentation, and related data during any associated legal discovery, subject to effective confidentiality or court orders. Such meaningful access should include (but is not limited to) adhering to the principle on Notice and Explanation using the highest level of risk so the system is designed with built-in explanations; such systems should use fully-transparent models where the model itself can be understood by people needing to directly examine it.

Demonstrate Access to Human Alternatives, Consideration, and Fallback
Reporting
Reporting should include an assessment of timeliness and the extent of additional burden for human alternatives, aggregate statistics about who chooses the human alternative, along with the results of the assessment about brevity, clarity, and accessibility of notice and opt-out instructions. Reporting on the accessibility, timeliness, and effectiveness of human consideration and fallback should be made public at regular intervals for as long as the system is in use. This should include aggregated information about the number and type of requests for consideration, fallback employed, and any repeated requests; the timeliness of the handling of these requests, including mean wait times for different types of requests as well as maximum wait times; and information about the procedures used to address requests for consideration along with the results of the evaluation of their accessibility. For systems used in sensitive domains, reporting should include information about training and governance procedures for these technologies. Reporting should also include documentation of goals and assessment of meeting those goals, consideration of data included, and documentation of the governance of reasonable access to the technology. Reporting should be provided in a clear and machine-readable manner.
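A minimal sketch of how the aggregate timeliness figures described above (counts, mean wait times, and maximum wait times by request type) might be compiled from a log of human-consideration requests; the record format and values are hypothetical.

```python
# Minimal sketch: compiling the aggregate wait-time statistics described above
# from a log of human-consideration requests. Record format is hypothetical.
from statistics import mean

# Each record: (request_type, wait_time_in_hours)
request_log = [
    ("appeal", 4.0),
    ("appeal", 30.0),
    ("opt_out", 1.5),
    ("error_correction", 12.0),
    ("opt_out", 2.0),
]

by_type: dict[str, list[float]] = {}
for request_type, wait_hours in request_log:
    by_type.setdefault(request_type, []).append(wait_hours)

for request_type, waits in sorted(by_type.items()):
    print(
        f"{request_type}: {len(waits)} requests, "
        f"mean wait {mean(waits):.1f} h, max wait {max(waits):.1f} h"
    )
```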


How These Principles Can Move into Practice
Real-life examples of how these principles can become reality, through laws, policies, and practical technical and sociotechnical approaches to protecting rights, opportunities, and access.

Healthcare “Navigators” Help People Find Their Way through Online Signup Forms to Choose and Obtain Healthcare
A Navigator is “an individual or organization that's trained and able to help consumers, small businesses, and their employees as they look for health coverage options through the Marketplace (a government web site), including completing eligibility and enrollment forms.”105 For the 2022 plan year, the Biden-Harris Administration increased funding so that grantee organizations could “train and certify more than 1,500 Navigators to help uninsured consumers find affordable and comprehensive health coverage.”106
The Customer Service Industry Has Successfully Integrated Automated Services Such as Chat-Bots and AI-Driven Call Response Systems with Escalation to a Human Support Team107
Many businesses now use partially automated customer service platforms that help answer customer questions and compile common problems for human agents to review. These integrated human-AI systems allow companies to provide faster customer care while maintaining human agents to answer calls or otherwise respond to complicated requests. Using both AI and human agents is viewed as key to successful customer service.108

105 HealthCare.gov. Navigator - HealthCare.gov Glossary. Accessed May 2, 2022. https://www.healthcare.gov/glossary/navigator/.
106 Centers for Medicare & Medicaid Services. Biden-Harris Administration Quadruples the Number of Health Care Navigators Ahead of HealthCare.gov Open Enrollment Period. Aug. 27, 2021. https://www.cms.gov/newsroom/press-releases/biden-harris-administration-quadruples-number-health-care-navigators-ahead-healthcaregov-open.
107 See, e.g., McKinsey & Company. The State of Customer Care in 2022. July 8, 2022. https://www.mckinsey.com/business-functions/operations/our-insights/the-state-ofcustomer-care-in-2022; Sara Angeles. Customer Service Solutions for Small Businesses. Business News Daily. Jun. 29, 2022. https://www.businessnewsdaily.com/7575-customerservice-solutions.html.
108 Mike Hughes. Are We Getting The Best Out Of Our Bots? Co-Intelligence Between Robots & Humans. Forbes. Jul. 14, 2022. https://www.forbes.com/sites/mikehughes1/2022/07/14/are-we-getting-the-best-out-of-our-bots-co-intelligence-between-robots-humans/?sh=16a2bd207395.
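The escalation pattern described in the customer service example above can be sketched as a simple routing rule: the automated agent answers only when it is confident, and hands the conversation to a human queue when it is not, or when the customer asks for a person. The threshold and names below are hypothetical assumptions, not a description of any particular vendor's system.

```python
# Minimal sketch of an automated-response system with escalation to a human
# support queue. The threshold and record fields are hypothetical.
CONFIDENCE_THRESHOLD = 0.75


def route_request(message: str, bot_confidence: float, bot_reply: str) -> dict:
    """Answer automatically only when confident; otherwise hand off to a person."""
    wants_human = any(
        phrase in message.lower() for phrase in ("human", "agent", "representative")
    )
    if wants_human or bot_confidence < CONFIDENCE_THRESHOLD:
        # Forward the full context so the human agent does not start from zero.
        return {"handled_by": "human_queue", "context_forwarded": message}
    return {"handled_by": "bot", "reply": bot_reply}


print(route_request("Where is my order?", 0.92, "Your order ships tomorrow."))
print(route_request("I need to speak to an agent", 0.95, "n/a"))
```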


Ballot Curing Laws in at Least 24 States Require a Fallback System That Allows Voters to Correct Their Ballot and Have It Counted in the Case That a Voter Signature Matching Algorithm Incorrectly Flags Their Ballot as Invalid or There Is Another Issue with Their Ballot, and Review by an Election Official Does Not Rectify the Problem. Some Federal Courts Have Found That Such Cure Procedures Are Constitutionally Required109
Ballot curing processes vary among states, and include direct phone calls, emails, or mail contact by election officials.110 Voters are asked to provide alternative information or a new signature to verify the validity of their ballot.

109 Rachel Orey and Owen Bacskai. The Low Down on Ballot Curing. Nov. 04, 2020. https://bipartisanpolicy.org/blog/the-low-down-on-ballot-curing/; Zahavah Levine and Thea Raymond-Seidel. Mail Voting Litigation in 2020, Part IV: Verifying Mail Ballots. Oct. 29, 2020. https://www.lawfareblog.com/mail-voting-litigation-2020-part-iv-verifying-mailballots.
110 National Conference of State Legislatures. Table 15: States With Signature Cure Processes. Jan. 18, 2022. https://www.ncsl.org/research/elections-and-campaigns/vopp-table-15states-that-permit-voters-to-correct-signature-discrepancies.aspx.

Appendix
Examples of Automated Systems
The below examples are meant to illustrate the breadth of automated systems that, insofar as they have the potential to meaningfully impact rights, opportunities, or access to critical resources or services, should be covered by the Blueprint for an AI Bill of Rights. These examples should not be construed to limit that scope, which includes automated systems that may not yet exist, but which fall under these criteria. Examples of automated systems for which the Blueprint for an AI Bill of Rights should be considered include those that have the potential to meaningfully impact:

Civil rights, civil liberties, or privacy, including but not limited to:
• Speech-related systems such as automated content moderation tools;
• Surveillance and criminal justice system algorithms such as risk assessments, predictive policing, automated license plate readers, real-time facial recognition systems (especially those used in public places or during protected activities like peaceful protests), social media monitoring, and ankle monitoring devices;
• Voting-related systems such as signature matching tools;
• Systems with a potential privacy impact such as smart home systems and associated data, systems that use or collect health-related data, systems that use or collect education-related data, criminal justice system data, ad-targeting systems, and systems that perform big data analytics in order to build profiles or infer personal information about individuals; and
• Any system that has the meaningful potential to lead to algorithmic discrimination.
Equal opportunities, including but not limited to:
• Education-related systems such as algorithms that purport to detect student cheating or plagiarism, admissions algorithms, online or virtual reality student monitoring systems, projections of student progress or outcomes, algorithms that determine access to resources or programs, and surveillance of classes (whether online or in-person);
• Housing-related systems such as tenant screening algorithms, automated valuation systems that estimate the value of homes used in mortgage underwriting or home insurance, and automated valuations from online aggregator websites; and
• Employment-related systems such as workplace algorithms that inform all aspects of the terms and conditions of employment including, but not limited to, pay or promotion, hiring or termination algorithms, virtual or augmented reality workplace training programs, and electronic workplace surveillance and management systems.
Access to critical resources and services, including but not limited to:
• Health and health insurance technologies such as medical AI systems and devices, AI-assisted diagnostic tools, algorithms or predictive models used to support clinical decision making, medical or insurance health risk assessments, drug addiction risk assessments and associated access algorithms, wearable technologies, wellness apps, insurance care allocation algorithms, and health insurance cost and underwriting algorithms;
• Financial system algorithms such as loan allocation algorithms, financial system access determination algorithms, credit scoring systems, insurance algorithms including risk assessments, automated interest rate determinations, and financial algorithms that apply penalties (e.g., that can garnish wages or withhold tax returns);
• Systems that impact the safety of communities such as automated traffic control systems, electrical grid controls, smart city technologies, and industrial emissions and environmental impact control algorithms; and
• Systems related to access to benefits or services or assignment of penalties such as systems that support decision-makers who adjudicate benefits such as collating or analyzing information or matching records, systems which similarly assist in the adjudication of administrative or criminal penalties, fraud detection algorithms, services or benefits access control algorithms, biometric systems used as access control, and systems which make benefits or services related decisions on a fully or partially autonomous basis (such as a determination to revoke benefits).

Listening to the American People
The White House Office of Science and Technology Policy (OSTP) led a yearlong process to seek and distill input from people across the country – from impacted communities to industry stakeholders to technology developers to other experts across fields and sectors, as well as policymakers across the Federal government – on the issue of algorithmic and data-driven harms and potential remedies. Through panel discussions, public listening sessions, private meetings, a formal request for information, and input to a publicly accessible and widely-publicized email address, people across the United States spoke up about both the promises and potential harms of these technologies, and played a central role in shaping the Blueprint for an AI Bill of Rights.


Panel Discussions to Inform the Blueprint for an AI Bill of Rights
OSTP co-hosted a series of six panel discussions in collaboration with the Center for American Progress, the Joint Center for Political and Economic Studies, New America, the German Marshall Fund, the Electronic Privacy Information Center, and the Mozilla Foundation. The purpose of these convenings – recordings of which are publicly available online111 – was to bring together a variety of experts, practitioners, advocates and federal government officials to offer insights and analysis on the risks, harms, benefits, and policy opportunities of automated systems. Each panel discussion was organized around a wide-ranging theme, exploring current challenges and concerns and considering what an automated society that respects democratic values should look like. These discussions focused on the topics of consumer rights and protections, the criminal justice system, equal opportunities and civil justice, artificial intelligence and democratic values, social welfare and development, and the healthcare system.

Summaries of Panel Discussions
Panel 1: Consumer Rights and Protections
This event explored the opportunities and challenges for individual consumers and communities in the context of a growing ecosystem of AI-enabled consumer products, advanced platforms and services, “Internet of Things” (IoT) devices, and smart city products and services.
Welcome
• Rashida Richardson, Senior Policy Advisor for Data and Democracy, White House Office of Science and Technology Policy
• Karen Kornbluh, Senior Fellow and Director of the Digital Innovation and Democracy Initiative, German Marshall Fund
Moderator
Devin E. Willis, Attorney, Division of Privacy and Identity Protection, Bureau of Consumer Protection, Federal Trade Commission.

111 White House Office of Science and Technology Policy. Join the Effort to Create A Bill of Rights for an Automated Society. Nov. 10, 2021. https://www.whitehouse.gov/ostp/newsupdates/2021/11/10/join-the-effort-to-create-a-bill-of-rights-for-an-automated-society/.


Panelists
• Tamika L. Butler, Principal, Tamika L. Butler Consulting
• Jennifer Clark, Professor and Head of City and Regional Planning, Knowlton School of Engineering, Ohio State University
• Carl Holshouser, Senior Vice President for Operations and Strategic Initiatives, TechNet
• Surya Mattu, Senior Data Engineer and Investigative Data Journalist, The Markup
• Mariah Montgomery, National Campaign Director, Partnership for Working Families
Panelists discussed the benefits of AI-enabled systems and their potential to build better and more innovative infrastructure. They individually noted that while AI technologies may be new, the process of technological diffusion is not, and that it was critical to have thoughtful and responsible development and integration of technology within communities. Some panelists suggested that the integration of technology could benefit from examining how technological diffusion has worked in the realm of urban planning: lessons learned from successes and failures there include the importance of balancing ownership rights, use rights, and community health, safety, and welfare, as well as ensuring better representation of all voices, especially those traditionally marginalized by technological advances. Some panelists also raised the issue of power structures – providing examples of how strong transparency requirements in smart city projects helped to reshape power and give more voice to those lacking the financial or political power to effect change. In discussion of technical and governance interventions that are needed to protect against the harms of these technologies, various panelists emphasized the need for transparency, data collection, and flexible and reactive policy development, analogous to how software is continuously updated and deployed. Some panelists pointed out that companies need clear guidelines to have a consistent environment for innovation, with principles and guardrails being the key to fostering responsible innovation.

Panel 2: The Criminal Justice System
This event explored current and emergent uses of technology in the criminal justice system and considered how they advance or undermine public safety, justice, and democratic values.


Welcome
• Suresh Venkatasubramanian, Assistant Director for Science and Justice, White House Office of Science and Technology Policy
• Ben Winters, Counsel, Electronic Privacy Information Center
Moderator
Chiraag Bains, Deputy Assistant to the President on Racial Justice & Equity
Panelists
• Sean Malinowski, Director of Policing Innovation and Reform, University of Chicago Crime Lab
• Kristian Lum, Researcher
• Jumana Musa, Director, Fourth Amendment Center, National Association of Criminal Defense Lawyers
• Stanley Andrisse, Executive Director, From Prison Cells to PHD; Assistant Professor, Howard University College of Medicine
• Myaisha Hayes, Campaign Strategies Director, MediaJustice
Panelists discussed uses of technology within the criminal justice system, including the use of predictive policing, pretrial risk assessments, automated license plate readers, and prison communication tools. The discussion emphasized that communities deserve safety, and strategies need to be identified that lead to safety; such strategies might include data-driven approaches, but the focus on safety should be primary, and technology may or may not be part of an effective set of mechanisms to achieve safety. Various panelists raised concerns about the validity of these systems, the tendency of adverse or irrelevant data to lead to a replication of unjust outcomes, and the confirmation bias and tendency of people to defer to potentially inaccurate automated systems. Throughout, many of the panelists individually emphasized that the impact of these systems on individuals and communities is potentially severe: the systems lack individualization and work against the belief that people can change for the better, system use can lead to the loss of jobs and custody of children, and surveillance can lead to chilling effects for communities and sends negative signals to community members about how they're viewed. In discussion of technical and governance interventions that are needed to protect against the harms of these technologies, various panelists emphasized that transparency is important but is not enough to achieve accountability. Some panelists discussed their individual views on additional system needs for validity, and agreed upon the importance of advisory boards and compensated community input early in the design process (before the technology is built and instituted). Various panelists also emphasized the importance of regulation that includes limits to the type and cost of such technologies.

Panel 3: Equal Opportunities and Civil Justice
This event explored current and emerging uses of technology that impact equity of opportunity in employment, education, and housing.
Welcome
• Rashida Richardson, Senior Policy Advisor for Data and Democracy, White House Office of Science and Technology Policy
• Dominique Harrison, Director for Technology Policy, The Joint Center for Political and Economic Studies
Moderator
Jenny Yang, Director, Office of Federal Contract Compliance Programs, Department of Labor.
Panelists
• Christo Wilson, Associate Professor of Computer Science, Northeastern University
• Frida Polli, CEO, Pymetrics
• Karen Levy, Assistant Professor, Department of Information Science, Cornell University
• Natasha Duarte, Project Director, Upturn
• Elana Zeide, Assistant Professor, University of Nebraska College of Law
• Fabian Rogers, Constituent Advocate, Office of NY State Senator Jabari Brisport and Community Advocate and Floor Captain, Atlantic Plaza Towers Tenants Association
The individual panelists described the ways in which AI systems and other technologies are increasingly being used to limit access to equal opportunities in education, housing, and employment. Education-related concerning uses included the increased use of remote proctoring systems, student location and facial recognition tracking, teacher evaluation systems, robot teachers, and more. Housing-related concerning uses included automated tenant background screening and facial recognition-based controls to enter or exit housing complexes. Employment-related concerning uses included discrimination in automated hiring screening and workplace surveillance. Various panelists raised the limitations of existing privacy law as a key concern, pointing out that students should be able to reinvent themselves and require privacy of their student records and education-related data in order to do so. The overarching concerns of surveillance in these domains included concerns about the chilling effects of surveillance on student expression, inappropriate control of tenants via surveillance, and the way that surveillance of workers blurs the boundary between work and life and exerts extreme and potentially damaging control over workers' lives. Additionally, some panelists pointed out ways that data from one situation was misapplied in another in a way that limited people's opportunities, for example data from criminal justice settings or previous evictions being used to block further access to housing. Throughout, various panelists emphasized that these technologies are being used to shift the burden of oversight and efficiency from employers to workers, schools to students, and landlords to tenants, in ways that diminish and encroach on equality of opportunity; assessment of these technologies should include whether they are genuinely helpful in solving an identified problem. In discussion of technical and governance interventions that are needed to protect against the harms of these technologies, panelists individually described the importance of: receiving community input into the design and use of technologies, public reporting on crucial elements of these systems, better notice and consent procedures that ensure privacy based on context and use case, the ability to opt out of using these systems and receive a fallback to a human process, providing explanations of decisions and how these systems work, the need for governance including training in using these systems, ensuring the technological use cases are genuinely related to the goal task and are locally validated to work, and the need for institution and protection of third-party audits to ensure systems continue to be accountable and valid.

Panel 4: Artificial Intelligence and Democratic Values
This event examined challenges and opportunities in the design of technology that can help support a democratic vision for AI. It included discussion of the technical aspects of designing non-discriminatory technology, explainable AI, human-computer interaction with an emphasis on community participation, and privacy-aware design.

Welcome
• Sorelle Friedler, Assistant Director for Data and Democracy, White House Office of Science and Technology Policy
• J. Bob Alotta, Vice President for Global Programs, Mozilla Foundation
• Navrina Singh, Board Member, Mozilla Foundation
Moderator
Kathy Pham Evans, Deputy Chief Technology Officer for Product and Engineering, U.S. Federal Trade Commission.
Panelists
• Liz O’Sullivan, CEO, Parity AI
• Timnit Gebru, Independent Scholar
• Jennifer Wortman Vaughan, Senior Principal Researcher, Microsoft Research, New York City
• Pamela Wisniewski, Associate Professor of Computer Science, University of Central Florida; Director, Socio-technical Interaction Research (STIR) Lab
• Seny Kamara, Associate Professor of Computer Science, Brown University
Each panelist individually emphasized the risks of using AI in high-stakes settings, including the potential for biased data and discriminatory outcomes, opaque decision-making processes, and lack of public trust and understanding of the algorithmic systems. The interventions and key needs various panelists put forward as necessary to the future design of critical AI systems included ongoing transparency, value sensitive and participatory design, explanations designed for relevant stakeholders, and public consultation. Various panelists emphasized the importance of placing trust in people, not technologies, and in engaging with impacted communities to understand the potential harms of technologies and build protection by design into future systems.


Panel 5: Social Welfare and Development

This event explored current and emerging uses of technology to implement or improve social welfare systems, social development programs, and other systems that can impact life chances.

Welcome:
• Suresh Venkatasubramanian, Assistant Director for Science and Justice, White House Office of Science and Technology Policy
• Anne-Marie Slaughter, CEO, New America

Moderator: Michele Evermore, Deputy Director for Policy, Office of Unemployment Insurance Modernization, Office of the Secretary, Department of Labor

Panelists:
• Blake Hall, CEO and Founder, ID.me
• Karrie Karahalios, Professor of Computer Science, University of Illinois, Urbana-Champaign
• Christiaan van Veen, Director of Digital Welfare State and Human Rights Project, NYU School of Law's Center for Human Rights and Global Justice
• Julia Simon-Mishel, Supervising Attorney, Philadelphia Legal Assistance
• Dr. Zachary Mahafza, Research & Data Analyst, Southern Poverty Law Center
• J. Khadijah Abdurahman, Tech Impact Network Research Fellow, AI Now Institute, UCLA C2I1, and UWA Law School

Panelists separately described the increasing scope of technology use in providing for social welfare, including in fraud detection, digital ID systems, and other methods focused on improving efficiency and reducing cost. However, various panelists individually cautioned that these systems may reduce burden for government agencies by increasing the burden and agency of people using and interacting with these technologies. Additionally, these systems can produce feedback loops and compounded harm, collecting data from communities and using it to reinforce inequality. Various panelists suggested that these harms could be mitigated by ensuring community input at the beginning of the design process, providing ways to opt out of these systems and use associated human-driven mechanisms instead, ensuring timeliness of benefit payments, and providing clear notice about the use of these systems and clear explanations of how and what the technologies are doing. Some panelists suggested that technology should be used to help people receive benefits, e.g., by pushing benefits to those in need and ensuring that automated decision-making systems are only used to provide a positive outcome; technology shouldn't be used to take supports away from people who need them.

Panel 6: The Healthcare System

This event explored current and emerging uses of technology in the healthcare system and consumer products related to health.

Welcome:
• Alondra Nelson, Deputy Director for Science and Society, White House Office of Science and Technology Policy
• Patrick Gaspard, President and CEO, Center for American Progress

Moderator: Micky Tripathi, National Coordinator for Health Information Technology, U.S. Department of Health and Human Services

Panelists:
• Mark Schneider, Health Innovation Advisor, ChristianaCare
• Ziad Obermeyer, Blue Cross of California Distinguished Associate Professor of Policy and Management, University of California, Berkeley School of Public Health
• Dorothy Roberts, George A. Weiss University Professor of Law and Sociology and the Raymond Pace and Sadie Tanner Mossell Alexander Professor of Civil Rights, University of Pennsylvania
• David Jones, A. Bernard Ackerman Professor of the Culture of Medicine, Harvard University
• Jamila Michener, Associate Professor of Government, Cornell University; Co-Director, Cornell Center for Health Equity

Panelists discussed the impact of new technologies on health disparities; healthcare access, delivery, and outcomes; and areas ripe for research and policymaking. Panelists discussed the increasing importance of technology as both a vehicle to deliver healthcare and a tool to enhance the quality of care. On the issue of delivery, various panelists pointed to a number of concerns, including access to and the expense of broadband service, the privacy concerns associated with telehealth systems, the expense associated with health monitoring devices, and how these factors can exacerbate equity issues. On the issue of technology-enhanced care, some panelists spoke extensively about the way in which racial biases and the use of race in medicine perpetuate harms and embed prior discrimination, and about the importance of ensuring that the technologies used in medical care are accountable to the relevant stakeholders. Various panelists emphasized the importance of having the voices of those subjected to these technologies be heard.

Summaries of Additional Engagements:

• OSTP created an email address ([email protected]) to solicit comments from the public on the use of artificial intelligence and other data-driven technologies in their lives.
• OSTP issued a Request for Information (RFI) on the use and governance of biometric technologies.[112] The purpose of this RFI was to understand the extent and variety of biometric technologies in past, current, or planned use; the domains in which these technologies are being used; the entities making use of them; current principles, practices, or policies governing their use; and the stakeholders that are, or may be, impacted by their use or regulation. The 130 responses to this RFI are available in full online[113] and were submitted by the organizations and individuals listed below:

[112] White House Office of Science and Technology Policy. Notice of Request for Information (RFI) on Public and Private Sector Uses of Biometric Technologies. Issued Oct. 8, 2021. https://www.federalregister.gov/documents/2021/10/08/2021-21975/notice-of-request-for-information-rfi-on-public-and-private-sector-uses-of-biometric-technologies.
[113] National Artificial Intelligence Initiative Office. Public Input on Public and Private Sector Uses of Biometric Technologies. Accessed Apr. 19, 2022. https://www.ai.gov/86-fr-56300-responses/.

Accenture
Access Now
ACT | The App Association
AHIP
AIethicist.org
Airlines for America
Alliance for Automotive Innovation
Amelia Winger-Bearskin
American Civil Liberties Union
American Civil Liberties Union of Massachusetts
American Medical Association
ARTICLE19
Attorneys General of the District of Columbia, Illinois, Maryland, Michigan, Minnesota, New York, North Carolina, Oregon, Vermont, and Washington
Avanade
Aware
Barbara Evans
Better Identity Coalition
Bipartisan Policy Center
Brandon L. Garrett and Cynthia Rudin
Brian Krupp
Brooklyn Defender Services
BSA | The Software Alliance
Carnegie Mellon University
Center for Democracy & Technology
Center for New Democratic Processes
Center for Research and Education on Accessible Technology and Experiences at University of Washington, Devva Kasnitz, L Jean Camp, Jonathan Lazar, Harry Hochheiser
Center on Privacy & Technology at Georgetown Law
Cisco Systems
City of Portland Smart City PDX Program
CLEAR
Clearview AI
Cognoa
Color of Change
Common Sense Media
Computing Community Consortium at Computing Research Association
Connected Health Initiative
Consumer Technology Association
Courtney Radsch
Coworker
Cyber Farm Labs
Data & Society Research Institute
Data for Black Lives
Data to Actionable Knowledge Lab at Harvard University
Deloitte
Dev Technology Group
Digital Therapeutics Alliance
Digital Welfare State & Human Rights Project and Center for Human Rights and Global Justice at New York University School of Law, and Temple University Institute for Law, Innovation & Technology
Dignari
Douglas Goddard
Edgar Dworsky
Electronic Frontier Foundation
Electronic Privacy Information Center, Center for Digital Democracy, and Consumer Federation of America
FaceTec
Fight for the Future
Ganesh Mani
Georgia Tech Research Institute
Google
Health Information Technology Research and Development Interagency Working Group
HireVue
HR Policy Association
ID.me
Identity and Data Sciences Laboratory at Science Applications International Corporation
Information Technology and Innovation Foundation
Information Technology Industry Council
Innocence Project
Institute for Human-Centered Artificial Intelligence at Stanford University
Integrated Justice Information Systems Institute
International Association of Chiefs of Police
International Biometrics + Identity Association
International Business Machines Corporation
International Committee of the Red Cross
Inventionphysics
iProov
Jacob Boudreau
Jennifer K. Wagner, Dan Berger, Margaret Hu, and Sara Katsanis
Jonathan Barry-Blocker
Joseph Turow
Joy Buolamwini
Joy Mack
Karen Bureau
Lamont Gholston
Lawyers' Committee for Civil Rights Under Law
Lisa Feldman Barrett
Madeline Owens
Marsha Tudor
Microsoft Corporation
MITRE Corporation
National Association for the Advancement of Colored People Legal Defense and Educational Fund
National Association of Criminal Defense Lawyers
National Center for Missing & Exploited Children
National Fair Housing Alliance
National Immigration Law Center
NEC Corporation of America
New America's Open Technology Institute
New York Civil Liberties Union
No Name Provided
Notre Dame Technology Ethics Center
Office of the Ohio Public Defender
Onfido
Oosto
Orissa Rose
Palantir
Pangiam
Parity Technologies
Patrick A. Stewart, Jeffrey K. Mullins, and Thomas J. Greitens
Pel Abbott
Philadelphia Unemployment Project
Project On Government Oversight
Recording Industry Association of America
Robert Wilkens
Ron Hedges
Science, Technology, and Public Policy Program at University of Michigan Ann Arbor
Security Industry Association
Sheila Dean
Software & Information Industry Association
Stephanie Dinkins and the Future Histories Studio at Stony Brook University
TechNet
The Alliance for Media Arts and Culture, MIT Open Documentary Lab and Co-Creation Studio, and Immerse
The International Brotherhood of Teamsters
The Leadership Conference on Civil and Human Rights
Thorn
U.S. Chamber of Commerce's Technology Engagement Center
Uber Technologies
University of Pittsburgh Undergraduate Student Collaborative
Upturn
US Technology Policy Committee of the Association of Computing Machinery
Virginia Puccio
Visar Berisha and Julie Liss
XR Association
XR Safety Initiative

• As an additional effort to reach out to stakeholders regarding the RFI, OSTP conducted two listening sessions for members of the public. The listening sessions together drew upwards of 300 participants. The Science and Technology Policy Institute produced a synopsis of both the RFI submissions and the feedback at the listening sessions.[114]
• OSTP conducted meetings with a variety of stakeholders in the private sector and civil society. Some of these meetings were specifically focused on providing ideas related to the development of the Blueprint for an AI Bill of Rights, while others provided useful general context on the positive use cases, potential harms, and/or oversight possibilities for these technologies. Participants in these conversations from the private sector and civil society included:

[114] Thomas D. Olszewski, Lisa M. Van Pay, Javier F. Ortiz, Sarah E. Swiersz, and Laurie A. Dacus. Synopsis of Responses to OSTP's Request for Information on the Use and Governance of Biometric Technologies in the Public and Private Sectors. Science and Technology Policy Institute. Mar. 2022. https://www.ida.org/-/media/feature/publications/s/sy/synopsis-of-responses-to-request-for-information-on-the-use-and-governance-of-biometric-technologies/ida-document-d-33070.ashx.


Adobe
American Civil Liberties Union (ACLU)
The Aspen Commission on Information Disorder
The Awood Center
The Australian Human Rights Commission
Biometrics Institute
The Brookings Institute
BSA | The Software Alliance
Cantellus Group
Center for American Progress
Center for Democracy and Technology
Center on Privacy and Technology at Georgetown Law
Christiana Care
Color of Change
Coworker
Data Robot
Data Trust Alliance
Data and Society Research Institute
Deepmind
EdSAFE AI Alliance
Electronic Privacy Information Center (EPIC)
Encode Justice
Equal AI
Google
Hitachi's AI Policy Committee
The Innocence Project
Institute of Electrical and Electronics Engineers (IEEE)
Intuit
Lawyers Committee for Civil Rights Under Law
Legal Aid Society
The Leadership Conference on Civil and Human Rights
Meta
Microsoft
The MIT AI Policy Forum
Movement Alliance Project
The National Association of Criminal Defense Lawyers
O'Neil Risk Consulting & Algorithmic Auditing
The Partnership on AI
Pinterest
The Plaintext Group
pymetrics
SAP
The Security Industry Association
Software and Information Industry Association (SIIA)
Special Competitive Studies Project
Thorn
United for Respect
University of California at Berkeley Citris Policy Lab
University of California at Berkeley Labor Center
Unfinished/Project Liberty
Upturn
US Chamber of Commerce
US Chamber of Commerce Technology Engagement Center A.I. Working Group
Vibrent Health
Warehouse Worker Resource Center
Waymap

Index

A
access, 19, 20, 21, 24, 29, 36, 49, 50, 61, 63, 68, 83, 100, 108, 110, 114, 115, 121, 124, 127, 130, 157, 158, 173, 174, 177, 179, 180, 181, 183, 184, 185, 187, 192, 193, 194, 195, 204, 209, 210, 211, 212, 213, 216, 217, 218, 219, 220, 224, 225, 226, 230, 234, 236, 237, 238, 239, 240, 241, 242, 244, 245, 246, 247, 248, 252, 256, 257
AI Index, 12, 13, 14, 15, 16, 33, 34, 45
AI standards, 28, 32, 36, 54, 55, 56, 57, 58, 85, 87, 95, 97, 146
AI technologies, vii, 1, 2, 3, 4, 5, 6, 10, 11, 16, 18, 22, 23, 24, 25, 26, 28, 29, 32, 33, 37, 40, 41, 42, 43, 48, 51, 54, 60, 78, 79, 80, 83, 87, 88, 93, 97, 111, 134, 198, 250
algorithmic, 2, 15, 29, 41, 59, 62, 63, 67, 70, 117, 126, 129, 130, 132, 141, 147, 175, 176, 183, 199, 200, 201, 205, 206, 208, 209, 210, 211, 218, 230, 231, 235, 243, 247, 248, 254, 262
alternative regulatory pathways, 115
American AI Initiative (E.O. 13859), 1, 26, 28, 35, 54
automated systems, v, 173, 174, 175, 176, 177, 178, 179, 180, 182, 183, 186, 188, 189, 190, 191, 193, 195, 199, 200, 201, 205, 206, 209, 210, 212, 213, 215, 216, 217, 219, 220, 221, 224, 225, 228, 229, 230, 231, 232, 235, 236, 238, 240, 243, 244, 246, 249, 251

B
bias, vii, 2, 3, 8, 35, 36, 37, 39, 40, 55, 58, 59, 60, 61, 62, 63, 66, 68, 69, 76, 78, 82, 83, 85, 86, 89, 90, 92, 102, 103, 106, 107, 108, 129, 132, 136, 138, 173, 190, 198, 199, 201, 202, 203, 204, 205, 206, 209, 211, 212, 213, 237, 241, 242, 243, 251

C
China, 12, 14, 37, 38, 50, 51, 56, 57, 109, 111, 145, 151
CHIPS and Science Act of 2022, 74
computational costs, 71
Congress, 2, 3, 6, 17, 21, 25, 30, 33, 34, 35, 38, 39, 40, 41, 49, 50, 51, 54, 56, 57, 58, 61, 65, 72, 73, 74, 77, 78, 79, 81, 82, 95, 97, 110, 114, 115, 117, 120, 121, 122, 141, 147, 148, 161, 162, 197
cybersecurity, 15, 51, 68, 71, 73, 78, 80, 85, 86, 88, 90, 91, 92, 121, 142, 156, 183, 199

D
data access, 17, 18, 209, 219, 220
deep learning (DL), 7, 8, 17, 23, 24, 25, 63, 71
deepfakes, 2, 6, 7, 38, 39
Defense Advanced Research Projects Agency (DARPA), 10, 11, 15, 18, 19, 32, 33, 169, 236
Department of Commerce, 72, 73, 84, 86, 96, 118, 119
Department of Defense (DOD), 38, 198
Department of Energy (DOE), 15, 23, 33, 37, 38, 54, 72, 198

E
email spam filtering, 5
ethics, 3, 4, 36, 40, 46, 55, 58, 60, 63, 74, 75, 77, 78, 79, 80, 83, 107, 122, 155, 156, 157, 159, 160, 171, 182, 187, 188, 193, 194, 197, 198, 203, 225, 260
executive orders, 1, 18, 20, 25, 26, 174, 182, 196, 205, 226, 241
explainable AI, 11, 17, 18, 19, 64, 73, 199, 236, 253

F
facial recognition, 2, 38, 39, 56, 61, 69, 129, 130, 151, 201, 215, 227, 246, 253
fairness, 8, 17, 29, 40, 55, 58, 59, 60, 61, 64, 68, 79, 80, 89, 90, 111, 113, 115, 126, 128, 129, 130, 138, 139, 141, 147, 174, 198, 234, 236
fallback, 179, 192, 208, 236, 237, 238, 240, 241, 242, 244, 246, 253
federal AI investments, 2
financial lending decisions, 5

G
generative adversarial networks (GANs), 2, 6, 7, 8, 37
government action, 72

H
hardware, 4, 8, 17, 23, 93, 192, 211
high performance computing (HPC), 23

I
international, 3, 4, 26, 27, 29, 30, 32, 38, 51, 52, 53, 54, 55, 56, 57, 58, 74, 89, 96, 98, 109, 112, 136, 140, 142, 145, 146, 147, 152, 175, 211, 259, 261

L
legislation, 2, 4, 6, 25, 33, 35, 37, 41, 72, 79, 81, 84, 114, 117, 119, 120, 122, 143, 181, 199, 234

M
machine learning (ML), vii, 1, 4, 6, 7, 8, 9, 10, 13, 15, 23, 24, 26, 27, 30, 31, 32, 34, 35, 38, 40, 43, 44, 46, 48, 59, 61, 67, 68, 70, 90, 91, 93, 96, 98, 100, 102, 103, 106, 107, 126, 129, 130, 131, 140, 143, 146, 148, 150, 152, 156, 159, 183, 192, 195, 199, 216, 220, 234, 236

N
National AI Advisory Committee (NAIAC), 35, 36, 72, 84, 85, 87, 96, 113, 123, 133, 134, 152
National AI Initiative Office (NAIIO), 25, 35, 84, 85, 96, 120, 134
National Artificial Intelligence Initiative Act (NAIIA), 2, 21, 25, 41, 72, 73, 74, 79, 81, 82, 120
National Institute of Standards and Technology (NIST), 4, 18, 28, 32, 36, 37, 54, 55, 56, 57, 58, 63, 66, 67, 69, 70, 71, 73, 74, 75, 77, 78, 79, 81, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 102, 106, 107, 110, 112, 115, 116, 117, 118, 119, 121, 135, 136, 138, 139, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 154, 155, 160, 162, 197, 212, 213, 227, 236
National Science and Technology Council (NSTC), 10, 20, 26, 27, 28, 29, 35, 54, 57, 60, 96
National Science Foundation (NSF), 15, 21, 32, 36, 37, 52, 53, 54, 72, 74, 83, 92, 96, 102, 107, 110, 120, 121, 144, 153, 199, 236
natural language processing (NLP), vii, 1, 4, 5, 24, 31, 46, 61, 68, 71, 93
Networking and Information Technology Research and Development (NITRD), 15, 27, 28, 96

O
Office of Science and Technology Policy (OSTP), 3, 9, 21, 25, 26, 28, 30, 35, 72, 73, 96, 110, 121, 249, 257, 261
opportunities, vii, 2, 18, 32, 40, 63, 76, 85, 88, 107, 111, 113, 114, 115, 116, 134, 143, 150, 173, 174, 178, 179, 180, 181, 183, 184, 185, 187, 192, 193, 195, 197, 198, 199, 209, 213, 217, 221, 224, 225, 226, 229, 230, 234, 237, 238, 240, 241, 245, 246, 247, 249, 252, 253

P
privacy, 20, 24, 38, 39, 55, 56, 61, 68, 71, 73, 76, 78, 79, 80, 81, 83, 85, 86, 88, 89, 92, 96, 97, 107, 110, 111, 114, 115, 117, 122, 123, 125, 137, 138, 143, 147, 148, 149, 156, 158, 173, 174, 175, 177, 180, 181, 182, 184, 185, 187, 189, 191, 194, 198, 199, 207, 209, 212, 213, 214, 216, 217, 218, 220, 221, 222, 224, 225, 226, 227, 228, 231, 234, 246, 247, 249, 251, 253, 254, 257, 258, 259, 262
private funding, 14, 54
private sector, vii, 2, 21, 24, 28, 31, 40, 46, 51, 54, 55, 63, 73, 75, 87, 95, 97, 109, 111, 153, 158, 175, 180, 181, 184, 191, 194, 196, 234, 257, 261
public funding, 14, 15

R
reinforcement learning (RL), 7, 8, 9, 67
research and development (R & D), vii, 2, 3, 10, 11, 12, 15, 20, 23, 25, 26, 27, 28, 29, 32, 33, 35, 36, 37, 39, 41, 45, 51, 52, 53, 54, 58, 60, 64, 88, 96, 206
rights, v, 39, 56, 61, 72, 85, 87, 97, 111, 142, 169, 173, 174, 175, 176, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 191, 192, 193, 195, 196, 199, 201, 204, 209, 210, 213, 215, 217, 218, 221, 222, 225, 226, 227, 230, 234, 237, 238, 240, 241, 245, 246, 248, 249, 250, 255, 256, 259, 260, 261, 262
risk(s), v, 24, 29, 39, 59, 61, 65, 66, 68, 71, 72, 73, 75, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 88, 91, 92, 94, 103, 110, 111, 112, 113, 118, 119, 121, 133, 137, 138, 143, 176, 187, 191, 192, 194, 197, 214, 216, 217, 227, 235, 249, 254
robotics, vii, 1, 4, 6, 10, 13, 25, 44, 45, 93, 134

S
safety, vii, 2, 3, 17, 20, 28, 36, 40, 55, 68, 70, 71, 73, 76, 77, 79, 80, 111, 115, 158, 176, 180, 181, 184, 188, 189, 190, 191, 192, 194, 196, 197, 198, 199, 206, 228, 229, 234, 248, 250, 251, 261
search engine results, 5
stakeholder driven engagement, 116
standards development, 54, 55, 57, 73, 83, 95
supervised learning, 7, 67

T
training, 3, 7, 8, 11, 13, 17, 18, 20, 21, 22, 26, 28, 35, 39, 47, 48, 49, 51, 59, 62, 64, 67, 69, 70, 71, 76, 78, 82, 105, 121, 138, 179, 211, 216, 220, 237, 240, 241, 242, 244, 247, 253
transparency, 17, 29, 40, 55, 58, 59, 63, 64, 66, 79, 80, 92, 101, 103, 105, 106, 107, 111, 115, 123, 124, 126, 127, 128, 131, 134, 143, 148, 151, 157, 158, 187, 195, 198, 199, 201, 214, 216, 224, 227, 230, 234, 250, 251, 254
Trump, President Donald, 1, 25, 26, 35, 54

U
underserved communities, 174, 182, 184, 185, 202, 205, 206, 241
United States (US), 12, 14, 23, 27, 30, 31, 37, 41, 43, 45, 48, 50, 51, 52, 53, 55, 56, 57, 61, 66, 71, 77, 78, 79, 83, 85, 86, 96, 109, 111, 112, 113, 115, 116, 119, 120, 122, 131, 140, 143, 147, 148, 151, 152, 161, 175, 182, 184, 186, 189, 196, 211, 215, 219, 234, 248, 261, 263

V
voice assistance, 5
voluntary consensus standards, 29, 115

W
White House Office of Science and Technology Policy (OSTP), v, 28, 35, 72, 92, 96, 173, 174, 175, 248, 249, 251, 252, 254, 255, 256, 257, 261
workforce impacts, 2, 50