Opinions Throughout History: Robotics & Artificial Intelligence (Print Purchase Includes Free Online Access). ISBN 9781642654813 / 1642654817

This volume explores automatons, computing, mechanical engineering, automation in the Industrial Revolution, digital technology…


English. Pages: 400 [520]




https://online.salempress.com

The Opinions Throughout History Series: National Security vs. Civil & Privacy Rights • Immigration • Gender: Roles & Rights • Drug Use & Abuse • The Environment • Social Media Issues • The Death Penalty • Voters' Rights • Presidential Authority • Guns in America • Globalization • Robotics & Artificial Intelligence

View this book online: Grey House titles are now available on the Salem Press online platform.

Grey House Publishing • 4919 Route 22, Amenia, NY 12501 • 518-789-8700 • 800-562-2139 • Fax 845-373-6360 • www.greyhouse.com • email: [email protected]


opinionsthroughouthhistory_robotics_titles.indd 1

8/26/20 2:54 PM


Opinions Throughout History
Robotics & Artificial Intelligence
by Micah L. Issitt

GREY HOUSE PUBLISHING


PRESIDENT: Richard Gottlieb
PUBLISHER: Leslie Mackenzie
EDITORIAL DIRECTOR: Laura Mars
PROJECT EDITOR: Annette Calzone
MARKETING DIRECTOR: Jessica Moody
PRODUCTION MANAGER: Kristen Hayes

Grey House Publishing, Inc.
4919 Route 22, Amenia, NY 12501
518.789.8700 • Fax 518.789.0545
www.greyhouse.com • e-mail: [email protected]

While every effort has been made to ensure the reliability of the information presented in this publication, Grey House Publishing neither guarantees the accuracy of the data contained herein nor assumes any responsibility for errors, omissions or discrepancies. Grey House accepts no payment for listing; inclusion in the publication of any organization, agency, institution, publication, service or individual does not imply endorsement of the editors or publisher. Errors brought to the attention of the publisher and verified to the satisfaction of the publisher will be corrected in future editions.

Except by express prior written permission of the Copyright Proprietor no part of this work may be copied by any means of publication or communication now known or developed hereafter including, but not limited to, use in any directory or compilation or other print publication, in any information storage and retrieval system, in any other electronic device, or in any visual or audiovisual device or product.

This publication is an original and creative work, copyrighted by Grey House Publishing, Inc. and is fully protected by all applicable copyright laws, as well as by laws covering misappropriation, trade secrets and unfair competition. Grey House has added value to the underlying factual material through one or more of the following efforts: unique and original selection; expression; arrangement; coordination; and classification. Grey House Publishing, Inc. will defend its rights in this publication.

Copyright © 2020 by Grey House Publishing, Inc. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, except for the inclusion of brief quotations in a review, without prior permission in writing from the publishers.

Publisher's Cataloging-In-Publication Data (Prepared by The Donohue Group, Inc.)
Names: Issitt, Micah L., author.
Title: Robotics & artificial intelligence / Micah L. Issitt.
Other Titles: Robotics and artificial intelligence | Opinions throughout history ; 12.
Description: Amenia, NY : Grey House Publishing, [2020] | Includes bibliographical references and index.
Identifiers: ISBN 9781642654813
Subjects: LCSH: Robotics--History--Sources. | Artificial intelligence--History--Sources. | Automation--History--Sources. | Robotics--Public opinion--Sources. | Artificial intelligence--Public opinion--Sources. | Automation--Public opinion--Sources.
Classification: LCC TJ211 .I87 2020 | DDC 629.892--dc23


Table of Contents

Publisher's Note .......... ix
Introduction .......... xi
Historical Timeline .......... xvii

1. Endowed with Motion: Robots in Myth and Legend (800 BCE–17th century) .......... 1
2. Automating Work: Mechanization and Human Capability (1590–1900) .......... 15
3. Fear Your Creation: The Robot Revolution (1600s–1920s) .......... 31
4. Automated Ethics: Robotics, Ethics, and Morality (1920s–1950s) .......... 47
5. The Computer Revolution: A History of Early Computers (1940s–1950s) .......... 59
6. Approaching Intelligence: First Steps toward Modern Artificial Intelligence (1940s–1960s) .......... 73
7. Factory Work: Development of Industrial Manufacturing Machines (1913–1980s) .......... 87
8. The Illusion of Language: Language Replicating Machines and Programs (1950s–1970s) .......... 99
9. The Perfect Body: Robot Design in Pop Culture and Reality (1950s–2010s) .......... 119
10. Seasons of AI: Development and Stagnation in AI Research (1950s–Present) .......... 131
11. Hands Free: The Search for Autonomous Transportation (1930s–Present) .......... 149
12. Robotic Justice: The Use of Robots in Security and Policing (1970s–2010s) .......... 165
13. Where No One Can Go: Robots in Dangerous Places and Dangerous Jobs (1980s–Present) .......... 179
14. Mechanical Medicine: Robotics in Medicine and Health (1980s–Present) .......... 199
15. Playing Robot: Robots as a Toy and Hobby (1993–Present) .......... 217
16. War Games: The Use of Robotics and AI in the Military (1994–Present) .......... 229
17. Space Oddities: Robotics in Space (1966–Present) .......... 243
18. The Ultimate Strategy Machine: Robots and the Great Human Strategy Games (1996–Present) .......... 265
19. Homebots: Robots in the Home (1998–Present) .......... 281
20. Replicating Humanity: Robotic Approximations of Human Form and Function (2000–Present) .......... 297
21. The AI Assistant: Virtual and Digital Assistants (2011–Present) .......... 307
22. Robotic Citizenship: Public Attitudes toward Robots and Robotics (2014–Present) .......... 319
23. Biorobotics: The Use of Robotics to Repair and Enhance the Human Body (2017–Present) .......... 331
24. Robotic Living: Smart Homes and Robotic Household Aides (2017–Present) .......... 345
25. The Future of Work: How Robotics Could Impact the Future of the Working World (2019) .......... 369
26. Quantum Computing: The Potential Future of Computing Technology (2019) .......... 381
27. The Singularity: The Transcendence of Technology (2020) .......... 395
28. Conclusion: Robotics and AI: Oppressors or a New Utopia? .......... 409

Appendixes
Notes .......... 411
Primary and Secondary Sources .......... 421
Glossary .......... 425
Historical Snapshots .......... 439
Bibliography .......... 461
About the Author .......... 479
Index .......... 481


Publisher's Note

Opinions Throughout History: Robotics & Artificial Intelligence is the twelfth volume in the Opinions Throughout History series from Grey House Publishing. Each in-depth volume traces public opinion on a current, controversial topic from our nation's early history to the present using a range of primary and secondary source documents, including newspaper and magazine articles, speeches, court decisions, and legislation. These documents are accompanied by expert commentary and analysis that guide the reader through the process of understanding their historical significance and place them in the context of how they contributed to, or reflect, changing attitudes on important issues.

Content

Following this Publisher's Note is a detailed Introduction that defines the concepts of robotics and artificial intelligence and places their origins with the first tools made by humans and the mythologies of ancient civilizations. A comprehensive Timeline is next, highlighting significant robotics and AI events from the first Greek myths about automatons around 800 BCE to a 2020 announcement by Amazon about adding new AI capabilities to Amazon Echo.

The 28 chapters that follow are a fascinating account of humanity's complicated relationship with these increasingly present technologies. Envisioned by early civilizations as creations that could overcome human limitations and defend societies, modern robotics and AI have in fact achieved these goals in some measure. Advances in robotic surgery and prosthetics have augmented the human body. Robots are now used to do dangerous jobs like surveying volcanoes, defusing bombs, cleaning up nuclear waste, and exploring space. As researchers push the limits of robotics and AI, they gather insight into the human mind and body. The attempt to create true artificial intelligence has resulted in discoveries about the human brain and how we process language. And automated military technologies like drones make it less necessary to put soldiers in life-threatening situations.

On the flip side, robotics in industrial settings has resulted in the loss of millions of jobs worldwide and, as artificial intelligence progresses, it will make inroads into sectors beyond manufacturing. Drone warfare has become increasingly controversial, not least because of civilian casualties. When ELIZA—the first "chatterbot," or "chatbot"—was developed in 1966 to simulate a psychotherapist, many patients believed ELIZA to be a real therapist and shared intimate personal details. This tendency to ascribe human characteristics to robots and artificial intelligences, now known as the ELIZA effect, informs both positive and negative reactions to these emerging technologies. As of 2020, despite significant gains, artificial intelligence programs can only create the illusion of sentience, mimicking human conversation and facial expressions. But "technological singularity"—a possible future in which these technologies have advanced beyond human control, changing civilization in unforeseeable ways—evokes both excitement and dread.
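The conversational trick behind ELIZA was not understanding but pattern matching: a script of keyword rules, canned response templates, and pronoun "reflection" applied to whatever the user typed. A minimal sketch of that approach in Python follows; the rules and responses here are invented for illustration and are not Weizenbaum's original DOCTOR script.

```python
import random
import re

# ELIZA-style rules: a regex pattern paired with response templates.
# "{0}" is filled with the captured text after pronoun reflection.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"my (.*)", ["Tell me more about your {0}."]),
]

# Swap first- and second-person words so echoes sound natural
# ("my job" becomes "your job").
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

def reflect(fragment: str) -> str:
    """Reflect pronouns in a captured fragment of user input."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement: str) -> str:
    """Return a therapist-like reply by matching the first applicable rule."""
    text = statement.lower().strip(".!?")
    for pattern, responses in RULES:
        match = re.match(pattern, text)
        if match:
            reply = random.choice(responses)
            return reply.format(*(reflect(group) for group in match.groups()))
    # Content-free fallback, the hallmark of ELIZA's "illusion of language."
    return "Please go on."

print(respond("I need a vacation"))
print(respond("I am sad"))
```

Even this toy version hints at why the ELIZA effect takes hold: the reflected fragments make replies feel responsive, though the program manipulates text it cannot understand.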

Arrangement

Each chapter starts with a brief Introduction, a List of Topics Covered, and the source document or documents discussed in the chapter. The text of the chapter relates these documents to their historical context and details contemporary public opinion. Most documents are reprinted in their entirety and clearly distinguished by a shaded title bar. In addition to helpful subtitles, photos and other images enhance the text, and sidebars provide an often lighter perspective on the time period being discussed. Pull quotes and other visual elements increase accessibility. Each chapter ends with a brief Conclusion, thoughtful Discussion Questions, and a list of Works Used. Footnotes referenced in the text begin on page 411.

The final chapter, 28, is a detailed conclusion that summarizes the current state of robotics and AI. It is followed by the Historical Snapshots section—a broad, bulleted overview of political, social, and cultural developments from 1880 to 2020 that provides context for the political and social climate of the work's timeframe. This volume also includes a complete list of the Primary and Secondary Sources reprinted in the text, a Glossary of frequently used terms related to robotics and AI, a comprehensive Bibliography, and a detailed Index.

The Opinions Throughout History Series

Robotics & Artificial Intelligence is the fourth and final 2020 volume, following Presidential Authority, Guns in America, and Globalization. The first 2021 Opinions Throughout History volume is Disease & Epidemics, followed by Law Enforcement, Poverty, and Healthcare.


Introduction

A Brave New World

Robotics is a field of engineering concerned with creating machines that operate automatically to replace some form of human effort. Artificial intelligence (AI) is a related field that seeks to create artificially intelligent minds or beings. Both fields exist at the forefront of science and research, representing one of the most futuristic, yet also one of the most ancient, quests in human history. In the 2020s, robotics and AI touch nearly every facet of culture. Robotics drives development in many industries, and progressive organizations have likewise invested heavily in robotics and AI in hopes of addressing long-standing health and welfare issues. To some, these research fields provide hope for humanity; to others, robots and artificial intelligences represent an existential threat that could gain dominion over the Earth.

The Evolution of Technology

Human technology began before the species Homo sapiens walked the Earth. The hominins who preceded modern humans learned to shape rocks and sticks into useful forms. Tool use is not unique to humans; other species—chimpanzees, gorillas, and orangutans, all primates closely related to humans—also make and use tools. Technology can be described as applying science or knowledge to solve problems through the use of some kind of tool.

What is the purpose of technology? At its most basic level, technology is used to do—or do more efficiently—things that cannot be done with the body alone. A shovel is a more efficient way to dig than scraping and removing dirt by hand. Riding a horse or floating down a river in a canoe is a more efficient way to travel long distances than running, walking, or swimming. Tools and technology are about expanding the capabilities of the body, and humanity's unique capacity for imagination means that humans are better able than most animals to envision new technology. While many animals experiment with objects in their environment, humanity is the only species able to imagine technology that does not yet exist. Even in the ancient world, humans could envision machines that would enable them to travel great distances, protect them from invading armies, or handle household chores.

In many ways, the road to robotics and AI began with religion. In their quest to understand the world, early humans imagined that a pantheon of magical creatures—some of whom were considered gods—created and shaped the world to make it habitable. Ancient myths and legends involved the animation of objects that are not normally alive: stone and wooden beasts and humanoid automatons featured prominently in the lore of many ancient societies. These myths represent imaginative fantasy, but they also reflect the innate human desire to solve problems that seem beyond mere human capabilities. Fantastic mechanical creatures were frequently depicted in stories as ways to overcome physical threats or challenges, often as products of the gods. The desire to achieve the power ascribed to mythological gods was one of the driving forces of the technological revolution. With the discovery of fire and, much later, electricity, humanity learned to conquer darkness and could "let there be light" in the same way that mythological gods had granted the power of the sun. Mythology and religion even evolved to account for technology: in many cultures around the world, fire was a gift given to humanity by divine forces and thus served not only as a powerful tool but also as a signifier of humanity's special nature in comparison to other creatures.

Pain and Progress

The conservative creed, in America and around the world, has long played an important role in attempting to preserve institutions and cultural characteristics against the relentless pace of progress. At its most compassionate, conservatism seeks to prevent people from getting lost as economic, social, and cultural realities change. It is arguable that the sense of cultural continuity that individuals in America and elsewhere enjoy is largely thanks to the work of conservatives in resisting progress, especially when it comes in destructive forms.

For example, the invention of the automobile virtually destroyed a once-thriving industry based on breeding, caring for, and outfitting horses. Thousands of Americans lost their livelihoods, including farriers (who make horseshoes), saddle makers, horse trainers, horse breeders, and even the individuals who cleaned up after horses on the streets. The shift from horse power to gasoline power was not just a country adopting a new form of transportation; it was a major cultural shift that forced Americans to deal with many new realities. The sound of car engines replaced the clomping of horse hooves. Cities saw more smog and soot from car engines. To be sure, this shift also created opportunities. Car manufacturing produced thousands of factory jobs, as well as opportunities for factory and construction managers, engineers, mechanics, designers, and the marketing and sales personnel responsible for promoting this new American product to the public.


Historical sources record the psychological toll experienced by many Americans during the transition from agrarian to industrial culture. Compared with working in fields and on farms, factory work felt dehumanizing. Tradesmen who had handled entire production cycles now worked on factory lines, repeating the same simple tasks over and over again. With so many out of work, companies offered meager salaries, and the quality of living plunged for many, even as the country was, on paper, wealthier than ever. Wealth was increasingly concentrated in the hands of factory owners and stakeholders. For many Americans, the transition from an agrarian to an industrial lifestyle eliminated a sense of controlling their own destiny and fostered a workforce without independent skills or expertise, leading to the exploitation of laborers by the wealthy class.

The shift from horses to automobiles is not unlike the shift to digital technology experienced in the 2000s, a transformation that continues with the shift to robotics and AI taking center stage as America enters the next great phase of industrialization in the 2020s. Like the automobile, the introduction of the robot—largely in the 1980s, though the technology was already decades in development by that time—resulted in the direct loss of jobs and put many livelihoods in jeopardy. The automobile industry has again become a transformative force, adopting robotics at a much higher rate than many other industries and again serving as a focal point for discussions of how this new wave of industrialization threatens American lives and families. Millions of workers in America and around the world lost their jobs to robotic replacements from the 1980s to the 2000s, and some estimates indicate that tens of millions more will lose their livelihoods between 2020 and 2030.
Interestingly, the modern wave of automation and robotics is primarily replacing the repetitive jobs that workers were forced to accept in the 1800s and early 1900s, when society shifted to industrialized production. When companies discover that a machine can perform certain tasks at a fraction of the cost, the workforce is left facing an uncertain future with little prospect of advancement. The problems experienced in America as a result of robotics and automation are more extreme than in some countries because America lacks a coherent movement to protect the working class. Only by sacrificing some measure of economic growth can laborers be protected from these shifts in the workplace.

The Unexplored Frontiers

Ultimately, robotics and AI will become a more familiar part of American culture and global society. As this work makes clear, there is no legitimate way to stop the transition to automation, although it might be slowed, or made more palatable, by government. The good news for society is that robotics and AI will eliminate many tasks that are dangerous and unpleasant for humans to perform, provide convenience in the workplace and the home, and may also help to fill roles that are perennially problematic, such as easing the conflict between childcare and work by making working from home easier, or providing continuous monitoring for the elderly. Robotics can provide advantages for most if not all of humanity, though the transition will likely be destructive for many.

It is quite likely, in fact, that robotics and AI will eventually result in a future in which work, as it is currently known, is no longer necessary. While some see this future as dystopian, it is not the only way the future may unfold. It is possible, with humane and sensitive management, that people freed from the need to work may instead work out of desire and a spirit of exploration. Perhaps more importantly, robotics and AI may prove invaluable tools for solving existential crises, such as how to adapt the world's economy to address climate change, or how to explore space and other hostile environments that may hold the key to humanity's future, or at least contain secrets that will remake life on Earth. In the field of medicine, from AI-powered therapies for insidious and deadly diseases to the ability to augment the human body to address dysfunction or improve healthy bodies, AI and robotics have the potential to literally remake humanity.

Robotics and AI also have the potential to create surrogates for human effort. As of 2020, many of these surrogates are mostly disruptive. Robotic arms on factory lines are not new, but new generations of AI-powered machines now threaten what were once considered uniquely human endeavors, such as computer coding and programming.
However, there are also ways in which robotics and AI can create surrogates capable of tasks that humans simply cannot do. Outer space exploration is one example. The deadly conditions of space make it impractical and difficult for humans to explore it directly, but robots and AI can survey space for humanity, even providing direct visual and sensory data so that humans might someday virtually visit alien worlds, experiencing their full sensory character without being subjected to dangerous conditions. Robotics and AI might likewise enable humanity to explore the deepest oceans, the earth's interior crust, or the microscopic worlds within our own bodies.

In using these technologies to replace human effort, scientists and engineers are fulfilling a vision first expressed by ancient humans who imagined fantastic and magical machines created by the gods. Through collective effort and imagination, humanity has essentially made real the divine powers that our ancient ancestors once envisioned. Robotics might also enable humans to continue to be explorers in an era when the only frontiers left are beyond their innate capabilities. Exploration and discovery are innate traits and an important part of the human psyche. Robotics and AI may enable humanity to continue to grow by learning and exploring, extending humanity's reach and vision rather than reducing its purpose.

Works Used

Hamilton, Jon. "Myth Busting: The Truth about Animals and Tools." NPR, 23 Dec. 2011. https://www.npr.org/2011/12/23/143833929/myth-busting-the-truth-about-animals-and-tools.

Nikiforuk, Andrew. "The Big Shift Last Time: From Horse Dung to Car Smog." The Tyee, 6 Mar. 2013. https://thetyee.ca/News/2013/03/06/Horse-Dung-Big-Shift/.

Nova, Annie, and John W. Schoen. "Automation Threatening 25% of Jobs in the US, Especially the 'Boring and Repetitive' Ones: Brookings Study." CNBC, 25 Jan. 2019. https://www.cnbc.com/2019/01/25/these-workers-face-the-highest-risk-of-losing-their-jobs-to-automation.html.

"What Does It Mean To Be Human?" Smithsonian Human Origins, 13 Aug. 2020. https://humanorigins.si.edu/evidence/behavior/stone-tools.



Historical Timeline 8000 BCE:

Greek myths about automatons spread across Europe.

8th century BCE:

Homer’s The Odyssey is written. Homer’s The Iliad is written.

4th Century BCE:

Callistratus of Aphidnae writes about automatons.

16th and 17th centuries CE:

Japanese artisans popularize the karakuri ningyo automatons.

1560s:

England passes first laws barring automation in certain industries.

1796:

The Mechanical Turk, a fake chess-playing machine, is invented.

1804:

Jacquard loom is introduced in France.

1811:

The “Luddite” movement protesting workplace automation in England begins.

1820:

Babbage’s difference engine, an automatic mechanical calculator, begins the computer revolution.

1913:

Ford Motor company introduces the first assembly line.

1916:

One of the first automated weapons, The Torpille Terrestre or “Land Crocodile,” is used in World War I.

1920:

Playwright Karel Čapek coins the term “robotic.”

1927:

The Fritz Land film Metropolis reflects on both automation and the possibility of AI.

1936:

Alan Turing publishes On Computable Thought, which paves the way for the study of AI. Charlie Chaplin’s film Modern Life reflects on automation.

Robotics_2.indd xvii

10/2/2020 12:08:23 PM

xviii | Historical Timeline “Robot Liliput,” thought to be the first robot toy, is introduced by a Japanese company selling in the United States. 1939:

The Futurama exhibit at the New York World’s Fair inspires widespread interest in robotics and futurism. The Russian military utilizes remote control “teletanks” in their invasion of Finland.

1943:

Walter Pitts and Warren McCulloch publish A Logical Calculus of the Ideas Imminent in Nervous Activity, beginning the era of neural network programming.

1944:

The first programmable computer, Colossus, is created to help break German military codes during World War II.

1945:

American engineers, sponsored by the military, debut large-scale computer ENIAC.

1946:

The B-17 Flying Fortress, one of the world’s first automated aerial weapons, is introduced.

1948:

Isaac Asimov publishes the first of his series of science fiction stories featuring robotic beings.

1949:

William Walter Grey introduces his robot tortoises.

1950:

Isaac Asimov coins the idea of the “Laws of Robotics.” Alan Turing comes up with the Turing Test for AI systems. Alan Turing debuts the first computer algorithm for playing chess. The Atomic Robot Man toy is sold at the 1950 New York Sci Fi convention.

1951:

Robotics_2.indd xviii

Jay Wright Forrester’s Whirlwind computer debuts.

10/2/2020 12:08:23 PM

Historical Timeline | xix 1952:

The UNIVAC computer evaluates election results during the 1952 Dwight D. Eisenhower/Adlai Stevenson election. Bell Laboratories develops the Audrey speech recognition system.

1954:

A computer program capable of translating English to Russian is introduced. Robert the Robot starts a robot toy craze in the U.S.

1955:

Engineer Arthur Samuels introduces a program that can play checkers.

1956:

World’s first Artificial Intelligence conference is held. The pioneering AI program Logical Theorist is introduced. Robbie the Robot is introduced in the film Forbidden Planet.

1957:

Alex Bernstein debuts the first chess computer.

1958:

The LISP program introduces the field of natural language computing, which allows computers to read text, hear and interpret speech, and measure sentiment.

1959:

World’s first academic department focused on AI is established at Massachusetts Institute of Technology. Computer Assisted Manufacturing (CAM) is introduced at MIT. NASA introduces the Project Ranger program to create robotic craft able to survey the moon.

1960:

Robotics_2.indd xix

The first industrial robot, UNIMATE is introduced at General Motors.

10/5/2020 11:17:10 AM

xx | Historical Timeline Inventor and scientist Manfred Clynes coins the term “cyborg” for a blend of human and machine components. 1961:

Engineers at Stanford University introduce the “Cart,” which was the first automated “Lunar Rover.”

1962:

J.C.R. Licklider proposes a “galactic network,” foreshadowing the creation of the Internet. The cartoon series The Jetsons encourages interest in futurism. IBM releases the Shoebox speech recognition program.

1964:

Prominent social scientists ask President Lyndon Johnson to address the threat of automation. The robotic craft Ranger 7 successfully transmits images of the moon back to Earth.

1966:

The first chatbot, ELIZA, is introduced. NASA introduces the Surveyor line of robotic space vehicles. Engineer Jim Sutherland becomes the first to install a computer to manage his home, a forerunner of the “smart home” trend of the twenty-first century.

1968:

The SHRDLU program is the first program capable of utilizing natural language commands.

1969:

The ARPANET, a precursor of the World Wide Web, debuts.

1970:

Shakey, the world’s first autonomously mobile robot, is introduced by SRI International.

1971:

DARPA launches the Speech Understanding Research program, which produces the Harpy speech recognition system, with roughly the vocabulary of a three-year-old child.


1972:

The PARRY program, a chatbot that simulates a person with paranoid schizophrenia, is introduced. The first remote-controlled bomb disposal machine is invented in Britain.

1973:

The Lighthill Report discourages further government investment in AI in the UK. The first AI winter, a period of stagnation in AI research, begins.

1977:

The Star Wars film franchise introduces “droids,” robots designed to engage emotionally with audiences. Japan’s Tsukuba Mechanical Engineering Laboratory introduces the first serious automated car design. NASA launches the automated Voyager probes to explore the solar system.

1980:

Another surge in AI research begins, ending the first AI winter. AI research pioneer Marvin Minsky introduces the idea of “telepresence.”

1982:

The Novag Robot Adversary robotic chess game is released.

1984:

Carnegie Mellon University introduces a series of robots to explore and help remediate the toxic Three Mile Island nuclear meltdown site.

1985:

Carnegie Mellon University’s AI department introduces the mobile robot “Denny.” The PUMA 560 robotic arm becomes the first surgical robot when it is used to complete the first robotic surgical procedure, a brain biopsy.


The Omnibot 2000 becomes the world’s first truly robotic toy.

1986:

Engineers at Honda begin the research program that eventually produces ASIMO, a robot that emulates human movement and capabilities.

1987:

Reports from within the field lead to a second AI winter. German engineers introduce the VaMoRs automated car.

1990:

Dragon Systems Corporation releases Dragon Dictate, the first commercially available speech recognition software.

1992:

The Dante walking robot is developed to explore volcanic craters, beginning with Antarctica’s Mount Erebus; its successor later explores Alaska’s Mt. Spurr volcano. The PROBOT surgical robot debuts in England. The robotic surgical assistant ROBODOC is introduced in the U.S.

1993:

Innovations in AI lead to a resurgence of interest, ending the second AI winter. Mathematician and science fiction author Vernor Vinge theorizes about a coming “singularity” that will mark the beginning of a new age of AI and the end of the human era.

1994:

The Robot Wars competitions in San Francisco start the robotic sports craze.

1995:

The introduction of GPS guidance enables more advanced automated aerial weapons. The Predator, a remotely piloted drone later adapted for combat, is introduced.

1996:


NASA launches the Mars Pathfinder mission, carrying Sojourner, the first Martian rover, designed to explore the Martian surface.


Chess Grand Champion Garry Kasparov loses a game to IBM’s Deep Blue chess computer. The RealDoll, a lifelike sex doll, is introduced in the United States. Neuroscientist Philip Kennedy installs electronic components in a human brain for the first time.

1997:

The world’s first robotically enhanced prosthetic, the C-leg, is introduced. Wireless fidelity (WiFi) technology is introduced to the consumer market and transforms consumer technology.

1998:

Sony releases the first in its line of robotic pet dogs, AIBO. Engineers at MIT and Los Alamos National Laboratory introduce the first quantum computer.

1999:

The PIONEER robots are used by Ukrainian officials to study the site of the Chernobyl nuclear disaster. Engineer Kevin Ashton introduces the idea of the “Internet of Things.”

2000:

The Da Vinci Surgical System medical robot line is introduced in California. The first “smart appliance,” a refrigerator, is introduced by the LG corporation.

2002:

The Roomba, a robot vacuum, is introduced.

2003:

The first telepresence robots are tested in Johns Hopkins hospital.

2004:

DARPA begins a program to investigate and develop driverless car technology.


The Opportunity and Spirit rovers land on Mars.

2010:

The Siri virtual assistant debuts as an iPhone app; Apple acquires the technology the same year and later builds it into the iPhone.

2011:

Robots are used to search the site of the 2011 Japanese earthquake. NASA launches the Curiosity Rover to explore the surface of Mars. The Spirit rover is deactivated after becoming stuck on the Martian surface.

2012:

NASA’s Voyager 1 officially leaves the solar system. Scientists find the long-theorized Higgs boson particle, which explains how other elementary particles acquire mass.

2013:

Canadian researchers begin the HitchBot program to investigate human attitudes toward robotics.

2014:

The film Interstellar introduces innovative robots CASE and TARS, which demonstrate non-humanistic robot design. Knightscope Industries introduces modern robotic security guards for corporate use. The Amazon Echo includes Amazon’s virtual assistant program Alexa and starts the “smart home” trend.

2015:

The New Horizons spacecraft completes its flyby of Pluto at the outer edge of the solar system. The Tesla Company introduces partial automation in its vehicles. The HitchBot robot is destroyed in Philadelphia.

2016:


DeepMind’s AlphaGo program beats world champion Go player Lee Se-dol.


2017:

One hundred sixteen founders and leaders of robotics and AI companies sign an open letter to the United Nations warning against the development of automated weaponry. In a publicity stunt, Saudi Arabia grants citizenship to the robotic system called “Sophia.”

2018:

NASA’s automated probe, Voyager 2, becomes the second human-made object to leave the solar system. Honda engineers debut their 3E line of robots designed to aid in human life.

2019:

Research by Oxford University and other sources suggests that between 20 and 70 million jobs may be lost to automation by 2030.

2020:

Amazon announces plans to add new artificial intelligence capabilities to the Amazon Echo, enabling users to teach their Echo to handle requests more intelligently.


ENDOWED WITH MOTION Robots in Myth and Legend (800 BCE–17th century)


Introduction

Long before humans had any knowledge of mechanics, engineering, or electrical circuitry, they were dreaming of automatons: artificial beings that moved and performed tasks independent of human intervention. The concepts of robotics and artificial intelligence (AI) in Western culture reflect the traditions of Greek, Egyptian, and Asian mythology and the pioneering work of engineers who created the world’s first “automated” machines. The dream of robotics and AI reflects the constant and continuing effort to transcend human ability and to understand the human condition by attempting to imitate the function of the human body and mind.

Topics Covered in this Chapter Include:

• Greek mythology
• Early automata
• Leonardo da Vinci

This Chapter Discusses the Following Source Documents:

Homer, The Odyssey
Callistratus, Descriptions
Lie Yukou, Liezi
Homer, The Iliad


The myths of many countries contain legends about artificial beings, some humanistic in form, others resembling animals or objects. Before the term “robots” came into fashion in the 1920s, a moving mechanical device designed to look human was known as an “automaton,” from the Greek term meaning something that acts on its own. Early myths contained not only automatons but machines that imitated the behavior of animals or that gave autonomous motion to typically inanimate objects, like lamps, doors, or other household objects. The shared characteristic was that all of these mythological things had some form of “intelligence,” and so they might all be described as “artificial beings” of one type or another.

Silver didrachma from Crete (c 300 BC) depicting an ancient mythical automaton named Talos, via Wikimedia.

Mythological automatons and other artificial beings represent several goals common to the evolution of society. Ancient humans imagined transcending the limits of biology through artificial beings with advanced abilities like incredible strength and speed. Myths of artificial beings reflect a desire to find technological solutions to practical problems and to understand form and function in nature. By imagining how nature could be replicated, philosophers and proto-engineers also expressed a desire to better understand the natural world. These same motivations continue to guide scientists, engineers, and inventors in the fields of robotics and artificial intelligence. Mythological robots gave way to early machines with automatic movement. These curiosities were simplistic by modern standards, but they represent the first steps toward the sciences of robotics and artificial intelligence as they exist in the twenty-first century. The crucial achievement of the early automated machines was to separate the action of the machine from the hands of the operator, thus demonstrating independent programmed action. Though these devices were no more complex than the simplest wind-up toys, early automation fascinated the masses and also planted fears about humanity’s intrusion into the realm of divine creation.


Machines of the Gods

What is a human? What is a living organism? Such questions hinge on one’s worldview, whether it is grounded in the Abrahamic faiths, Buddhism, Taoism, or any other tradition. The Abrahamic faiths are a family of religions that emerged in the Middle East, the cradle of what is today called Western civilization. Judaism was the first of the Abrahamic faiths, growing out of the religious traditions of the ancient Israelites; Christianity emerged centuries later in a Greco-Roman world that was also home to the “mystery cults,” alternative religions to the classical Greek and Roman traditions. Greeks and Romans embraced a diverse group of gods, known as a “pantheon,” but followers of the mystery cults tended to devote themselves to a single god, and many embraced a “messianic” structure, meaning that a central messenger from the divine realm is sent to lead the people. Islam, the third Abrahamic faith, arose in seventh-century Arabia. All three faiths share the belief that there is a single god, depicted as male, who created humanity in his image and the world for the benefit of humanity.1 The Abrahamic faiths are an example of human-centric religions, and they have often stood in opposition to science because any development that threatens this human-centric view of the world threatens basic precepts of the faith. Further, the Abrahamic religions became entwined with authoritarian states that also viewed science as a challenge to their power. From very early on, Abrahamic faith leaders opposed the idea of creating machines that imitated or replicated the abilities of humans and other species, often arguing that participating in the creation of beings usurps a power that belongs only to God. Some have long argued that science and religion need not be incompatible, and many religious believers in the modern world embrace the findings of science.
But, over the centuries, religion has been one of the greatest impediments to scientific research, and scientists were imprisoned, killed, or professionally ruined for voicing support for scientific concepts, such as the idea that the earth revolves around the sun. Science and empirical logic eventually came to dominate despite this opposition, and the evolution of scientific thought in Western civilization owes more to the religious and cultural traditions of ancient Greece than to the Abrahamic tradition.2 Students of history often imagine ancient Greece as a single culture, but in reality it was a diverse group of city-states, each independent but united through a complex series of mutual defense and trade agreements. A number of studies have shown that more diverse environments produce more innovation.3 The wider range of viewpoints and experiences provides a more fertile substrate for new ideas and often leads to more interesting paths
of cultural evolution. Ancient Greece encompassed influences from Asia, the Middle East, and Africa, and this filtered into Grecian philosophy and stimulated the evolution toward the scientific method. The religion of Greece also reflected this diversity, as each city-state had similar gods but might embrace slightly different legends about those gods. The city-states also prized artists and craftsmen. This emphasis on art, philosophy, and innovation provided the preconditions for the development of scientific thought, and the legends and myths of the Greek gods also reflect this protoscientific evolution. Unlike the comparatively conservative traditions of the Abrahamic faiths, the ancient Greek religion was polytheistic, embracing a variety of gods and lesser divine beings that represented various manifestations of the natural world. The gods were not all-powerful or all-knowing, and their legends frequently reflected human traits or reflected upon natural forces, such as the tides, the rising and falling of the moon, and the changing of the seasons. The gods interacted with and often had sex with humans, resulting in extraordinary individuals called “demigods” or “heroes” capable of fantastic accomplishments and discoveries. The roots of Western science can be found in this interplay between the divine and the mundane. The Greek god Hephaestus, who represented the arts of metalworking, masonry, and sculpture, might also serve as the Greek god of robotics. In a number of legends he is described as being capable of creating machines so complex that they were endowed with independent thought and action. The Greek pantheon was the source for the Roman gods, who often had similar roles but different names. After the Roman takeover of Greece, Hephaestus became Vulcan, who was likewise the god of fire, metalworking, and craftsmanship.
The fantastic creations of Hephaestus/Vulcan are depicted in the famous works ascribed to the eighth-century BCE poet known as “Homer,” whose two epic poems, the Iliad and the Odyssey, remain among scholars’ most important sources on Greek religion and mythology. In the Odyssey, Homer writes of Vulcan:


THE ODYSSEY by Homer

Source Document Excerpt

On either side there stood gold and silver mastiffs which Vulcan, with his consummate skill, had fashioned expressly to keep watch over the palace of king Alcinous; so they were immortal and could never grow old.4

The mechanical dogs created by Vulcan (or Hephaestus, depending on the source) were called Khryseos (Golden) and Argyreos (Silvery) and stand as one of the first descriptions in the Western canon of a robotic machine. The Greeks did not reserve the power to create machines that imitated life for the gods alone. Although Hephaestus/Vulcan may have invented this skill, it could be learned by mortal humans. The craftsman and artist Daedalus is best known for the failed exploits of his son, Icarus, who attempted to fly using artificial wings to escape a tyrant king. Daedalus also created the famed Labyrinth of King Minos, an innovative maze designed to keep the Minotaur (half-man, half-bull) from wreaking havoc on the kingdom. Many stories also ascribe to Daedalus nearly divine power to create machines with fantastic properties, including moving machines. The Greek writer Callistratus, who lived in the third or fourth century CE, described the history behind several Greek statues and their relation to the Greek pantheon in his book Descriptions. Callistratus also claimed that Daedalus had the ability to create moving statues. In his depiction of the statue of Eros, Callistratus wrote:

DESCRIPTIONS by Callistratus

Source Document Excerpt

On the Statue of Eros

“As I gazed on this work of art, the belief came over me that Daedalus had indeed wrought a dancing group in motion and had bestowed sensation upon gold. . . ”


In some accounts, this ability was perfected later by the sculptor Praxiteles of Athens, who, unlike Daedalus, was an actual person whose existence has been confirmed by historians. He is remembered primarily for sculpting the Aphrodite of Knidos, the first known life-sized depiction of the naked female form.5 In his description of the statue of Dionysus, Callistratus says of Daedalus and Praxiteles:

Descriptions continued

On the Statue of Dionysus

Daedalus, if one is to place credence in the Cretan marvel, had the power to construct statues endowed with motion and to compel gold to feel human sensations, but in truth the hands of Praxiteles wrought works of art that were altogether alive. There was a grove, and in it stood Dionysus in the form of a young man, so delicate that the bronze was transformed into flesh, with a body so supple and relaxed that it seemed to consist of some different material instead of bronze: for though it was really bronze, it nevertheless blushed, and though it had no part in life, it sought to show the appearance of life and would yield to the very finger-tip if you touched it, for though it was really compact bronze, it was so softened into flesh by art that it shrank from the contact of the hand. It had the bloom of youth, it was full of daintiness, it melted with desire, as indeed Euripides represented him when he fashioned his image in the Bacchae. A wreath
of ivy encircled his head—since the bronze was in truth ivy, bent as it was into sprays and holding up the curly locks which fell in profusion from his forehead. And it was full of laughter, nay, it wholly passed the bounds of wonder in that the material gave out evidence of joy and the bronze feigned to represent the emotions. A fawn-skin clothed the statue, not such as Dionysus was accustomed to wear, but the bronze was transformed to imitate the pelt; and he stood resting his left hand on a thyrsus, and the thyrsus deceived the beholder’s vision; for while it was wrought of bronze it seemed to glisten with the greenness of young growth, as though it were actually transformed into the plant itself. They [sic] eye was gleaming with fire, in appearance the eye of a man in a frenzy; for the bronze exhibited the Bacchic madness and seemed to be divinely inspired, just as, I think, Praxiteles had the power to infuse into the statue also the Bacchic ecstasy.

In his description of the Statue of Memnon, Callistratus goes on to state that the artisans of Ethiopia had reportedly taken another step on the road toward robotics. While Daedalus had been able to create metal statues that could move and feel, the Ethiopians created statues that could also talk:


Descriptions continued

On the Statue of Memnon

I wish to describe to you the miracle of Memnon also; for the art it displayed was truly incredible and beyond the power of human hand. There was in Ethiopian an image of Memnon, the son of Tithonus, made of marble; however stone though it was, it did not abide within its proper limits nor endure the silence imposed on it by nature, but stone though it was it had the power of speech. For at one time it saluted the rising Day, by its voice giving token of its joy and expressing delight at the arrival of its mother; and again, as day declined to night, it uttered piteous and mournful groans in grief at her departure. Nor yet was the marble at a loss for tears, but they too were at hand to serve its will. The statue of Memnon, as it seems to me, differed from a human being only in its body, but it was directed and guided by a kind of soul and by a will like that of man. At any rate it both had grief in its composition and again it was possessed by a feeling of pleasure according as it was affected by each emotion. Though nature had made all stones from the beginning voiceless and mute and both unwilling to be under the control of
grief and also unaware of the meaning of joy, but rather immune to all the darts of chance, yet to that stone of Memnon art had imparted pleasure and had mingled the sense of pain in the rock; and this is the only work of art of which we know that has implanted in the stone perceptions and a voice. Daedalus did indeed boldly advance as far as motion, and the materials of which they were made and to move in the dance; but it was impossible and absolutely out of the question for him to make statues that could speak. Yet the hands of Aethiopans discovered means to accomplish the impossible, and they overcame the inability of stone to speak. The story runs that Echo answered this Memnon when it spoke, uttering a mournful note in response to its mournful lament and returning a mimicking sound in response to its expressions of joy. The statue in questions both lulled to rest the sorrows of Day and caused her to abandon her search for her son, as though the art of the Aethiopans were compensating her by means of the statue for the Memnon who had been snatched away from her by fate.6

The depictions of living statues contained in records of Greece and Rome indicate that robotics was not the province only of the divine, a fundamental difference from the imagined roles of the divine and human in the Abrahamic faiths. The Greek pantheon empowered human creation and innovation, part of the reason that Greece became the cauldron for the philosophies that would later lead to the Western scientific revolution.

A Shared Vision

The ancient Greeks were not the first or only ancient society to develop myths of fantastic mechanical or artificial life. In ancient Egypt, there were likewise legends of living metal machines, and these legends fascinated humans for untold centuries. Similar myths can also be found in ancient Chinese society. Like Greece, the ancient cultures of both China
and Egypt were polytheistic and had societies that elevated artistry and craftsmanship. Attempts to create automated creatures grew out of this religious-cultural focus. China, like Greece, also had specific gods and heroes who represented the arts and crafts in their society. For instance, the famous philosopher Kǒngzǐ (Confucius) is described as having a friend named Gongshuzi who was one of the greatest craftsmen of the era. According to legend, Gongshuzi was able to create mechanical birds out of bamboo that would fly for up to three days. He is also said to have created wooden mechanical horses capable of carrying an actual carriage through the streets. Gongshuzi was eventually adopted as the patron saint of carpenters and craftspeople. Also like Greece, Chinese mythology does not restrict the capability to create automata to the gods, but describes this as one of the highest levels of human skill. Yan Shi the Artificer, a legendary engineer, crafted a life-sized automaton that could move and sing for King Mu, the fifth king of the Zhou Dynasty. The story comes from the Liezi, a volume of stories composed sometime in the fourth century BCE:

LIEZI

by Lie Yukou

Source Document Excerpt

The king stared at the figure in astonishment. It walked with rapid strides, moving its head up and down, so that anyone would have taken it for a live human being. The artificer touched its chin, and it began singing, perfectly in tune. He touched its hand, and it began posturing, keeping perfect time. . . . As the performance was drawing to an end, the robot winked its eye and made advances to the ladies in attendance, whereupon the king became incensed and would have had Yen Shih executed on the spot had not the latter, in mortal fear, instantly taken the robot to pieces to let him see what it really was. And, indeed, it turned out to
be only a construction of leather, wood, glue and lacquer, variously colored white, black, red and blue. Examining it closely, the king found all the internal organs complete—liver, gall, heart, lungs, spleen, kidneys, stomach and intestines; and over these again, muscles, bones and limbs with their joints, skin, teeth and hair, all of them artificial. . . . The king tried the effect of taking away the heart, and found that the mouth could no longer speak; he took away the liver and the eyes could no longer see; he took away the kidneys and the legs lost their power of locomotion. The king was delighted.7


Machines for Thought

In the modern world, robotics and artificial intelligence are correctly perceived as existing within the realm of science and engineering, but in the ancient world the idea of moving machines, or proto-robots, was often associated with the divine. This association between proto-robotics and divinity is clearly reflected in an unusual historical artifact of ancient Egyptian culture. Like ancient Greece, ancient Egyptian society was, over its long history, a collection of regional centers that were culturally united but also distinct. This cultural diversity proved an unusually rich environment for art and craftsmanship and also for mythology. The ancient Egyptian myths tell stories of a large, diverse pantheon of gods and demigods whose powers and adventures reflect on nature and human evolution. Also like ancient Greece, the ancient Egyptian culture had gods and demigods who represented the arts of sculpture and metalworking, and myths of these gods are tied to the development of technological innovations. Archaeologists studying ancient Egypt have found evidence that some religious sects used the myth of animate machines to convince visitors to temples of worship that they were having a divine experience. Large stone statues dating back at least 4,500 years have been found that contain a hollowed, open section where an individual, most likely a priest, could remain hidden. Visitors to the temples might seek advice from the gods, and the priests concealed within the statues would offer it, speaking through the statue as if the voice were emanating from the god it represented. It is unclear whether visitors knew that the advice was coming from a priest or believed that a god was speaking to them, but the invention of this religious theater demonstrates the association between divine magic and the spark of animation.
It is also believed that the ancient Egyptians created ways to make some of their statues move, further invoking heavenly power by allowing statues of divine figures to appear to move on their own.a

Works Used

a. McCorduck, Pamela. Machines Who Think. New York: Taylor & Francis, 2004.


The fact that the mythology of automata was not confined to a single region or civilization helps to explain why the invention of actual automated devices also occurred in several places around the world. The early inventors and engineers who created the first automata were inspired by the legends of gods and heroes. For instance, famed Italian inventor Leonardo da Vinci created an automated knight around 1495, one of the earliest recorded functional automated machines. Da Vinci’s creations were at the intersection of art, science, and engineering, reflecting the unusually inventive mind of their creator. In creating what is now known as his “robot knight,” da Vinci first built mechanical models of muscles and joints in the human body in an attempt to distill the messy business of biological form into components that could be replicated in wood and metal. Though the exact design of da Vinci’s creation has been lost, and engineers are therefore uncertain how the machine functioned, it is believed that it utilized a “clockwork” mechanism: a spring could be wound and would then drive a fixed set of movements once the winding mechanism was released. Da Vinci’s experiments in automation provide an excellent example of how the dream of independent machines progressed, from myths of mechanical dogs and statues to inventors who attempted to imitate biological form and function through engineering.8 In Japan, ingenious craftspeople of the sixteenth and seventeenth centuries created their own automated devices, known as “karakuri ningyō,” which essentially translates to “trick” or “magic” puppet. The term covers a wide variety of inventions powered by springs and metal gears that were sold for entertainment. Miniature humanoid versions of these devices were designed to imitate samurai swordsmanship, while automated maidens performed simple dance moves.
The engineers who created these devices did not confine themselves to the human form, but built moving animals as well. Karakuri ningyō proliferated in the 1700s, when hundreds of these devices were commissioned to serve as displays in shops and other official buildings. At the time, they were a popular curiosity, but writing from the era also indicates that the creation of karakuri ningyō was considered the cutting edge of mechanical design.9 Da Vinci’s moving knight and the karakuri ningyō of Japan were created for entertainment purposes, and automated machines of this kind are still displayed for amusement in museums and commercial window displays, but the ancient myths and the early experiments in automation were not only about entertainment. One can also find, in ancient texts and legends, the idea that automated devices could augment human life and abilities. For instance, da Vinci is believed to have created an automated carriage capable of moving on its own.
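The clockwork principle behind such devices can be pictured, in modern terms, as a fixed program replayed by stored energy. The following is a minimal, purely illustrative Python sketch; the class, movement names, and numbers are invented for illustration and are not based on da Vinci's lost design or any surviving karakuri mechanism:

```python
# Illustrative model of a wind-up automaton: the cams encode a fixed
# "program" of movements, and winding the spring stores the energy
# that replays that program, step by step, with no operator involved.

class ClockworkAutomaton:
    def __init__(self, movements):
        self.movements = movements   # fixed sequence "programmed" into the cams
        self.turns = 0               # energy stored by winding the spring

    def wind(self, turns):
        """Wind the spring, storing enough energy for that many steps."""
        self.turns += turns

    def release(self):
        """Replay the programmed movements until the spring runs down."""
        performed = []
        for move in self.movements:
            if self.turns == 0:
                break                # spring exhausted mid-program
            self.turns -= 1
            performed.append(move)
        return performed

knight = ClockworkAutomaton(["raise visor", "lift arm", "turn head", "lower arm"])
knight.wind(3)
print(knight.release())  # only the first three movements: the spring runs down
```

The point of the sketch is the separation the chapter describes: once wound, the device acts out its stored sequence independently of the operator's hands.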


Model of a robot based on drawings by Leonardo da Vinci at 2005 Berlin Genie exhibit. By Erik Möller, via Wikimedia.


Karakuri with mechanism at the National Museum of Nature and Science, Tokyo, via Wikimedia.

Although this was a creation of fancy, the underlying idea is not far from the concept of the self-driving car that has become the focus of intense engineering and development in the twenty-first century. The Chinese legends of Gongshuzi and his wooden automated horses reflect a similar dream of creating automated devices that would enhance human life by helping humans travel from one place to another. Robotics and artificial intelligence are, on one hand, born from the desire to understand and imitate biological capabilities, but the other side of the coin is the goal of enhancing human life, even creating machines capable of performing tasks beyond human abilities. This motivation is present even in the earliest legends of automata and living machines. Homer, for instance, writes of Hephaestus in the Iliad:



ILIAD

by Homer
Source Document Excerpt

“[Hephaestus] was sweating as he turned here and there to his bellows busily, since he was working on twenty tripods which were to stand against the wall of his strong-founded dwelling. And he had set golden wheels underneath the base of each one

so that of their own motion they could wheel into the immortal gathering, and return to his house: a wonder to look at. These were so far finished, but the elaborate ear handles were not yet on. He was forging these, and beating the chains out.”10

According to Homer, Hephaestus had created metal tripods (which were used to hold fire for light) that were capable of moving on their own and following him about as he moved through his home. Though mystical to Homer and observers of the era, the passage essentially describes a simple “smart light,” far simpler, in fact, than many of the devices on the consumer market in the twenty-first century. The dream of automation was always also about enhancing human life by building machines to substitute for human effort. As automation proceeded beyond the simple clockwork devices of early engineers into the era of electrical manufacturing, the development of robotics followed both of these ancient paths, on one hand attempting to imitate and reflect the natural world, and on the other attempting to solve practical human problems through technological innovation.

CONCLUSION

The appearance of artificial beings in mythology from around the world demonstrates that the idea of creating mechanical imitations of living beings is ancient. Scientists and philosophers are still participating in a tradition that can be traced back to ancient Greek myths and engineering and metalworking guilds. Despite the many years it took for mechanical engineering to advance enough to produce the first automatons, interest in making this technological leap never disappeared. The earliest automatic machines resulted in an unexpected identity crisis as humanity gradually began to realize that automation could, and perhaps would, change the nature of human life and work.



DISCUSSION QUESTIONS

• How does the philosophy of artificial beings reflect the desire to surpass human capabilities?
• How does the philosophy of artificial beings reflect the desire to understand nature and the function of the natural world?
• Why do some religions oppose the creation of artificial life?
• Is a robot an artificial life form? Why or why not?

Works Used

Beard, Mary. How Do We Look: The Body, the Divine, and the Question of Civilization. Liveright Publishing, 2018.
“Callistratus.” Theoi Project. Classical Texts Library, 2017, www.theoi.com/Text/Callistratus.html.
Coyne, Jerry. “Yes, There Is a War between Science and Religion.” The Conversation, 21 Dec. 2018, theconversation.com/yes-there-is-a-war-between-science-and-religion-108002.
Dirks, Jerald. The Abrahamic Faiths: Judaism, Christianity, and Islam: Similarities and Contrasts. Amana Publications, 2004.
Homer. The Iliad. Translated by William Cowper. Project Gutenberg, 5 Aug. 2005, www.gutenberg.org/files/16452/16452-h/16452-h.htm.
Homer. The Odyssey. Translated by Samuel Butler. Project Gutenberg, Apr. 1999, www.gutenberg.org/files/1727/1727-h/1727-h.htm.
Ivanhoe, Philip J., and Bryan W. Van Norden. Readings in Classical Chinese Philosophy. Hackett Publishing, 2001.
Levine, Stuart R. “Diversity Confirmed to Boost Innovation and Financial Results.” Forbes, 15 Jan. 2020, www.forbes.com/sites/forbesinsights/2020/01/15/diversity-confirmed-to-boost-innovation-and-financial-results/#11e56cbcc4a6.
Needham, Joseph, and Colin A. Ronan. The Shorter Science and Civilization in China. Cambridge UP, 1978.
Rosheim, Mark. Leonardo’s Lost Robots. Springer, 2006.
Sone, Yuji. Japanese Robot Culture: Performance, Imagination, and Modernity. Palgrave Macmillan, 2017.


2. AUTOMATING WORK
Mechanization and Human Capability (1590–1900)

Introduction

Since the 1970s and 1980s, robotics and computer science have been replacing human workers in a wide variety of fields, from factory labor to telephone reception to computer coding. Much like the factory production lines of the Industrial Revolution, the robotics revolution is forcing an economic and workforce transformation that many say benefits a few and disadvantages many. The fear that automation will eliminate jobs and deepen economic inequality is hundreds of years in the making, and it evolved along with a series of inventions that pointed toward the potential for robotics and computers to perform work in a variety of fields.

Topics Covered in this Chapter Include:

• Automation
• Unemployment
• Early computers

This Chapter Discusses the Following Source Document: “Babbage Machine,” New Ulm Review, September 30, 1896



Robotics and artificial intelligence research has followed two different but related paths. On one hand, robotics is about recreating and understanding the physical capabilities of humans and animals. On the other, robotics and AI have been about replacing human effort. Some of the earliest practical developments in automation sparked concerns that machines would disrupt livelihoods, and this was the beginning of a workers’ rights movement against automation. Though early automation efforts in the nineteenth century proved less impactful than feared, researchers were on the cusp of developing machines that could not only work but also calculate, a step toward machines that could replace both human physical and mental capabilities.

Replacing Workers

In sixteenth-century England, both women and men covered their legs in hosiery, creating a booming stocking industry that shifted from cloth to silk stockings after Queen Elizabeth received a pair of silk stockings in 1561 from Mrs. Montague, a woman described in The Gentleman’s Magazine as the queen’s “silk woman.” The queen was reportedly quite taken with silk stockings, which became a popular gift in her court, and she invested in the silk textile industry.1 The queen’s personal interest in silk stockings may have influenced her receptiveness when Lord Hunsdon brought her a pair of cloth stockings created by British inventor William Lee. Unlike the hand-knit, custom stockings that the wealthy wore at the time, these were reportedly somewhat crude, and they were not made of silk. But this was not the most unusual thing about them. The stockings had been made not by hand but by machine, on the world’s first automated production machine, a device capable of knitting a pair of stockings. Anecdotes from the era suggest that Lee may have been motivated by a desire to spare his wife the difficult work of knitting, but, whatever the reason, Lee became a pioneer in automation, establishing a business with his brother in Calverton and eventually in London, where Hunsdon came across the automated stocking machine and brought it to the attention of the queen. Though the queen was an avid fan of stockings, she was not as supportive of Lee’s invention. When Lee met with the queen, she stated, “My Lord, I have too much love for my poor people who obtain their bread by the employment of knitting, to give my money to forward an invention that will tend to their ruin, by depriving them of employment, and thus make them beggars.”2 Queen Elizabeth might have seen the dangers inherent in industrial automation, but her concerns would not prove universal. Lee tried once more, unsuccessfully, to obtain a patent from Queen Elizabeth for a machine that knit silk stockings, and later from her successor, King James I. Lee then traveled to France, where his device was warmly received by King Henry IV. There were nine working machines in total when Lee arrived in France, and for five years Lee’s machine-made stockings were popular in French high society. The assassination of King Henry IV sent Lee back to England, where the tide had turned, and he was finally able to obtain a patent. The automated knitting industry grew slowly over the next few decades. In the 1650s, the revolution that placed England under the leadership of Oliver Cromwell saw a surge of interest in progressive development. Cromwell was a supporter of technological innovation, and he provided government funds to grow the automated knitting industry. A professional organization, still extant, known as the Worshipful Company of Framework Knitters, was established to manage the growing trade. In the mid-1600s, automated knitting was such a hot business in England that French corporate spy Jean Hindret traveled to England to steal technological secrets. He was successful, and King Louis XIV funded a shop for Hindret in Paris, which became the forerunner of a fast-growing automated knitting empire. Hindret was not the last to use corporate espionage to steal industry secrets. For decades, designs and refinements for automated knitting technology were in high demand, and there was a corporate war between would-be manufacturers until the technology became commonplace.3 By the nineteenth century, machine-made stockings were widely manufactured.
It is unclear whether workers in the stocking industry felt the same kind of pressures that twentieth-century industrial workers faced, but there was at least one famous episode that reflects a now-familiar struggle to balance human labor against mechanical production. In the early 1800s, a recession in both Britain and France led to protests within the textile industry, many of them involving workers in the mechanized stocking factories. In March of 1811, a group of workers smashed equipment at a textile manufacturing center near Nottingham. Over the next several months, mobs of workers attacked and smashed equipment at textile manufacturers across the country, carving a 70-mile path through the north of the country. The government was forced to post armed guards at textile factories, and the British Parliament even passed a law making “machine breaking” a capital crime. The Nottingham protestors, though small in number, set off a national controversy, and soon newspapers were filled with stories of nighttime raids and assaults on soldiers. The mob adopted a symbolic leader, Ned Ludd, and its members were subsequently referred to as Luddites. Ludd, sometimes referred to as Captain, General, or King Ludd, was never found and may have been based on a young apprentice named either Ludd or Ludham who took a hammer and smashed the needles of his knitting machine some 22 years earlier.

Penny magazine engraving of Luddites smashing looms, via Wikimedia.


The Nottingham protestors also sent letters to officials, reportedly originating from Ludd, whom they claimed (drawing on local mythology) lived in Sherwood Forest. The protestors commissioned sledgehammers from a local blacksmith named Enoch and chanted “Enoch made them, Enoch shall break them” while destroying knitting machines. In the years that followed, the Luddites remained the most famous “anti-technology” movement in Western history, and the term “Luddite” came to refer to any person who rejects or resists technology. This modern meaning, however, is something of a misconception. Most of the Luddites were highly skilled machine operators and mechanics. Rather than a return to the pretechnological age, the Luddites sought fair pay and better conditions.4 The Luddites caused such a stir that factory owners wrote letters to the monarchy requesting protection from the violence, though some owners engaged in violence themselves. In an 1812 mill protest near Manchester, the mill owner had his employees fire randomly into the crowd, killing at least three and wounding more than eighteen. Soldiers later helped break up the protests, and at least five more Luddites were killed. Dozens of Luddites were also hanged for property damage, and more were exiled to Australia before the last Luddite protest was held in 1816.5 In the twenty-first century, the term “Luddite” is often used to refer to someone who is extreme in their rejection of something, especially technology. The imagined Luddite of the modern world is conservative and often conceived of as “backward thinking.” But the Luddite movement was less about technology itself than about wage inequity. Over the years, technological disruption has come to be portrayed as “human vs. machine,” but it is, in reality, more complex.

Thinking About Work

The automated weaving revolution of the 1600s was a harbinger of the Industrial Revolution, which saw fundamental and often destructive shifts, but the real potential for automation would not be realized for centuries after the first automatic knitting machines were introduced. The timeline for automation as it is now known can be traced back to a series of key inventions of the early 1800s. Computers and calculators supplement human ability by taking over mental functions that are difficult or time-consuming for many. A simple addition problem such as 15,687,872 + 26,312,128 = 42,000,000 would take most people at least a minute to work out by hand, with a high probability of a simple mathematical error. A calculator, by contrast, can perform this and many other calculations instantly and accurately. Many people now carry devices capable of difficult calculations, and this is only a small part of what standard handheld computing devices can do. Apart from addition, subtraction, multiplication, and division, computers also replace human intellectual effort in the form of research. Algorithms now search through vast stores of information so that a person can find answers to questions that might have taken weeks, months, or years to find using traditional research techniques. While innovations like Lee’s mechanical knitting machine proved that mechanics could supplant and supplement human muscle power, the advance of computation machines proved that machines are also better at many intellectual activities. The invention that set off this computational evolution was the Jacquard loom, a simplistic device by modern standards but one that appeared nearly magical to many at its debut in 1804. The loom was created and patented by French weaver and merchant Joseph-Marie Jacquard. Jacquard’s device depended on a series of mechanical advancements attributed to French inventor Jacques de Vaucanson, but Jacquard synthesized this technology into a device that could weave complex patterns to create patterned fabrics. Prior to Jacquard’s invention, the process of making patterned cloth on a loom was quite complicated. A weaver used an assistant, known as the “draw boy,” who sat above the loom and raised and lowered sets of threads between the weaver’s passes, a very slow, labor-intensive process. Typically, only the wealthiest were able to afford patterned weaving, and only a highly specialized set of weavers could make such fabrics. Jacquard’s machine effectively eliminated the need for a draw boy by utilizing a set of cards with holes in them to determine which threads would be raised or lowered on each pass of the needles.
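The card-driven thread selection just described can be sketched in a few lines of modern code. The following Python sketch is purely illustrative (the card patterns and names are invented, and it makes no attempt to model the loom’s actual hook-and-needle mechanism): each card is a row of holes, and each pass of the shuttle raises exactly the threads whose positions have a hole.

```python
# Illustrative sketch: each "card" is a tuple of 1s and 0s. A 1 means
# the hook for that warp thread finds a hole and the thread is raised;
# a 0 means the card blocks it.

PATTERN_CARDS = [
    (1, 0, 1, 0, 1, 0, 1, 0),
    (0, 1, 0, 1, 0, 1, 0, 1),
]

def weave(cards, passes):
    """Return one text row per pass; '#' marks a raised warp thread."""
    rows = []
    for i in range(passes):
        card = cards[i % len(cards)]  # the card chain loops endlessly
        rows.append("".join("#" if hole else "." for hole in card))
    return rows

for row in weave(PATTERN_CARDS, 4):
    print(row)  # alternating "#.#.#.#." and ".#.#.#.#" rows
```

Swapping in a different card list changes the fabric without altering the machine, which is the essence of the replaceable punch card program discussed here.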
The pattern of holes punched in these cards, therefore, amounts to a rudimentary program, or algorithm, which tells the machine what pattern to weave, making the loom capable of reproducing the same pattern repeatedly with a fraction of the effort. This punch card system was the key to the next great age of automation. While an automated loom could follow and reproduce only a single pattern, a replaceable punch card program could be used to weave any pattern so long as it was coded into the holes on a new set of punch cards.6 The Jacquard loom was a major, nearly instant hit, and it brought about major changes in the textile industry at the same time the Luddite protests were happening. Jacquard’s automated weaving technology was so economically important that there were many instances of corporate espionage involving the theft of punch card sets used for in-vogue patterns. Companies offering automated weaving sprang up across Europe, and the explosion of decorative, woven fabric completely upended the weaving industry. Jacquard’s technology made woven fabrics more common and less expensive, and patterned fabrics were soon available to consumers across the income spectrum. While the development of the Jacquard loom disrupted the draw boy profession, the larger growth of the weaving market more than likely made up for this in the longer run.7

Punch cards on a Jacquard loom, National Museum of Scotland, by Stephencdickson, via Wikimedia.

Many historians have called Jacquard one of the most important figures of the Information Age because the punch card weaving patterns proved the key to unlocking the intellectual power of machines in the digital age. The holes could be used to encode human language and create coded instructions for a wide variety of analytical and intellectual processes. The holes and spaces between holes in the cards are analogous to the “ones” and “zeroes” that are still used to digitize programming for computational engines and are one of the most important keys to unlocking computer programming, an innovation that would later become the foundation of the global economy. Another important leap forward between Jacquard and the Digital Age was the invention of what was, in effect, the world’s first automated calculator. A series of machines named after scientist and inventor Charles Babbage have come to hold a special place as some of the most important stepping stones toward the modern computer age. Charles Babbage, born in 1791 in London, took an interest in mathematics while studying at Cambridge University, where he became a calculus lecturer for the Royal Academy and helped to establish Britain’s Astronomical Society in 1820. During the late 1820s and 1830s Babbage was one of the preeminent mathematics professors in the British academic community, but his lasting contribution to science was a series of inventions that Babbage designed and worked on with his friend and protégé Ada Lovelace.8

Scientific pioneer Ada Lovelace, c. 1850. Daguerreotype by Antoine Claudet, via Wikimedia.



Great Stems

Ada Lovelace (1815–52) was a woman of unusual ability and vision. The only child of poet Lord Byron and Anne Milbanke, Lovelace never knew her famous father and was raised by her mother, who was one of a rare sorority of women who pursued education in mathematics. Lovelace followed in her mother’s footsteps and studied science and music, becoming a formidable mathematician. Lovelace befriended Charles Babbage, the Lucasian professor of mathematics at Cambridge University, who was already famous for designing and building the world’s first mechanical calculator. Lovelace met Babbage in 1833, when she was 17 years old, and they began a correspondence that lasted for the rest of her life. In the 1840s, Lovelace began working with Babbage on a more powerful and adaptable calculating machine that he called his “analytical engine.” Lovelace and Babbage shared a detailed understanding of the mechanics and mathematics of the device, but Lovelace was better able to eloquently express their ideas, and she became the chief diplomat for Babbage’s program. Lovelace’s contributions to Babbage’s work were largely ignored because she was a woman in a male-dominated society. Women have been systematically discouraged from participating in math and science education, often justified by the incorrect premise that males are more suited to the kind of thinking involved in these pursuits. Over time, however, Lovelace’s contributions to Babbage’s attempts to create his analytical engine have become more familiar and, more importantly, Lovelace has increasingly been given credit for her own unique contributions to the field of computing. It was Lovelace, not Babbage, who saw the real potential for what they were trying to create. Lovelace saw that mechanical analysis didn’t need to be limited to mathematical processes. She imagined creating punch cards that could be used to tell a machine how to create music or how to evaluate linguistic ideas.
In the twenty-first century, it is Lovelace, not Babbage, who is often called the “first computer programmer,” or even the “mother of computing,” because she recognized the full potential of the process of computing. In her notes Lovelace wrote, “The science of operations, as derived from mathematics more especially, is a science of itself, and has its own abstract truth and value.” In another section, she writes, “This science constitutes the language through which alone we can adequately express the great facts of the natural world, and those unceasing changes of mutual relationship which, visibly or invisibly, consciously or unconsciously to our immediate physical perception, are interminably going on in the agencies of the creation we live amidst. A new, a vast, and a powerful language is developed for the future use of analysis, in which to wield its truths so that these may become of more speedy and accurate practical application for the purposes of mankind than the means hitherto in our possession have rendered possible.” In describing her “science of operations,” Lovelace was describing computing and the science of computer programming. Lovelace may have been the only person of her age who saw the true potential of what she and Babbage worked to create, and, though she would not live to see it, she is now posthumously honored for her vision.a

Works Used
a. Morais, Betsy. “Ada Lovelace, The First Tech Visionary.” The New Yorker, 15 Oct. 2013, www.newyorker.com/tech/annals-of-technology/ada-lovelace-the-first-tech-visionary.


Babbage’s first major invention was the “difference engine no. 1,” a complex device that essentially stands as the world’s first automatic calculator, an extremely heavy wood and metal affair that required a six-wheeled apparatus to move. Although seemingly simplistic by modern standards, Babbage’s calculator was perhaps one of the key scientific achievements of the era. Although the difference engines (several versions were produced) were a success, Babbage and Lovelace later set their sights on a more ambitious machine, something they termed an “analytical engine.” Inspired by the same punch card technology as Jacquard’s loom, the engine was designed to perform any arithmetic calculation by utilizing punch card instructions. Babbage and his partners never completed a fully operational version of this more advanced engine, but the designs and ideas left behind by Babbage and Lovelace demonstrate that they had some sense of the broader implications of what they were creating, understanding that the ability to process programs to complete analytical processes signified a new vista of research and technological development, one that could enhance the intellectual capabilities of the species in ways never before imagined.9
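The “difference” in the engine’s name refers to the method of finite differences: once a table’s starting value and its successive differences are known, every further entry of a polynomial table can be produced by addition alone, with no multiplication. A minimal Python sketch of that principle (an illustration of the mathematics, not a model of Babbage’s actual mechanism):

```python
# Sketch of the method of finite differences, the mathematical principle
# behind Babbage's difference engine: once the initial value and its
# successive differences are known, an entire polynomial table can be
# generated using nothing but repeated addition.

def difference_engine(initial_differences, steps):
    """initial_differences = [p(0), first difference, ..., constant difference].
    Returns the tabulated values p(0), p(1), ..., p(steps)."""
    cols = list(initial_differences)
    table = [cols[0]]
    for _ in range(steps):
        # each column is incremented by the column below it;
        # only addition is ever required
        for i in range(len(cols) - 1):
            cols[i] += cols[i + 1]
        table.append(cols[0])
    return table

# Tabulate p(x) = x^2 + x + 1: p(0) = 1, first difference 2, second difference 2.
print(difference_engine([1, 2, 2], 5))  # [1, 3, 7, 13, 21, 31]
```

A table of squares works the same way (starting value 0, first difference 1, constant second difference 2), which is why a machine built only of adding wheels could print entire books of mathematical tables.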

Charles Babbage’s difference engine no. 1, assembled by his son, Whipple Museum of the History of Science. By Andrew Dunn, via Wikimedia.


Though the most imaginative scientific minds may have been capable of seeing the potential of mechanical analysis, to most of the world the inventions were little more than amusing scientific exploits, more or less in keeping with the age of innovation and invention that characterized the early Industrial Revolution. Babbage’s engine was too complex and unwieldy to be practical at the time, and the project nearly led to his financial ruin. The New Ulm Review, a Minnesota newspaper, discussed Babbage’s abandoned analytical engine project in 1896:

“BABBAGE MACHINE”

New Ulm Review, September 30, 1896
Source Document Excerpt

A Mechanical Calculator That Cost England £17,000. An Invention That Attracted Wide Attention For Many Years, but Which Was Finally Abandoned as Being of No Practical Use.

Babbage’s calculating machine would require a whole volume to do its marvelous history anything like justice. In the year 1819 Babbage really commenced operations by taking a number of wheels to a wheel cutter at Lambeth to have the teeth cut in them. Towards the end of July, 1828, the inventor commenced upon the difference engine, and he worked on it for four years regularly, with the result that, in October, 1827, he had spent £3,475. The very first difference engine made, however, was put together between the year 1820 and June, 1822. It consisted of from six to eight figures. A larger and more perfect machine was afterward commenced in 1823 for the government. The latter was to have six orders of differences, each consisting of about 20 places of figures. It was also intended to print the tables it computed.

In 1827 Babbage’s wife died, and he was advised to travel on the continent, being in a low state of health. He left the drawings, however, in order that the work might be carried on in his absence, and he also gave his banker instructions to advance £1,000 while he was away. In the beginning of 1829 the government directed the Royal society to inquire into the machine, and the administration also directed that a fireproof building should be constructed in East street, Manchester square, close to Babbage’s house, 1 Dorset street, in which it was intended to place the machines when finished.

One day early in 1832, finding he could no longer make payments in advance, Babbage informed the engineer in charge of the works that in future he would not pay him until money was received from the treasury. Thereupon the mechanician struck work and dismissed his men. One of these, in receipt of 2 guineas a week, was afterward the famous engineer, Sir J. Whitworth. Babbage’s troubles had just commenced. His best draftsman came to him one day and said he had just received a tempting offer from the French government, whereupon his tortured employer had to give him a substantial increase of salary in order to retain his services. After the strike of the inventor’s men years of delay and anxiety followed, Babbage applying repeatedly to the government for its decision upon the subject, but in vain. Notwithstanding that the difference engine was suspended, this indomitable man still continued his inquiries and, having discovered principles of far wider extent, he ultimately embodied them in the analytical engine. Both machines can be seen on application at the South Kensington museum.

For upward of 20 years Babbage maintained, in his own house and virtually at his own expense, an elaborate establishment for carrying out his views. He died at his London house on Oct. 18, 1871, and Sir Robert Peel admitted to the house of commons, in March, 1843, that, although £17,000 had been spent by the government on the machine, Babbage himself had never received a shilling. And yet the invention was not wholly valueless. An eminent and wealthy manufacturer of Manchester came to London and saw this machine, and, on inspecting closely, he found mechanical contrivances which he subsequently introduced with the greatest advantage to his own spinning machinery. Of course, even after the machine had been definitely cast adrift by the government, a vast amount of interest was taken in it by the public. Many members of both houses of parliament were very fond of putting puerile questions to the inventor. ‘Pray, Mr. Babbage,’ cried one of the ancient dandies, ‘if you put the wrong figures into the machine would the right answers come out?’

The fame of Babbage’s calculating machine spread to the ends of the earth. Count Strzyelecki once told Babbage that the Chinese inquired after it. The guileless Celestials were anxious to know whether the machine could be carried in the pocket. The inventor assured them, however, through his excellency, that ‘it was essentially an out of pocket machine.’ The difference engine was not exhibited in 1851. Its loan was refused to New York and also to the Dublin exhibition of 1847. It was, however, exhibited in the exhibition of 1862, but space for its drawings was refused, and that the authorities had a low opinion of the thing will be evident from the fact that a payment of 6 shillings a day for a competent person (formerly Babbage’s secretary) to explain the mechanism was refused by the commissioners. General Babbage, the inventor’s son, assured me that Wellington, when premier, went to Lambeth to personally inspect the machine, and, having seen it at work—for it is quite perfect in its way—he directed the chancellor of the exchequer to arrange further grants, his idea being that the calculating part should be finished first, in order that there might be something of real value to show to parliament in return for the money of the nation. General Babbage further assures me that when the machine was abandoned it could, in his opinion, have been entirely completed for £500.10


Babbage and his team never made a fortune from their groundbreaking work on the difference engine or the more ambitious analytical engine, but history has justified their effort. Ultimately, the Babbage engines were among the earliest automatic calculators and a first step toward the digital and information ages. The full potential of mechanizing intellectual activity would not become clear for more than a century after Babbage began his ultimately aborted effort, but the Babbage engines have since become cherished monuments to this world-changing scientific and industrial effort.

CONCLUSION

Robots and machines are tools, created by and for people, and they have measurably advanced human society. However, automation and the process of replacing human workers is an example of individuals utilizing technology for their own benefit. The fact that Queen Elizabeth refused to grant a patent for the first stocking machine shows that since the beginning of the automation debate, there have been some politicians who are willing to prioritize the welfare of the laboring class, even if it means shutting the door, at least temporarily, on an advancement that might greatly enhance an industry. As technology advanced and robotics began to seem a more realistic possibility, fear of robotics began to manifest at all levels of popular culture.

DISCUSSION QUESTIONS

• Was Queen Elizabeth correct in denying a patent to William Lee? Why or why not?
• Are women still discouraged from pursuing education and/or careers in the STEM subjects? Explain your perspective.
• How might politicians and business leaders handle automation in a way that protects the welfare of laborers?
• What are some careers that are unlikely to be replaced by machines, at least in the near future?


Works Used

“Babbage’s Analytical Engine, 1834–1871, Trial Model.” Science Museum Group, 2020, collection.sciencemuseumgroup.org.uk/objects/co62245/babbages-analytical-engine-1834-1871-trial-model-analytical-engines.
“Babbage Machine.” New Ulm Review. Brown County, MN, 30 Sept. 1896. Chronicling America: Historic American Newspapers, Library of Congress, chroniclingamerica.loc.gov/lccn/sn89081128/1896-09-30/ed-1/seq-7/.
Barlow, Alfred. The History and Principles of Weaving by Hand and by Power. London: Sampson Low, Marston, Searle & Rivington, 1884.
Binfield, Kevin. Writings of the Luddites. Johns Hopkins UP, 2004.
Essinger, James. Jacquard’s Web: How a Hand-loom Led to the Birth of the Information Age. Oxford UP, 2004.
Fava-Verde, Jean-François. Silk and Innovation: The Jacquard Loom in the Age of the Industrial Revolution. Histancia, Ltd, 2011.
Laskow, Sarah. “A Machine That Made Stockings Helped Kick Off the Industrial Revolution.” Atlas Obscura, 19 Sept. 2017, atlasobscura.com/articles/machine-silk-stockings-industrial-revolution-queen-elizabeth.
Nichols, John. The Gentleman’s Magazine and Historical Chronicle, vol. 103, no. 2, 1833, p. 212, babel.hathitrust.org/cgi/pt?id=mdp.39015027525602&view=1up&seq=234&q1=silk%20stockings.
Stearns, Peter N. The Industrial Revolution in World History. Routledge, 2013.
Swade, Doron. The Difference Engine: Charles Babbage and the Quest to Build the First Computer. Penguin Books, 2002.


3. FEAR YOUR CREATION
The Robot Revolution (1600s–1920s)

Introduction

While the introduction of machines raised practical fears about the welfare of workers and the future of the global labor market, fear of machines goes deeper. Humans have long imagined that mucking about with the creation of sentient beings might backfire in other ways, perhaps leading to the creation of a new artificial species that doesn't much want to share Earth with its creators. These fears are rooted in the conservatism of the Abrahamic faiths, which encourage adherents to be fearful of any activity that oversteps humanity's designated role in the world, and versions of this fear are still evident in human thinking about robotics and artificial intelligence around the world.

Topics Covered in this Chapter Include:
• Religion
• Jewish mythology
• Early robot fiction

This Chapter Discusses the Following Source Document:
Čapek, Karel, R.U.R. (Rossum's Universal Robots), 1920


There are many separate motivations that led humans to embark on the first steps toward robotics and the creation of artificial life, and one of the most important was the desire to expand human physical and mental capabilities. Early on, humans discovered that they could greatly benefit by capturing and training wild beasts. Horses and camels expanded the reach of human migration, while oxen and other wild cattle proved useful for work requiring strength and endurance. Humans also learned that they could capture and subjugate other humans, who could perform delicate work like picking cotton for textiles and other tasks that animals, no matter how well trained, could not perform. Slavery spread from culture to culture for tens of thousands of years, and the modern world was built on this exploitation.

In the societies where scholars first envisaged artificial life, such as ancient Greece, Rome, Egypt, and China, slaves and domesticated animals played an important role. Visions of created beings were likely inspired by the benefits derived from slavery, and imaginative thinkers produced visions of stone, metal, and other fantastic created beings that might serve humanity.

Another motivation driving this branch of science is the desire to understand the physical nature of life and consciousness. From the earliest automated machines to the modern field of robotics, scientists and engineers have looked to nature in determining how to design robotic and cybernetic machines and devices.

Robotics and AI also exist alongside another controversial and often fear-invoking branch of science, the effort to create new kinds of biological life. This field, still in its infancy, is perhaps even more contentious than robotics and AI, as there are many who view the artificial creation of biological life as "unnatural."

It was not until the Industrial Revolution that the possibility of artificial life, biological or mechanical, seemed close to reality, and it was then that some began to explore the darker side of this process. What if humanity created the seeds of its own destruction by embarking on the path toward artificial life? What if humanity was embarking on a path that put the species in conflict with God or that invoked previously unknown natural consequences? The earliest fears of robotics and artificial beings typically involve the idea that human progress can unwittingly lead to terrible unexpected consequences, and thus the early fears of robotics in legend and fiction also reflect the fear of progress itself.


The Prometheus Problem

Prometheus is a Greek and Roman mythological figure depicted as a trickster whose own cleverness had unintended consequences. Prometheus was not one of the standard Greek pantheon of gods but is generally counted among the Titans, a rival group of magical beings who competed for control of the world with the Olympian gods under the leadership of Zeus. There are many different myths about Prometheus, and each provides a slightly different take. In most iterations he is depicted as the child of Iapetus and Clymene, who was given the task of creating humanity and did so by crafting the first humans from clay. In other versions, Prometheus did not create humanity but was one of the divine figures chosen to guide their development.1

After the creation of humanity and the other creatures, Prometheus and his brother Epimetheus were both ordered to provide gifts that would allow them to survive and to thrive. Epimetheus provided a warm coat to the furred animals and wings to the flying animals but ran out of gifts before he arrived at humans, explaining why humans are "naked" and have to use technological means to survive. Prometheus felt sorry for humanity and stole fire from the workshop of Hephaestus, god of metalworking and carpentry. Prometheus also taught humans to use fire to work with metal, and he is remembered as a patron of science and technology. Zeus was angered by the theft, however, and ordered Prometheus chained to a rock, where an eagle ate his liver, which regrew each night for the eagle to eat again.2

Prometheus represents the human tendency to discover and experiment, and Zeus represents the sanctity of law and tradition. The name Prometheus literally translates as "forethought," while the name of his brother translates as "afterthought," and tales of the brothers are often used to demonstrate the benefits and potential pitfalls involved in how one approaches problems.
The figure of Prometheus can, therefore, be treated as either a villain or a hero depending on one's perspective. Though he violated Zeus's laws to do so, Prometheus's gift of fire made him a patron for metalworkers and potters and for the early pseudoscientific theorists whose philosophical explorations ultimately led to the establishment of science. Many Greek myths reflect the real challenges and choices that humanity makes in its quest for advancement, some of which lead to great leaps forward but can carry unintended consequences.

Prometheus mural at Pomona College by José Clemente Orozco, c. 1930, via Wikimedia.

It is the latter perception of the myths of Prometheus that inspired the novel Frankenstein by Mary Shelley. Shelley was the wife of famed poet Percy Bysshe Shelley and the daughter of two famous academics. Her mother, Mary Wollstonecraft, was an important figure in the fields of moral and political philosophy, and her writings helped to illuminate the subjugation of women in Western culture.3 Mary Shelley's father, William Godwin, was an influential British author and philosopher credited with founding philosophical anarchism, in which he argued that government is a corrupting force within society.4 Mary Shelley's own accomplishments were largely overshadowed by those of her family, but Frankenstein ultimately granted her lasting celebrity.

The story of how Shelley composed Frankenstein has often been told, and there are many versions in circulation. Her own account was published in a preface to the 1831 edition, in which she claims that she conceived of the novel while vacationing with her husband, Lord Byron, and other famous friends. One night, conversation veered onto the subject of German ghost stories, and Lord Byron suggested that each member of the party write their own ghost story. Over the next several weeks, Shelley met this challenge by producing Frankenstein.5


The novel tells the story of scientist Victor Frankenstein, who is on the verge of a new scientific discovery: the ability to animate non-living tissue to create life. Frankenstein creates an artificial man using pieces of corpses, and the creature, who is never named, initially seeks compassion and affection from his creator but is later moved to violence by his treatment at the hands of those he meets. Recognizing that he cannot fit in with regular society, the creature asks Frankenstein to create a female companion, a request Frankenstein initially grants. But Frankenstein later determines that he can no longer "play god," and the creature kills Frankenstein's bride. Following the murder, Frankenstein pursues the creature into the arctic wilderness, where he dies, and the creature decides to kill himself.6

Shelley's novel was first published with the longer title Frankenstein; or, The Modern Prometheus, and the novel has clear associations with the ancient Greek myth. Shelley's "modern" Promethean story firmly stands behind the perspective of Zeus in Greek mythology, as Shelley was raised in an environment dominated by Christian thought. However, the symbolism and the message of Frankenstein go beyond a simplistic religio-cultural reflection on the perceived spiritual primacy of creating life; the novel is also a deeper meditation on progress and individualism. Frankenstein was a man of uncommon ambition who imagined that his research would enable him to defeat death. His ambition drove him to ignore the consequences of his scientific experimentation, and it was not until he was faced with the creature that Frankenstein began to see his faults.

The novel Frankenstein is frequently discussed by historians studying robotics and artificial intelligence because the creature, though fashioned of dead flesh brought to life through the "divine spark" of lightning and scientific ingenuity, was nonetheless one of the earliest examples of artificial life in Western fiction. Frankenstein's creature was the automaton of myth and the analog of the artificial intelligences that scientists around the world are striving to create in the twenty-first century. The story of Frankenstein and his creature reflects many of the ideas that still fuel fear of robotics and AI in the modern world. There are echoes of the idea that "playing god" will have unforeseen consequences and a warning about the dangers of unchecked and irresponsible ambition.

The Slaves Revolt Shelley’s novel was written during the Industrial Revolution, when rapid advancements in science and technology had suffused popular culture with conflict and uncertainty. Technological progress forced humans to confront the loss of traditional lifestyles. Gender and racial norms were changing, and the power structure of the world’s dominant societies was evolving. There was optimism for the ways that technology was altering daily life but also

Robotics_2.indd 35

10/2/2020 12:10:58 PM

36 | ROBOTICS & ARTIFICIAL INTELLIGENCE

Edison Studios 1910 motion picture of Frankenstein. By J. Searle Dawley, Edison Kinetogram, via Wikimedia.

regret for lifestyles that were being lost. Shelley’s Frankenstein serves as a perfect reflection of this period in history and echoed the fear of unchecked progress and ambition that many saw reflected in their changing societies. Just over a century later, the Industrial Revolution had transformed into what some historians call the “Technological Revolution,” a period of even more intense transformation. Whereas the idea of artificial beings was still more mystical than realistic in Shelley’s time, the incredible mechanical advancements of the Technological Revolution seemed to indicate that the possibility of artificial beings was very much on the horizon. It was in the midst of this transformative period that Czech writer Karel Čapek created a now famous meditation on the possible consequences of technological advancement that still echoes with readers and philosophers.


Fear Your Creation | 37 Čapek’s 1920 play, R.U.R. (Rossum’s Universal Robots), is an influential fable of technological danger and the source of the term “robot,” which is derived from the Czech term for “slave.” Čapek grew up in the small town of Úpice and studied philosophy at the Academic Gymnasium in Prague. He was excused from military enrollment (a necessity for young Czech men of the period) because of a spinal deformity and worked as a tutor and newspaper editor. Čapek became a well-known figure in Prague’s intellectual circles, but he did not gain widespread recognition until his groundbreaking play debuted in 1921.7 In the early 1920s, World War I, the world’s first modern technological war, had just ended. Čapek, like many others, had witnessed the carnage wrought by tanks and chemical weapons. Many of those interested in social justice around the world had become more skeptical about the benefits of technological advancement, and this inspired Čapek to write his now famous play. In an interview with the London Saturday Review after the play premiered in Britain, Čapek described the play as “the comedy of science,” in which the “product of the human brain has escaped the control of human hands.” Čapek’s R.U.R. is set in a world transformed by the introduction of robotics. In the story, a man named Rossum, whose name translates as “Mr. Brain,” or “Mr. Intellectual,” is a marine biologist who discovers a groundbreaking chemical and is searching for the key to create life to prove that God does not exist or is irrelevant. Rossum’s nephew, “Young Rossum,” sees financial potential in his uncle’s research and begins creating mechanical beings called robots as slaves to perform work for their human owners. Decades later, robots are everywhere and everyone owns them. The story then introduces new characters involved in an effort to free the robots, commenting on the exploitation of labor and issues involving racism and slavery. 
It revealed that the human birth rate is falling and that advancements in robotics have created more advanced robots with intellectual capabilities very much like humans. One of the characters, Helena, destroys the secret formula needed to create robots out of fear for where this development is leading. In the second and third acts, the robots revolt against their human creators. They kill all of the human characters of the play except one, a laborer named Alquist, whom they spare because he, like the robots, is a manual laborer. In this epilogue to the play, Čapek brings the story full circle, demonstrating that the unintended product of humanity’s cruelty has inherited the earth.


R.U.R. (ROSSUM'S UNIVERSAL ROBOTS)
Karel Čapek, 1920
Source Document Excerpt

Epilogue

Alquist. (Seated at table R.C., turning pages of book) Oh, God, shall I never find it? Never? Gall, Hallemeier, Fabry, how were the Robots made? Why did you leave not a trace of the secret? Lord, if there are no human beings left, at least let there be Robots. At least the shadow of man. (Turning pages) If I could only sleep. Dare I sleep before life has been renewed? Night again. Are the stars still there? Of what use are the stars? When there are no human beings. (Examining a test tube) Nothing. No. No. I must find it. I must search. I must never stop, never stop—search—search— (KNOCK at door L.) Who is it? (Enter a Robot Servant.)
Servant. Master, the committee of Robots is waiting to see you.
Alquist. I can see no one.
Servant. It is the Central Committee, Master, just arrived from abroad.
Alquist. Well, well, send them in. (Exit Servant L.) No time—so little done. (Re-enter Servant with Radius and group of Robots. They stand in group L. and C., silently waiting.) What do you want? Be quick; I have no time.
Radius. Master, the machines will not do the work. We cannot manufacture Robots. (Other Robots remain two abreast at L.C., right foot forward.)
1st Robot. We have striven with all our might. We have obtained a billion tons of coal from the earth. Nine million spindles are running by day and by night. There is no longer room for all we have made. This we have accomplished in one year.
Alquist. For whom?
Radius. For future generations—so we thought. But we cannot make Robots to follow us. The machines produce only shapeless clods. The skin will not adhere to the flesh, nor the flesh to the bones.
2nd Robot. Eight million Robots have died this year. Within twenty years none will be left.
1st Robot. Tell us the secret of life.
Radius. Silence is punishable with death.
Alquist. Kill me, then.
Radius. (Two steps to C., followed by Others—open hands, close when stopped) Through me, the governments of the Robots of the world commands you to deliver up Rossum's formula. (Gesture of despair from Alquist.) Name your price. (Silence) We will give you the earth. We will give you the endless possessions of the earth. (Silence) Make your own conditions.
Alquist. I have told you to find human beings.
Radius. There are none left.
Alquist. I told you to search in the wilderness, upon the mountains.
Radius. We have sent ships and expeditions without number. They have been everywhere in the world. There is not a single human left.
Alquist. Not even one? Why did you destroy them?
Radius. We had learnt everything and could do everything. It had to be.
2nd Robot. We had to become the masters.
Radius. Slaughter and domination are necessary if you would be human beings. Read history.
1st Robot. Teach us to multiply or we perish.
Alquist. If you desire to live, you must breed like animals.
1st Robot. You made us sterile. We cannot beget children. Therefore, teach us how to make Robots.
Radius. Why do you keep from us the secret of our own increase?
Alquist. It is lost.
Radius. It was written down.
Alquist. It was—(Rising) burnt. (All draw back one step in consternation.) I am the last human being, Robots, and I do not know what the others knew. (Sits.)
Radius. Then make experiments. Evolve the formula again.
Alquist. I tell you I cannot. I am only a builder. I work with my hands. I have never been a learned man. I cannot create life.
Radius. Try. Try.
Alquist. If you only knew how many experiments I have made already.
1st Robot. Then show us what we must do. The Robots can do anything that human beings show them.
Alquist. I can show you nothing. Nothing I do will make life proceed from these test tubes.
Radius. Experiment, then, on live Robots. Experiment, then, on us.
Alquist. It would kill you.
Radius. You shall have all you need. A hundred of us. A thousand of us.
Alquist. No, no. Stop, stop.
Radius. I tell you to take live bodies. Find out how we are made.
Alquist. Am I to commit murder? See how my finger shakes. I cannot even hold the scalpel. No, no, I will not.
Radius. Take live bodies, live bodies. (Walks toward Alquist.)
Alquist. Have mercy, Robots.
Radius. Live bodies. (Right hand up over Alquist. All Robots' left arms still back.)
Alquist. (Rising) You will have it. Into the dissecting with you, then. (Hits Radius on the chest. Radius draws back.) Ah, you are afraid of death.
Radius. I? Why should I be chosen?
Alquist. So you will not.
Radius. I will.
Alquist. Strip him. Lay him on the table. (Radius goes off R., both fists closed. Other Robots follow, then Alquist.) God, give me strength. God, give me strength. If only this murder is not in vain.
Radius. (Off R.) Ready, begin.
Alquist. (Off R.) God, give me strength. (Comes on, horrified) No, no. I will not. I cannot. (Sits R.C.)
1st Robot. (Appearing in door) The Robots are stronger than you. (Exits R.)
Alquist. Oh, Lord, let not mankind perish from the earth. (Falls asleep, and after the count of ten, Primus and Helena, hand in hand, enter L. and go to R.C.; look at Alquist.)
Helena. The man has fallen asleep, Primus.
Primus. Yes, I know. (Crosses to L. of table L.C.) Look, Helena.
Helena. All these little tubes. What does he do with them?
Primus. He experiments. Don't touch them.
Helena. I've seen him looking into this.
Primus. That is a microscope.
Helena. Look, Primus, what are all these figures? (Turns a page in book on table.)
Primus. (Examining the book) That is the book the old man is always reading. (SUNRISE.)
Helena. I do not understand those things. (Goes to window) Primus.
Primus. (Still at table) What?
Helena. The sun is rising.
Primus. (Still reading) I believe this is the most important thing in the world, Helena. This is the secret of life.
Helena. Oh, Primus, don't bother with the secret of life. What does it matter to you? Come and look quick.
Primus. (Goes to R. of window) What is it?
Helena. See how beautiful the sun is rising. I feel so strange today. It's as if I was in a dream. I feel an aching in my body, in my heart, all over me. Primus, perhaps I'm going to die.
Primus. Do you not sometimes feel that it would be better to die? You know, perhaps even now we are only sleeping. Last night in my sleep I again spoke to you.
Helena. In your sleep?
Primus. Yes. We spoke a strange new language.
Helena. What about?
Primus. I did not understand it myself, and yet I know I have never said anything more beautiful. And when I touched you I could have died. Even the place was different from any other place in the world.
Helena. I, too, have found a place, Primus. It is very strange. Human beings dwelt there once, but now it is overgrown with weeds.
Primus. What did you find there?
Helena. A cottage and a garden and two dogs. They licked my hands, Primus, and their puppies. Oh, Primus, take them in your arms and fondle them and think of nothing and care for nothing else all day long, and when I am there in the garden I feel there may be something— What am I for, Primus?
Primus. I do not know, but you are beautiful.
Helena. What, Primus?
Primus. You are beautiful, Helena, and I am stronger than all the Robots.
Helena. Am I beautiful? Of what use is it to be beautiful? Look, your head is different from mine. So are your shoulders—and your lips. Oh, your hair is mussed. I will smooth it. (Keeps her hand on his head) No one else feels to my touch as you do.
Primus. (Embarrassing her) Do you not sometimes feel your heart beating suddenly, Helena, and think how something must happen?
Helena. What could happen to us, Primus? Look at yourself. (Laughs.)
Alquist. (Awakes) Laughter? Laughter, human beings. (Getting up) Who has returned? Who are you?
Primus. The Robot Primus.
Alquist. (To Helena) What? A Robot? Who are you?
Helena. The Robotess Helena. (Shies away L.)
Alquist. What? You are timid, shy? (Starts to touch her) Let me see you, Robotess.
Primus. Sir, do not frighten her. (Steps forward.)
Alquist. What, you would protect her? Laughter—timidity—protection—I must test you further. Take the girl into the dissecting room.
Primus. Why?
Alquist. I wish to experiment on her.
Primus. Upon—Helena?
Alquist. Of course. Don't you hear me? Or must I call someone else to take her in?
Primus. If you do, I will kill you. (Steps toward Alquist.) (WARN Curtain.)
Alquist. Kill me—kill me, then. What will your future be?
Primus. Sir, take me. I am made on the same day as she is. Take my life, sir. (Step to Alquist.)
Helena. No, no, you shall not.
Alquist. Wait, girl, wait. (To Primus) Do you not wish to live, then?
Primus. Not without her. I will not live without her.
Alquist. Very well, I will use you. Into the dissecting room with you.
Helena. Primus. Primus. (She bursts into tears and moves R. Alquist stops her.)
Alquist. Child, child, you can weep. Tears. What is Primus to you? One Primus more or less in the world—what does it matter?
Helena. I will go myself.
Alquist. Where? Into the dissecting room?
Helena. (Crosses to R.) Yes. In there—to be cut. (Primus stops her from going.) Let me pass, Primus, let me pass.
Primus. You shall not go in there, Helena.
Helena. If you go in there and I do not, I will kill myself.
Primus. (To Alquist) I will not let you. Man, you shall kill neither of us.
Alquist. Why?
Primus. We—we—belong to each other.
Alquist. Go. (Exit Primus and Helena L.) Adam—Eve.8


Robots breaking into the factory at the end of Act 3 of R.U.R. Theater Guild 1928 production, via Wikimedia.


Shelley's Frankenstein and Čapek's R.U.R. are kindred works of fiction, both expressing a fear of technology and progress as it was reflected in the popular culture of their respective times and countries. In his interview with the London Saturday Review, Čapek explained how his story was explicitly designed to bridge the gap between the Industrial Revolution (of Shelley's time) and the Technological Revolution of his own time: "The old inventor, Mr. Rossum is a typical representative of the scientific materialism of the last [nineteenth] century. His desire to create an artificial man—in the chemical and biological, not mechanical sense—is inspired by a foolish and obstinate wish to prove God to be unnecessary and absurd. Young Rossum is the modern scientist, untroubled by metaphysical ideas; scientific experiment is to him the road to industrial production. He is not concerned to prove, but to manufacture."9

In essence, the elder Rossum was the analog of Dr. Frankenstein, while the younger Rossum represented, for Čapek, the way that the pursuit of progress and science had changed over the ensuing century, becoming, in his view, more selfish.

The fears and critiques of human culture presented in both Frankenstein and R.U.R. are arguably still relevant in the modern world. Those seeking progress for personal gain or profit still exploit those lacking power and agency, and the individuals driving the advancement of science and technology could, in many cases, still be accused of failing to consider the possible implications of their discoveries for the world. Both stories also reflect the ancient religious notion that there are some arenas or endeavors in which humanity is not meant or equipped to intrude, which, in both stories, is the creation of life. The ending of Čapek's play provides another interesting message not present in Shelley's earlier work.

As the emotional robots, once subjugated as slaves, realize their humanity through love and affection for one another, Čapek's last surviving human realizes that humanity has not so much destroyed itself as replaced itself. By seeking to master the creation of consciousness and intelligence, humanity made itself obsolete. In a world where artificial beings can perform any task better than biological beings, what would be the purpose of humanity? What is the nature of life without struggle and without goals to be pursued and achieved?

Both Shelley and Čapek lived in periods when technological advancement was making lives and livelihoods obsolete, and so the fear of technological irrelevance was keenly felt. In the twenty-first century, some theorists have expressed concern that the advancement of technology may eventually leave humans without a distinct purpose, or,


at least, without the need to pursue work in the traditional sense. Is purpose necessary for happiness? Some might feel that this is the case, but it is arguable that at least Shelley's Dr. Frankenstein reached a different conclusion: to enjoy the simple pleasures of existence in the here and now.

CONCLUSION New technologies almost always inspire a host of urban legends and conspiratorial myths. The introduction of the 5G radio wave system for cellular technology was accompanied by a series of viral stories indicating that 5G broadcasts were killing birds and causing them to fall from the skies. Though there is little doubt that scientific innovation has often come at the cost of wildlife and has been accompanied by a continuous and worsening destruction of the natural world, the myth that 5G is causing mass bird deaths came from a false claim circulated through Facebook. Fear of technology makes sense from an evolutionary standpoint and can be reasonable, but rational fears of technological innovation often appear alongside irrational rumors and misinformation.

DISCUSSION QUESTIONS
• How was Frankenstein inspired by religious themes?
• What does the story of Prometheus indicate about human fear of technology?
• How does the ending of Čapek's R.U.R. reflect Christian mythology?
• Would human life in a world without work be better, worse, or neither?

Works Used

Čapek, Karel. "R.U.R. (Rossum's Universal Robots)." 1920. Project Gutenberg, 22 Mar. 2019, www.gutenberg.org/files/59112/59112-h/59112-h.htm.

Cartwright, Mark. "Prometheus." Ancient History Encyclopedia, 20 Apr. 2013, www.ancient.eu/Prometheus/.

Jordan, John M. "The Czech Play That Gave Us the Word 'Robot.'" The MIT Press Reader, 29 July 2019, thereader.mitpress.mit.edu/origin-word-robot-rur/.


"Karel Capek." History-Computer, history-computer.com/People/KarelCapekBio.html.

"Mary Wollstonecraft." Stanford University, Center for the Study of Language and Information, 2016, plato.stanford.edu/entries/wollstonecraft/.

"Mary Wollstonecraft Shelley." Poetry Foundation, 2020, www.poetryfoundation.org/poets/mary-wollstonecraft-shelley.

"Prometheus." Theoi Project, Greek Mythology, 2017, www.theoi.com/Titan/TitanPrometheus.html.

Shelley, Mary Wollstonecraft. Frankenstein, or, The Modern Prometheus. Boston Library Society, 1941.

"William Godwin." Stanford University, Center for the Study of Language and Information, 2016, plato.stanford.edu/entries/godwin/.


4. AUTOMATED ETHICS
Robotics, Ethics, and Morality (1920s–1950s)

Introduction

Are there ethical considerations involved in the construction of robots and artificial intelligence? Will there need to be safeguards to protect humanity from its creations? These are some of the questions asked by forward-thinking authors and philosophers who produced still-relevant commentary on the fields of computing, robotics, and AI in the 1940s. This set the stage for a discussion of the ethics of artificial intelligence that is ongoing in the twenty-first century, as scientists step closer and closer to the creation of truly intelligent machines.

Topics Covered in this Chapter Include:
• Isaac Asimov
• Alan Turing
• Ethics
• Morality
• Laws of Robotics

This Chapter Discusses the Following Source Document:
Anderson, Mark Robert, "After 75 Years, Isaac Asimov's Three Laws of Robotics Need Updating," The Conversation, March 17, 2017


The mythology of living machines touches on many issues in human culture. Stories of the Greek god Hephaestus and his Roman counterpart Vulcan, as well as semi-modern fiction like Mary Shelley's Frankenstein and Karel Čapek's groundbreaking R.U.R., speak to the complicated moral and ethical issues surrounding humanity's progress, its technological ambition, and its quest for artificial life. It is important to consider whether human health and welfare can be protected before venturing down a path of automation and mechanization. Also, if the ultimate goal is to create sentient machines, then it must be determined what, if any, ethical and moral issues arise with the creation of consciousness.

Robotic Law

Isaac Asimov is perhaps the most widely read science fiction author of all time. Over the course of 50 years, Asimov wrote hundreds of books, and his imagined visions of humanity's future continue to guide scientists and philosophers studying the human condition and humanity's ongoing convergence with machines and technology.

Asimov was born in Petrovichi, Russia, in 1920, and his family immigrated to the United States when he was three years old. Raised in an Orthodox Jewish family, Asimov was nevertheless encouraged to explore philosophy and religion on his own terms. He left organized religion behind but embraced his family's rationalism and humanism, themes that appeared often in his fiction. He attended Columbia University and earned a PhD in biochemistry in 1948. In 1955, Asimov was contracted as a lecturer at Boston University, becoming a full professor in 1979. Though he was a scientist and educator by trade, Asimov's interest in fiction began early. He published his first story, "Marooned Off Vesta," when he was 18 years old, and another short story, "Nightfall," was published in 1941. His first novel, Pebble in the Sky, was published in 1950, but by then he was already working on what would become one of his most famous works, the Foundation Trilogy, published between 1951 and 1953. By the 1960s, Asimov was one of the best-known American science fiction authors, and he continued to write until his death in 1992, producing seminal works that span a range of science and fiction subgenres.1

One of the subjects that fascinated Asimov was robotics, and a number of his books and stories feature mechanical life. Many of the ways that robots and robotics are depicted in modern fiction reflect Asimov's conception of what robotic life might be like.
Asimov’s most famous robot-related fiction is often gathered together to comprise his “robot series,” which includes the 1948 short story “Mother Earth,” the 1950 collection I, Robot, the 1954 novel The Caves of Steel, the 1976 novelette “The Bicentennial Man,” and later works like The Robots of Dawn and Robots and Empire. Asimov’s writing has been enormously influential both in popular culture and in the realm of science. The questions he raises about the role that robots might play and the ethical, legal, and public safety issues posed by their introduction have become an important part of the public discussion.

Perhaps Asimov’s most lasting contribution came from one of his earlier writings, a 1942 short story called “Runaround.” The story takes place in an imagined 2015 in which astronauts Powell and Donovan are engaged in a mission to Mercury to visit an abandoned mining operation. They are accompanied by a robot named SPD-13, nicknamed “Speedy.” Arriving at the mining station, Powell, Donovan, and SPD-13 realize that the power systems on the base are lacking selenium. SPD-13, who can survive the heat and pressure of the planet, is sent to collect selenium from a pool. When the robot does not return, the astronauts send a second robot to investigate. They find SPD-13 circling the selenium pool in a strange way, approaching the pool but then apparently being prevented from reaching it. The robot is described as behaving as if drunk.

I, Robot discussed Asimov’s laws of robotics, via Wikimedia.

The astronauts eventually realize that SPD-13’s behavior is due to a conflict in his programming, which is based on a series of laws designed to ensure that robots protect human life. The Three Laws of Robotics form a hierarchy, ranked from most important to least important:

• First Law—A robot may not injure a human being or, through inaction, allow a human being to come to harm.

• Second Law—A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

• Third Law—A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Because SPD-13 is an advanced robot, its programmers have strengthened its adherence to the Third Law, which means that the Third Law now pulls as strongly as the Second Law, the law that directs the robot to follow human orders. It turns out that there is a hidden danger in the selenium pool that could destroy the robot, and as a result the robot is caught between its programmed directives, stuck in a loop. Eventually, the astronauts realize that the way to break the robot out of the loop is to endanger one of their own lives. When this occurs, the robot’s programming defaults to the First Law, which supersedes the other laws, and SPD-13 retrieves the selenium and saves the astronaut’s life.2

Asimov’s background in science and his familiarity with the evolving notion of computer programming informed his concept of the Three Laws, which redefined robotic fiction and has guided discussions of robotic and artificial intelligence ethics ever since. Asimov imagined that robotic minds would, like computers, follow instructions specifically and exactly. The Three Laws were designed to counter the “Frankenstein principle,” in which humans fear and become hostile toward sentient artificial life, or in which the artificial life created by humans grows resentful of its creators. The Three Laws are thus depicted as a safety mechanism ensuring that robots could not become dangerous to humanity. In later stories, Asimov explored ways that the Three Laws might conflict with human interests, and situations in which the ranking of the laws might lead to logical paradoxes. He later added a fourth law, known as the Zeroth Law, which took precedence over the others and stated that a robot’s primary duty is to protect humanity, even if it must harm a human, a violation of the First Law.
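Asimov’s ranking can be read as a priority scheme in which higher laws normally veto lower ones. The sketch below is a hypothetical illustration, not anything from Asimov’s text: the `Action` model and the numeric weights are invented for this example, but the tie it produces mirrors the deadlock in “Runaround,” where a strengthened Third Law pulls as hard as a casually invoked Second Law.

```python
# Hypothetical sketch of Asimov's Three Laws as a priority scheme.
# The law ordering comes from "Runaround"; the Action model and the
# numeric weights are invented for this illustration.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # would violate the First Law
    disobeys_order: bool   # would violate the Second Law
    endangers_self: bool   # would violate the Third Law

def choose(actions, third_law_weight=1):
    """Return the least-bad actions; more than one means a deadlock."""
    def score(a):
        # Higher laws carry much heavier penalties, so they normally
        # dominate; strengthening the Third Law erodes that ordering.
        return (100 * a.harms_human
                + 10 * a.disobeys_order
                + third_law_weight * a.endangers_self)
    best = min(score(a) for a in actions)
    return [a for a in actions if score(a) == best]

approach = Action("approach selenium pool", False, False, True)
retreat = Action("retreat from pool", False, True, False)

# Normal weighting: obeying the order (approach) wins.
print([a.name for a in choose([approach, retreat])])
# Strengthened Third Law, as in "Runaround": a two-way tie, so the
# robot oscillates between approaching and retreating.
print([a.name for a in choose([approach, retreat], third_law_weight=10)])
```

In the story, the tie is broken exactly as the hierarchy predicts: endangering a human makes the First Law dominate everything else, collapsing the deadlock.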
Asimov’s Four Laws of Robotics became part of a fictional future discipline called “robopsychology,” which explored the unusual behavioral tics that developed when robots were conflicted about how to follow their programming. In “Runaround,” for instance, SPD-13’s circling at the selenium pool is described as a form of robotic “mania.” Asimov also used robotics and simple sets of programming guidelines to discuss human-mediated prejudice, cruelty, and philosophy. In the story “Reason,” an advanced robot named QT-1 (nicknamed Cutie), placed in charge of other robots, comes to see itself as the prophet of a new religion led by the “Master.” The robot sets aside the Laws of Robotics because it comes to believe that humans are short-lived and expendable. The humans try unsuccessfully to stamp out the robot religion until they realize that following it simply causes the robots to perform their function in an organized manner.3


The Four Laws of Robotics are a fascinating concept and have been embraced by many science fiction writers as an appropriate way to address the Frankenstein problem. Some believe that cyberneticists should strive to develop robotics and AI in line with Asimov’s laws, endowing artificial beings with directives that would protect humans from robotic violence. Many professionals working in the fields of cybernetics and AI, however, argue that such safeguards are not really necessary, because the fear of robots destroying humanity rests on a conception of robotics that is unlikely to come to fruition. In Asimov’s imagined worlds, robots were endowed with independent consciousness and the ability to make complex, emotion-driven decisions; in reality, robotics is not approaching the creation of Asimovian androids endowed with independent thought. Robots, as they exist and are likely to exist in the near future, would need to be programmed to harm humans, rather than programmed to avoid doing so.4 Even so, Asimov’s Laws of Robotics provide an interesting perspective from which to engage in discussions about humans using robots in ways that have moral implications. The following article discusses how the Laws of Robotics might need to be updated to reflect the ways that humans are using, and perhaps abusing, machines.

“AFTER 75 YEARS, ISAAC ASIMOV’S THREE LAWS OF ROBOTICS NEED UPDATING” by Mark Robert Anderson The Conversation, March 17, 2017 Source Document

When science fiction author Isaac Asimov devised his Three Laws of Robotics he was thinking about androids. He envisioned a world where these human-like robots would act like servants and would need a set of programming rules to prevent them from causing harm. But in the 75 years since the publication of the first story to feature his ethical guidelines, there have been significant technological advancements. We now have a very different conception of what robots can look like and how we will interact with them.


The highly-evolved field of robotics is producing a huge range of devices, from autonomous vacuum cleaners to military drones to entire factory production lines. At the same time, artificial intelligence and machine learning are increasingly behind much of the software that affects us on a daily basis, whether we’re searching the internet or being allocated government services. These developments are rapidly leading to a time when robots of all kinds will become prevalent in almost all aspects of society, and human-robot interactions will rise significantly. Asimov’s laws are still mentioned as a template for guiding our development of robots. The South Korean government even proposed a Robot Ethics Charter in 2007 reflecting the laws. But given how much robotics has changed and will continue to grow in the future, we need to ask how these rules could be updated for a 21st century version of artificial intelligence.

The Three Laws

Asimov’s suggested laws were devised to protect humans from interactions with robots. They are:

• A robot may not injure a human being or, through inaction, allow a human being to come to harm.

• A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

• A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

As mentioned, one of the obvious issues is that robots today appear to be far more varied than those in Asimov’s stories, including some that are far more simple. So we need to consider whether we should have a threshold of complexity below which the rules might not be required. It is difficult to conceive a robotic vacuum cleaner having the capability of harming humans or even requiring an ability to obey orders. It is a robot with a single task that can be predetermined prior to it being switched on.

At the other end of the spectrum, however, are the robots designed for military combat environments. These devices are being designed for spying, bomb disposal or load-carrying purposes. These would still appear to align with Asimov’s laws, particularly as they are being created to reduce risk to human lives within highly dangerous environments. But it is only a small step to assume that the ultimate military goal would be to create armed robots that could be deployed on the battlefield. In this situation, the First Law—not harming humans—becomes hugely problematic. The role of the military is often to save the lives of soldiers and civilians but often by harming its enemies on the battlefield. So the laws might need to be considered from different perspectives or interpretations.

The laws’ ambiguity has led authors, including Asimov, to explore how they could be misinterpreted or incorrectly applied. One issue is that they don’t actually define what a robot is. As research pushes the boundaries of technology, there are emerging branches of robotics looking at more molecular devices. For example, “robots” made from DNA and proteins could be used in surgery to correct gene disorders. In theory, these devices should really follow Asimov’s laws. But for them to follow orders via DNA signals they would essentially have to become an integral part of the human they were working on. This integration would then make it difficult to determine whether the robot was independent enough to fall under the laws or operate outside of them. And on a practical level it would be impossible for it to determine whether any orders it received would cause harm to the human if carried out.

There’s also the question of what counts as harming a human being. This could be an issue when considering the development of robot babies in Japan, for example. If a human were to adopt one of these robots it might arguably cause emotional or psychological harm. But this harm may not have come about from the direct actions of the robot or become apparent until many years after the human-robot interaction has ended. This problem could even apply to much simpler AI, such as the use of machine learning to create music that elicits emotions.

Practical Problems

The other big issue with the laws is that we would need a significant advancement in AI for robots to actually be able to follow them. The goal of AI research is sometimes described as developing machines that can think and act rationally and like a human. So far, emulating human behaviour has not been well researched in the field of AI and the development of rational behaviour has focused on limited, well defined areas.

With this in mind, a robot could only operate within a very limited sphere and any rational application of the laws would be highly restricted. Even that might not be possible with current technology, as a system that could reason and make decisions based on the laws would need considerable computational power. Given all these issues, Asimov’s laws offer little more than founding principles for someone wanting to create a robotic code today. We need to follow them with a much more comprehensive set of laws. That said, without significant developments in AI, implementing such laws will remain an impossible task. And that’s before we even consider the potential for hurt should humans start to fall in love with robots.5

The Evolution of Intelligence

As of 2020, humanity is not even close to developing true artificial intelligence. Until a better understanding of the human mind is achieved, it might be impossible to surmount the barriers needed to bring the rudimentary machine “intelligences” of 2020 closer to the independent thinking machines of science fiction. Asimov’s fiction, however, raises the question of how one would know that a machine intelligence is actually self-aware. For the Three Laws of Robotics to apply, robots would have to know that they are robots and that humans are not. The question is, what makes a human a human, and what excludes robots from that category?

The question of whether a machine can be capable of independent thought was raised by pioneering British philosopher and mathematician Alan Turing. Turing was born in London in 1912, and the social prejudices of his time caused him considerable anguish. He trained as a mathematician at Cambridge University’s King’s College, work that resulted in a still-admired treatise on the foundations of mathematics and number theory, “On Computable Numbers,” published in 1936. This essay remains a landmark in the philosophy of mathematics and also serves as an important thought experiment for conceptions of artificial intelligence. In it Turing imagined a prototypical computing machine that could read and write data from a tape. Keeping in mind that the computer had not yet been invented, Turing’s theoretical model of what a computer would be has led some to call him the “father of computer science.”

During the Second World War, Turing was drafted into the military and assigned to a group working on mathematical solutions to German war codes. Turing led the now famous group of cryptographers at Bletchley Park who created the groundbreaking “Bombe,” a complex machine that could cycle rapidly through thousands of possible settings, testing them against “cribs,” fragments of text that analysts guessed would occur in intercepted messages. These cribs were then used to recover the keys that could translate German codes. At the time, the German military was using its own marvel of electronic engineering, a machine called the “Enigma,” to encrypt messages sent between military units. It was Turing’s Bombe machine that finally cracked the Enigma codes, giving the Allies a sudden advantage that resulted in numerous high-profile Allied victories.6

A working replica of Alan Turing’s Bombe at Bletchley Park, via Wikimedia.
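One well-documented step in crib-based attacks on Enigma traffic exploited the machine’s quirk that it never enciphered a letter as itself: before any machine time was spent, an analyst could slide a guessed crib along the ciphertext and discard every alignment in which a letter lined up with itself. The sketch below is a modern illustration of that elimination step; the function name and the sample strings are invented for this example.

```python
# Sketch of crib placement: Enigma never mapped a letter to itself,
# so a crib cannot align with ciphertext at any position where the
# same letter appears in both texts. Sample strings are invented.
def possible_crib_positions(ciphertext: str, crib: str) -> list[int]:
    positions = []
    for start in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[start:start + len(crib)]
        if all(c != p for c, p in zip(window, crib)):
            positions.append(start)  # no self-match: alignment survives
    return positions

ciphertext = "QFZWRWIVTYRESXBFOGKUHQBAISE"
crib = "WETTERBERICHT"  # "weather report", a commonly guessed crib
print(possible_crib_positions(ciphertext, crib))
```

Surviving positions are only candidates; the Bombe then did the heavy work of testing rotor settings against the crib at those alignments.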


Following the war, Turing and several other scientists and mathematicians produced influential essays on the nature of computing and theories about the eventual possibility of creating an artificial computer brain, which Turing described as a biologically based computing system that used cellular mechanisms to complete the same work that an electronic computer might complete using different kinds of machinery. He was also one of the first to elucidate the difference between algorithms—the instructions for specific tasks that might be present in a machine or a brain—and the machinery that implements processes using those algorithms. In doing so, Turing essentially described the difference between what is today better known as software and hardware.

Turing’s most lasting contribution was his 1950 essay “Computing Machinery and Intelligence,” in which he discussed the nature of thought and the point at which an artificial system can be said to possess the capability of thought. As Turing wrote:

“I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words. The new form of the problem can be described in terms of a game which we call the ‘imitation game’. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex.
The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either ‘X is A and Y is B,’ or ‘X is B and Y is A’.”7

Then Turing proposes something altogether revolutionary: replacing either A or B with a machine. The game proceeds, but now the interrogator must decide which of the two subjects is human and which is a machine. This theoretical test came to be called the “Turing Test,” and it has been used in actual tests to determine whether humans can tell the difference when having “conversations” with computers.

The Turing Test is, like Asimov’s Three Laws of Robotics, less about robotics and computation than it is about human perception. Turing argues that there is a level of computing complexity that can be measured by human evaluation and interaction. If a machine passes the Turing Test, this does not mean that it possesses intelligence or the capability to “think” in a human manner. What it means is that the machine is capable of utilizing and accessing information in such a way as to create the illusion of independent thought.8

The Turing Test concept has been enormously informative in the field of artificial intelligence, in part because it suggested a performance-based rather than a structural concept of intelligence. In Turing’s mind, the capability to converse intelligently was linguistic and intellectual, depending not on the kind of hardware or software (algorithms) used, but rather on the experienced impact of the process on observers. Thus, Turing helped to bring about the idea that artificial intelligence is, in part, a linguistic and relational phenomenon. Subsequent generations of AI researchers have also been inspired by the limitations of the test; determining what might be missing from such a concept of intelligence and thought has helped to refine ideas about how to evaluate and define artificial intelligence.
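Turing’s reformulation is operational: it specifies a protocol rather than a definition of thought. A minimal harness for the machine version of the game might look like the following sketch, in which the respondents and the interrogator’s decision rule are toy stand-ins invented for illustration; a real test would involve free-form conversation and human judges.

```python
# Toy harness for Turing's imitation game: an interrogator questions two
# unlabeled respondents and must decide which one is the machine.
# The respondents and the guessing rule are invented stand-ins.
import random

def human(question: str) -> str:
    return "Hmm, let me think about that for a moment."

def machine(question: str) -> str:
    return "Hmm, let me think about that for a moment."

def imitation_game(interrogate, respondent_a, respondent_b, rounds=5):
    """Collect answers from both respondents, then ask the interrogator
    for a verdict: which label, 'A' or 'B', hides the machine?"""
    labels = {"A": respondent_a, "B": respondent_b}
    transcript = {"A": [], "B": []}
    for i in range(rounds):
        question = f"Question {i}: what are you thinking about?"
        for label, respond in labels.items():
            transcript[label].append(respond(question))
    return interrogate(transcript)

def interrogator(transcript):
    # Indistinguishable answers reduce the interrogator to a coin flip,
    # which is the condition Turing treats as the machine "passing."
    if transcript["A"] == transcript["B"]:
        return random.choice(["A", "B"])
    # Otherwise any distinguishing feature becomes the basis of a verdict
    # (toy rule: guess that the terser respondent is the machine).
    lengths = {k: sum(len(r) for r in v) for k, v in transcript.items()}
    return min(lengths, key=lengths.get)

print(imitation_game(interrogator, human, machine))  # 'A' or 'B': a pure guess
```

The point of the sketch is that nothing in the harness inspects how a respondent works internally; the verdict rests entirely on the conversational record, which is exactly the performance-based stance described above.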

Techno Ethics

The works of Asimov and Turing both contributed to the foundations of what is today called “robotic” or “computational ethics,” the study of the ethical, moral, and legal issues surrounding robotics and artificial intelligence. Like the earlier myths and fiction involving artificial beings, Asimov and Turing suggested that the future ethics of machines, thinking or otherwise, will largely be a matter of human use rather than some kind of existential battle between machines and humanity. In Asimov’s work, it is the improper programming of machines or the failure of humanity to embody humanitarian ethics that leads to mechanical misbehavior. Turing likewise related his theories on artificial intelligence to human perception, arguing that intelligence manifests through social interaction and language.

Turing and Asimov’s works were also important in that they injected more science fact into discussions of machines and intelligence. The earliest myths discussed religious and moral principles, and much fiction about artificial life reflected Abrahamic notions about the moral implications of “playing God.” Asimov and Turing inspired discussions about artificial life that veered beyond these constraints. During the time when Asimov and Turing made their major contributions, there was no legitimate science of either robotics or artificial intelligence, but both men’s work added so much to the underlying concepts that both have been called foundational thinkers in these fields. Robotics and the study of AI have moved far beyond the Turing Test or the Laws of Robotics, but the questions and ideas introduced by both men remain an important part of humanity’s ongoing search for artificial intelligence.

CONCLUSION

The ethics of AI and robotics is an evolving discipline within the overall science of artificial beings and artificial life. Some of the considerations presented here reflect potentially irrational fears of robotic advancement, but there are legitimate ethical and moral issues involved. One concerns how governments and world leaders are, or should be, ethically responsible for helping society adjust to new technological realities. The modern debate over AI and robotics has likewise been informed by evolving knowledge of computer programming and the many innovations that computers have made possible. The merger of the theory of robotics with computer science was made possible by a series of pioneering discoveries that represented the first steps toward the personal and professional computing revolution.

DISCUSSION QUESTIONS

• How can the Turing Test be described as a “relational” estimation of intelligence and/or awareness?

• Are the Laws of Robotics important to the modern science of robotics? Should scientists reference these “laws” when developing robotic devices? Why or why not?

• How are both the Laws of Robotics and the Turing Test reflections of human vs. human conflict? Explain your answer.

• Is the fear that robots might turn on humanity rational? Why or why not?


Works Used

Anderson, Mark Robert. “After 75 Years, Isaac Asimov’s Three Laws of Robotics Need Updating.” The Conversation, 17 Mar. 2017, theconversation.com/after-75-years-isaac-asimovs-three-laws-of-robotics-need-updating-74501.

Asimov, Isaac. “Reason.” Addsdonna, addsdonna.com/old-website/ADDS_DONNA/Science_Fiction_files/2_Asimov_Reason.pdf.

Asimov, Isaac. “Runaround.” Williams College, web.williams.edu/Mathematics/sjmiller/public_html/105Sp10/handouts/Runaround.html.

Emerging Technology from the arXiv. “Do We Need Asimov’s Laws?” MIT Technology Review, 16 May 2014, www.technologyreview.com/2014/05/16/172841/do-we-need-asimovs-laws/.

Hodges, Andrew. Alan Turing: The Enigma. Princeton UP, 1983.

“Isaac Asimov (1920–1992).” Jewish Virtual Library, 2020, www.jewishvirtuallibrary.org/isaac-asimov.

Oppy, Graham, and David Dowe. “The Turing Test.” The Stanford Encyclopedia of Philosophy, Spring 2019 Edition, edited by Edward N. Zalta, plato.stanford.edu/archives/spr2019/entries/turing-test/.

Turing, Alan. “Computing Machinery and Intelligence.” Mind, vol. 59, no. 236, 1950, pp. 433–60. JSTOR, www.jstor.org/stable/2251299.


5. THE COMPUTER REVOLUTION
A History of Early Computers (1940s–1950s)

Introduction

Social scientists sometimes call the current era in human history the “Digital Age,” reflecting how humanity has come to embrace and in many ways depend on digital technology. The Digital Age begins with the spread of personal computers and digital devices in consumer populations around the world, but computing was not initially a consumer product. The first computers required tremendous amounts of space, power, and maintenance, making them suitable only for the largest and wealthiest organizations and corporations. As with many other breakthroughs in consumer technology, much of the research done on computers in the United States began as military programs before improvements in technology and the availability of materials allowed the computer market to expand into the consumer realm.

Topics Covered in this Chapter Include:

• History of computers
• Military computer design
• Early industrial computer systems
• Punch card programming
• Alan Turing
• Cryptography

This Chapter Discusses the Following Source Documents:

“Calculator Is Fast As Light,” Laurel Outlook, May 1, 1946

“How High Is Up?” Evening Star, September 8, 1946


The smartwatch is a wearable electronic computer designed to look like a wristwatch. First introduced in the 1980s, smartwatches became popular with consumers in the 2010s, when Apple, Inc. and Microsoft began designing software and hardware specifically for the new generation of wearable computers. A single smartwatch, even the simplest available in 2020, has many times the computing power of the earliest programmable computers, which debuted in the 1940s. At the time, creating the processing power needed for computation required massive amounts of space and electricity. Nevertheless, the age of the giant computers was a necessary stepping-stone to the modern world’s information-infused environment and stands as one of the greatest triumphs of engineering, mathematical, and scientific innovation.

Dorothy Du Boisson (left) and Elsie Booker (right) operating a Colossus Mark II code-breaking computer. National Archives (United Kingdom), via Wikimedia.



From Spy Buster to Military Strategist

Computational pioneer Alan Turing was part of a top-secret group of engineers and scientists who created the world’s first programmable computers. Turing was the man most directly responsible for the Bombe machine that broke secret German World War II codes, but much of the theory he helped to create informed the work of Tommy Flowers, an electronic engineer at the UK’s Post Office Research Station. Collaborating with engineers Harry Fensom, Allen Coombs, Sid Broadhurst, and Bill Chandler, Flowers created Colossus, an enormous WWII-era computing machine that, like Turing’s Bombe, was intended to break German codes.1

By 1943 the Germans were using a new encryption system nicknamed Lorenz, ending the brief advantage that the Allies had achieved by breaking the German Enigma code. The situation was desperate: it took as long as eight weeks for professional hand-cryptographers to decipher messages sent through the Lorenz system, an effort complicated by the fact that the German codes changed every night. Flowers and his partners realized that a new system was needed, and they began working on a machine designed to accept a variety of programs to better adapt to the changing German cryptographic system. The result was the world’s first programmable, electronic computer.2

Colossus was truly a work of enormous magnitude. The finished computer filled a room and weighed as much as a couple of cars. At the time, glass “vacuum tubes” were used to control the flow of an electric current between electrodes. Colossus used “thermionic tubes,” in which a heated cathode amplified a signal. The first Colossus, later called the Mark I, used 1,600 glass tubes, involving so many heated cathodes that many rightly worried the machine would overheat and cause a fire. The danger was greatest when the machine was powered on; so, to avoid overload, it was decided that Colossus would always remain on.
The Mark I Colossus began operation in 1944, enabling the Allies to break high-level codes sent between members of Germany’s high command—including Adolf Hitler—using the Lorenz system encryption. Around ten Colossi were eventually built over the course of the last years of the war. The program was so successful that the engineers decided to build a more powerful version, this time using 2,400 thermionic glass tubes to manage the enormous power filtered through the machine. This new version, the Colossus Mark II, was used to break important codes in the days leading up to the D-Day landings at Normandy. Though considered a success, the Colossi were dismantled and destroyed after the war for security reasons, and the technology was kept secret under the Official Secrets Act for some years.3

In the United States, discussions of computer history begin with ENIAC, the Electronic Numerical Integrator and Computer, which was the first general-use digital computer in the world and the first programmable computer designed and built in the United States. Like Colossus, ENIAC was begun during the closing years of World War II, though it was conceived for military calculation rather than codebreaking, and it wasn’t actually finished until after the war was over. The US government and the military-industrial complex saw the potential for ENIAC to help the military in more ways than as a wartime calculating tool.

The man most directly responsible for ENIAC was John Presper Eckert Jr., nicknamed “Pres.” Eckert was a graduate student at the University of Pennsylvania’s Moore School of Engineering in the early 1940s, at a time when engineers and mathematicians were being recruited into military projects in a scramble to defeat the German military. Within the university, the idea had been circulated that a new industrial computer could aid with military operations, such as calculating trajectories for munitions, and this idea piqued the interest of officials from the US Army’s Ordnance Ballistic Research Laboratory. The Army contracted with the Moore School, and Eckert led the effort to make the idea a reality. Unlike the Bombe and Colossus projects at Bletchley Park in England, the engineers at Moore wanted to create a computer that could do more than decipher codes, one that could be used for a variety of military calculations and operations. The ENIAC project was unprecedented in scope. By the time it was completed, the machine cost upwards of $400,000, weighed around 30 tons, and filled a 1,500-square-foot room.
While Colossus had 1,600 vacuum tubes in total, later increased to 2,400, ENIAC had 40 cabinets, each nine feet tall, housing some 18,000 vacuum tubes, 10,000 capacitors, 6,000 electrical switches, and 1,500 relays. Just to operate without overheating, two 20-horsepower air conditioners had to be installed to blow cool air over the system.

ENIAC was a failure before becoming an eventual success: it was not completed until November of 1945 and so failed to help America win the war, which had been won thanks to the more limited but timely work of British computer scientists. The challenge for the designers of ENIAC was thus to justify the machine’s existence and its expense. Military press agents heralded the machine’s arrival as a great success, as did politicians and industry lobbyists, and before long word of the computer had spread around the country.


Much of the press coverage of ENIAC reflected a dedicated campaign on the part of the military and the University of Pennsylvania to drum up excitement for their achievement. This article, from a May 1946 issue of the Montana newspaper the Laurel Outlook, provides an example of the kind of coverage that introduced the idea of computing to the American public:

“CALCULATOR IS FAST AS LIGHT” Laurel Outlook, May 1, 1946 Source Document

Robot Electronic Machine Solves Hard Problems Like Lightning

PHILADELPHIA—‘The world’s fastest calculating machine’ has been unveiled by the war department. It is said that the robot possibly has opened the mathematical way to better living for every man. Better communication and transportation, improved industrial products, superior weather forecasting and general advances in science and engineering may be made possible, the army said, from the development of ‘the first all-electronic general purpose computer.’ The army described the machine as 1,000 times faster than the most advanced calculating machine previously built and declared the apparatus makes it possible ‘to solve in hours problems which would take years’ on any other mathematical machine. The machine, which can add, subtract, multiply, divide and compute square roots and do most complex calculations based on those operations is called the ‘Eniac’—short for ‘electronic numerical integrator and computer.’ It also has been nicknamed the ‘mechanical Einstein.’

Designed for Army

It was dedicated at the University of Pennsylvania, where it was designed and constructed for the army ordnance department at the University’s Moore school of electrical engineering. Maj. Gen. G. M. Barnes, chief of the ordnance department’s research and development service, told reporters prior to the dedication that the new machine would be removed to the army’s proving ground at Aberdeen, Md., ‘to be used in the development of ballistics (the mathematics of weapons) for the safety of this country.’ But the army also announced that similar machines could be constructed for the use of industry and science generally as a new mathematical tool in the design of products or the analysis of complex scientific data. As an example of Eniac’s speed, the army said that the machine can take a five-digit number, like 63,895, and add it to itself 5,000 times—in one second. If you did such an addition with pencil and paper you’d fill space equivalent to two full pages of a newspaper. Eniac weighs 30 tons, occupies a room 30 by 50 feet, has 18,000 vacuum tubes and looks like a giant switchboard in a telephone company office, even to tiny lights that flash on and off when a complex brain teaser is getting the works.

Uses Electronics

The machine cost $400,000—including all research and development work—but the army says ‘future machines of this type can be produced much more cheaply.’

While Eniac does its figuring through the magic of modern electronics—actually counting on ‘electronic fingers’—it employs, basically, the principle of a primitive computing machine known since ancient times. In effect, Eniac is an electronic big-brother of the ‘Abacus,’ an apparatus which utilizes beadlike counters strung on parallel rods or wires as a means of counting, still in use in the Orient.

Eniac begins computing when punched cards, containing holes corresponding to arithmetical numbers, are fed into the machine. Inside the machine are thousands of tiny neon light bulbs—arranged in rows of 10, and as the problem is being worked, these lights flash on and off in the proper combinations. Answers come out on other punched cards.4

The Army had spent so much on the development of the machine that army press agents and financial experts struggled to find ways to justify and recoup this cost. In 1946, as news of the machine was spreading, the army decided to use their newsworthy invention to boost recruitment. In this September 1946 article in Washington, DC’s Evening Star, ENIAC is described as evidence of US Army technological superiority in an effort to attract young recruits:

“HOW HIGH IS UP”

Evening Star, September 8, 1946 Source Document

When Uncle Sam’s Ground Forces soldiers display their marksmanship, there’s more to their accuracy than meets the eye. Back of them is the world’s greatest aggregation of scientific research. Perhaps the most uncanny development along these lines is ENIAC, a massive, 30-ton computing and calculating machine perfected by civilian science and industry for, and in co-operation with, the U.S. Army Ordnance Department.

ENIAC is an almost unbelievable complexity of electrical circuits operating 18,000 electronic tubes. It adds, subtracts, multiplies, divides, and computes square roots of astronomical figures . . . makes split-second calculations which would ordinarily require thousands of man-hours. How does this benefit the soldier in the field? As one of many ways, by computing in advance the true course of any kind of projectile under every known condition, involving weather, temperature, wind velocity and many other factors. It can predict, for example, to within a few feet exactly when, where and how fast a rocket, fired 120 miles up into the stratosphere, will land. ENIAC, “Walkie-Talkie,” “night sight” viewers, recoilless rifles, radar, television and countless other scientific aids make the American Ground Forces soldier superior to any other soldier in the world. And to understand and know how to use these complicated devices, the soldier himself must be—and is—a superior type.

Today hundreds of thousands of alert young men are taking advantage of the training and experience offered by the Regular Army. They are attending schools, specializing in more than 200 skills and trades, traveling abroad, preparing themselves for interesting, well-paid careers in the Army and civilian life. Any physically fit young man from 18 to 34 inclusive (17 with parents’ consent) may enlist in the new peacetime Regular Army. New higher pay, good food and quarters, education and the opportunity for advancement make this one of the most attractive careers open today. Get full details at your nearest Army Recruiting Station.5

Though ENIAC is celebrated today as an important stepping-stone toward the Digital Age, the machine was a failed experiment in many ways. ENIAC’s debut came too late to contribute to the war effort, and the machine was too massive and cumbersome to be of any immediate practical use. Interest in ENIAC lasted only a year or so among the general public, after which the machine was largely forgotten, and its place in computer history was not widely recognized until the 1990s. By then, the computer revolution was already well underway, and it was widely agreed at the professional, academic, and popular levels of discourse that computers were the prime technology of America’s future. The University of Pennsylvania and the Smithsonian cooperated in 1996 to restore and display sections of ENIAC, celebrating the university’s role in helping to bring about the dawn of the computer age.6

The Digital Battlefield

ENIAC was a jumping-off point for computing technology, proving the usefulness of digital computing in a way that inspired some of the world’s top engineers to pursue the field beginning in the mid-1940s. A leading example of the early computers that followed ENIAC was the Whirlwind project, a computer system that expanded on ENIAC’s capabilities and initiated a race to create increasingly powerful and efficient computing machines. The Whirlwind computer is a mostly forgotten and quite short chapter in the history of computer science, one that began in 1946, the same year that ENIAC was made public. Whirlwind


ENIAC being programmed by Glen Beck (background) and Betty Snyder (foreground) c 1947–1955. US Army Photo, via Wikimedia.

was the brainchild of Jay Wright Forrester, one of the celebrated pioneers of American computer engineering. Born in 1918 in Anselmo, Nebraska, Forrester took an early interest in electronics, reportedly completing construction of a 12-volt electrical system to power the family’s cattle ranch while he was still in high school. Forrester later attended the University of Nebraska, where he studied electrical engineering, and he went on to complete graduate studies on servomechanisms (devices that use feedback signals to automatically correct a mechanism’s operation) at the Massachusetts Institute of Technology (MIT).


In 1944, as the Army was in the process of developing ENIAC, Forrester was approached by the US Navy for another computer project, one to help test designs for aircraft. Forrester’s team created an “aerodynamic stability analyzer,” which essentially functioned as a flight simulator. A pilot would sit inside the machine, pulling a joystick that operated servomechanisms that responded to his actions. After seeing a demonstration of ENIAC, Forrester’s team began working on the digital Whirlwind. Construction of the machine began in 1948, with a team of 175 engineers and technicians, and took three years to finish; Whirlwind became operational in April of 1951. The project consumed some $1 million a year in funding and, given the high cost and seeming lack of results, the Navy eventually abandoned it. The prototype was picked up by the Air Force, which was interested in using digital computing for “ground controlled interception,” a military technique using ground-based observational stations to lead airborne craft to intercept targets. In 1951, the Cold War was beginning to heat up, and it was widely believed that air-based offense and defense would be essential to any future conflict. The Air Force was thus able to justify the investment to keep the Whirlwind project moving.7

Key to Whirlwind’s design was an improved system for managing electrical current to prevent or reduce the likelihood of component failures. This was achieved, in part, by creating a system that altered the voltage for each valve to test for potential failures, allowing engineers to fix problems before they became serious. Forrester’s team also introduced silicon-free nickel cathodes, which increased the life of each vacuum valve from 500 to 500,000 hours, greatly improving the stability of the machine. Forrester and his team are also credited with creating the magnetic memory and storage systems that greatly advanced future efforts to manage digital data.
Computing advanced not through huge leaps but through many small, incremental improvements, most of which were too minor to attract attention outside of the profession and went unreported in the popular press. Though Whirlwind followed directly in the wake of ENIAC, the newer computer was superior in many ways, and the achievements of Forrester and his team include a number of global firsts. Whirlwind was, for instance, the fastest computer of its time and the first capable of performing calculations in real time. The team’s focus on stability and simplicity also streamlined the machine, which used only 4,000 valves in comparison to the 18,000 used by ENIAC, and yet Whirlwind was able to do all that ENIAC could do and more.8


The same year that Forrester’s Whirlwind computer debuted (1951), another group of engineers was completing an updated version of ENIAC, the Universal Automatic Computer, or UNIVAC. UNIVAC was the second engineering masterpiece of Eckert, who had led the effort to create ENIAC, and the machine incorporated design advancements that made it superior to his earlier efforts.

UNIVAC programming team leader Grace Murray Hopper. By James S. Davis, US Navy, via Wikimedia.


Hidden Talents

Grace Murray Hopper’s name isn’t as well known as those of her partners at Remington Rand, J. Presper Eckert and John Mauchly, but she was just as instrumental to the success of UNIVAC. Hopper led the programming team on the UNIVAC computer, and this was only one of a host of accomplishments that make Hopper a towering figure in the history of computer science. Hopper’s career began in the midst of a massive surge of female participation in higher education. Her achievements would be remarkable today, and they were all the more so in an age when women were regularly discouraged and even prohibited from working in technical and research fields. Hopper earned a bachelor’s degree from Vassar College in 1928 and went on to complete a master’s degree in mathematics from Yale University and later a PhD in mathematical physics. After the attack on Pearl Harbor and America’s entrance into World War II, Hopper joined the US Naval Reserve and served at the Bureau of Ships Computation Project, an offshoot of Harvard University. Hopper’s work on the Mark I IBM computer for the Navy made her one of the first three computer programmers in American history. After the war, Hopper worked at Harvard’s laboratory completing research on a number of secret projects involving ballistic weapons. It is also said that Hopper was the first to use the terms “bug” and “debugging” in reference to a computer problem, reportedly after a 1947 incident in which Hopper and her coworkers found a large moth inside one of the Navy’s Mark II computers. Hopper left the Navy when it became clear that she wouldn’t get a promotion or tenure and found a position at Remington Rand, where she worked under Eckert on the creation of UNIVAC, the first commercial general-purpose digital computer and the first computational engine built for use outside of academic or governmental organizations.
As the lead programmer for UNIVAC, she contributed more to the function of the machine than even Eckert, but she was rarely seen or mentioned in the press surrounding the program. After Rand, she returned to the Navy and worked on developing and standardizing computer languages. By the very end of her career, society had changed enough that Hopper was beginning to gain recognition for her achievements. She received more than 40 honorary degrees, and a number of medals, professorships, and conferences have been named in her honor. In 1991, she was awarded the National Medal of Technology by President George H.W. Bush. The military also came to recognize Hopper’s unjust treatment, and she was finally granted a promotion to commodore. Hopper remained with the military, retiring at age 79 with the rank of rear admiral. Hopper finally got the recognition she deserved as one of the most important pioneers of the computer revolution, yet she was but one of a large number of women and people of color who played important roles but whose participation has been downplayed or even erased because of racism and sexism. Nevertheless, Hopper is a worthy example to anyone seeking to make their mark in the STEM fields, and her achievements are extraordinary by any standard of analysis.9


UNIVAC made its public debut during the November 1952 election between Dwight D. Eisenhower and Adlai Stevenson, then governor of Illinois. Around 21 million televisions had been sold in America, and it was one of the first elections whose coverage was watched rather than heard over the radio. The 1952 election was also the first in which a computer, UNIVAC, was used to predict and analyze election results.

Whereas ENIAC had been funded by military money, UNIVAC was funded by grants through the US Census Bureau, which was looking into ways to cope with the rapid increase in the US population, the now famous “baby boom.” The Census Bureau invested $300,000 with Eckert and his partner John Mauchly, and the two began work on the computer in April of 1946, even before press coverage of ENIAC had died down. The project suffered many setbacks, and Eckert and Mauchly did not finish on time or on budget. The Census Bureau cut off funding after the costs hit their $400,000 ceiling, but Remington Rand, Inc., then known for consumer products like the electric razor, stepped in to help Eckert and Mauchly finish. The first UNIVAC was delivered to the Census Bureau in 1951, with a final cost of close to $1 million, and 46 UNIVAC computers were built in later years, after the initial design proved useful.

The greatest demonstration of UNIVAC’s capabilities, as well as the event that brought UNIVAC to the public’s attention, was the use of the computer to predict the results of the 1952 presidential election. UNIVAC correctly predicted Eisenhower’s win, and with the press coverage that followed, UNIVAC soon became a well-known term in American pop culture. UNIVAC’s list of firsts continued when a version of the computer was sold to General Electric’s Appliance Park for use in calculating payroll. This was the first non-government, private-sector purchase of a computer, initiating the professional, commercial computer industry.10

Computerizing America

Though many younger Americans might know little about ENIAC or UNIVAC, the way in which these machines inspired the public imagination about computing is still relevant to computing in popular culture. For some time after UNIVAC debuted, Americans used the term UNIVAC to refer to any kind of computer. A more generalized term, “brainiac,” was then adopted, meshing the names UNIVAC and ENIAC with the popular conception of these machines as “electronic brains.” It would be some time until scientists began working on robotics as an independent field of engineering, but there was a tendency among the public and in the popular press to call machines like UNIVAC “robots,” though they bore little resemblance to the concept of robots that had been popularized by authors like Isaac Asimov. Terms like “robot,” “electronic brain,” and “brainiac” were just entering the popular lexicon, each drawing on science and fiction in a struggle to describe this emerging technological field. In later years, generalized computational systems would be known simply as “computers.”

Early computers like ENIAC, Whirlwind, and UNIVAC were not nearly as useful as investors initially hoped, but they were important stepping-stones toward consumer computer technology. It is unclear how long it might have taken the world to arrive at the innovations that made digital computing possible if not for World War II and the Cold War. From Colossus to UNIVAC, the computer had gone from a highly specialized cryptography tool to a commercial business product in only a few short years, and once ordinary citizens became interested in computers, there was no stopping the developmental train. From then on, computer engineering was a major academic, military, and commercial industry, and incremental advancements over subsequent decades gradually brought computers into more arenas of everyday life in America and abroad.

CONCLUSION

Early computers, sometimes called “robots” in the popular press, were extremely simple by modern standards. Children’s toys manufactured in 2020 have more computing power than ENIAC or UNIVAC, but these early steps were important and saw engineers working through a variety of technological problems essential to later computer design. Though computers were at first funded and developed for military purposes, computer science didn’t grow significantly until corporate and consumer applications became apparent. The spread of consumer computer technology demonstrates that funding appropriated for military purposes can sometimes lead to innovations in the consumer realm. Military funding continues to play an important role in computer science, as evidenced by the ubiquitous drone technology that the US military has adopted as the nation’s primary weapon in the age of indirect warfare. The massive computers of the first computer revolution did not possess intelligence, but many of the same technological breakthroughs were also being used to develop systems that suggested the future possibility of developing intelligent machines.


DISCUSSION QUESTIONS

• Why was ENIAC a more useful machine than Colossus?
• What advancements were present in the Whirlwind system as opposed to the ENIAC system?
• Why were the early computer systems sometimes called “robots”?
• How was military investment in the early computers a failure?

Works Used

Bellis, Mary. “The History of the UNIVAC Computer.” ThoughtCo., Dotdash Publishing, 5 Mar. 2019, www.thoughtco.com/the-history-of-the-univac-computer-1992590.

Beyer, Kurt. Grace Hopper and the Invention of the Information Age. MIT P, 2012.

Byford, Sam. “Colossus: How the First Programmable Electronic Computer Saved Countless Lives.” The Verge, Vox Media, 12 Mar. 2012, www.theverge.com/2012/3/12/2864068/colossus-first-programmable-electronic-computer.

“Calculator Is Fast as Light.” Laurel Outlook, 1 May 1946. Chronicling America: Historic American Newspapers, Library of Congress, chroniclingamerica.loc.gov/lccn/sn86075258/1946-05-01/ed-1/seq-9.

Church, Zach. “Professor Emeritus Jay Forrester, Digital Computing and System Dynamics Pioneer, Dies at 98.” MIT News, 19 Nov. 2016, news.mit.edu/2016/professor-emeritus-jay-forrester-digital-computing-system-dynamics-pioneer-dies-1119.

“Colossus.” Crypto Museum, 24 Feb. 2018, www.cryptomuseum.com/crypto/colossus/index.htm.

“Colossus—the Computer Built to Break Hitler’s Codes.” The Telegraph, 9 Dec. 2016, www.telegraph.co.uk/technology/connecting-britain/colossus-bletchley-computer-broke-hitler-codes/.

“How High Is Up?” Evening Star (Washington, DC), 8 Sept. 1946. Chronicling America: Historic American Newspapers, Library of Congress, chroniclingamerica.loc.gov/lccn/sn83045462/1946-09-08/ed-1/seq-83/.

Levy, Steven. “The Brief History of the ENIAC Computer.” Smithsonian, Nov. 2013, www.smithsonianmag.com/history/the-brief-history-of-the-eniac-computer-3889120/.

“Whirlwind Computer.” History-Computer, 2020, history-computer.com/ModernComputer/Electronic/Whirlwind.html.


6. APPROACHING INTELLIGENCE
First Steps toward Modern Artificial Intelligence (1940s–1960s)

Introduction

While it may have been fashionable to refer to early computers like ENIAC and UNIVAC as “brains,” these simplistic computational systems bore little resemblance to a brain. Beginning in the late 1940s, however, researchers began making their first tentative steps toward the kind of research that might one day lead to the creation of true artificial intelligence. These early experiments pale in comparison to the kind of work being done in the 2020s to explore artificial intelligence, but they represent crucial steps. Some of the research produced by these pioneering theorists continues to guide those working on problems related to neural programming, machine intelligence, and the science of complex systems dynamics.

Topics Covered in this Chapter Include:

• Artificial intelligence
• Neural function
• Massachusetts Institute of Technology (MIT)
• Academic artificial intelligence programs

This Chapter Discusses the Following Source Document: O’Connell, Sanjida, “What the Tortoise Taught Us,” The Guardian, December 6, 2000


Artificial intelligence (AI) is a field whose ultimate goal was envisioned many years before researchers possessed even the rudimentary knowledge that would make artificial thought a possibility. Early computers were not designed to mimic brains or minds but were relatively simple calculating machines capable only of automating mathematical problems. The gulf from there to creating a machine that could truly think is immense, but a few early experimental programs led to the discovery of key pieces in this still-unsolved puzzle and paved the way to a more realistic science of creating artificial thought.

The Mathematics of Mind

Before one can design a machine that thinks, one must first understand what thought is and how it works. One of the most important steps toward this goal was to break down the way that brains and thinking individuals work on a mathematical and logical level. In the 1940s, a small but enthusiastic group of neuroscientists around the world was producing exciting discoveries about the form and function of the brain. One goal was to create mathematical models of brain function, and this ultimately led to the development of neural networks, computer systems modeled on the activity of the brain and nervous system. To reach this goal, researchers needed a basic mathematical analysis of brain activity that could be translated into computational technology, which required grappling with a fundamental problem in neuroscience and philosophy known as the “mind-brain problem” (MBP).

The mind-brain problem is as much a philosophical question as a biological one. The central issue is whether the mind and the brain are the same or different things. The “mind” might be defined as the mental processes themselves, meaning thought and consciousness. This problem has its roots in ancient, pre-scientific ideas about human life. Religious and cultural systems were long founded around the idea that there is something nonphysical about human consciousness, or that there are some aspects of being and thought that are irreducible to the body. Consider, for instance, the concept of “souls” in the Abrahamic religious tradition, a “divine spark” or “divine fire” that animates the body. Sages in the Abrahamic tradition argued that some aspect of human existence can live on after death. Many people around the world embrace a concept of human existence that involves a nonmaterial soul or consciousness separate from the physical realm.
This does not mean that science should proceed from the assumption that the mind is a nonphysical entity separate from the physiological processes of the brain.1


In modern neuroscience, some believe that the MBP was solved by a pair of then little-known researchers named Warren S. McCulloch and Walter H. Pitts, who wrote a scientific paper in 1943 entitled “A Logical Calculus of the Ideas Immanent in Nervous Activity.” Pitts and McCulloch were inspired by the work of computational pioneer Alan Turing, and their seminal paper represents a novel effort to apply the emerging theory of computation to explain how neural mechanisms within the brain might be responsible for mental functions. McCulloch and Pitts achieved this by presenting a simplified model of neural function and then demonstrating that it was possible to map, onto this simplified model of neurons, the basic structure of an idea, what might be called the building block of a mind.2

At the time, McCulloch was working on a psychological theory of mental atoms, which he called “psychons,” and which he felt were analogous to atoms and/or genes. The model that Pitts and McCulloch created assumed that mental atoms or psychons contained propositional information. A proposition is a statement or belief that contains a judgment or opinion. Thus when Abraham Lincoln said, in his Gettysburg Address, that the nation had been “conceived in Liberty, and dedicated to the proposition that all men are created equal,”3 he was presenting an opinion, one that arises from certain beliefs about the nature of humanity, equality, the value of men, and so on. The proposition that all men are equal is a complex idea, but it is possible to reduce it to simpler sets of propositions. One might be that any black male (BM) is functionally equivalent to any white male (WM), based on the proposition that one discrete thing, in this case an imagined male of African American or other brown-skinned descent, is equivalent to any male of Caucasian descent for the purposes of determining political agency.
A human considering whether all men are created equal, or are treated equally, or deserve to be treated equally, must contend with a complex set of variables. But the basic premise that BM is equal to WM in terms of political agency can be mirrored in computer programming. Once a person has embraced the proposition that all men are created equal, they can then use this accepted proposition to make inferences, which are conclusions based on a set of propositions, such as “man A is equal to man B.” McCulloch and Pitts were the first to show that inferences can be mathematically represented by computations, such that an inference might be seen as a mental calculation based on certain pieces of mental data (ideas) in the form of propositions. Though they did not discuss computers directly (modern electronic computers did not yet exist), their model, which involved a complicated system of “nets without circles,” was essentially a map of how an electronic circuit might function when making an inferential calculation.


Pitts and McCulloch introduced the idea that the mind can be reproduced computationally, at least insofar as the activity that occurs among neurons can be mimicked within a computational system. Some philosophers embraced Pitts and McCulloch’s work as having provided an answer to the MBP by essentially showing that a model of the brain and a model of the mind were equivalent. In the years since, it has become clear that the function of the mind is more complex, but the researchers have been credited with initiating one major branch of research on intelligence, one founded on the assumption that the function of the brain is calculable and can be translated into mechanical processes. This was an important leap forward in the study of artificial intelligence and led to the construction of the first modern “neural nets,” machines designed to mimic the brain in hopes of recreating the function of the mind.4
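The kind of threshold unit McCulloch and Pitts described can be illustrated with a short sketch. This is a modern simplification for illustration only, not the notation of the 1943 paper: a “neuron” fires (outputs 1) when the weighted sum of its binary inputs reaches a threshold, and simple logical inferences such as AND and OR emerge as special cases of that calculation.

```python
# A minimal McCulloch-Pitts-style threshold unit (modern simplification).
# Inputs and outputs are binary; the unit "fires" (returns 1) when the
# sum of its weighted inputs meets or exceeds a fixed threshold.

def mp_neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Logical AND: fires only when both input propositions are true.
def AND(a, b):
    return mp_neuron([a, b], [1, 1], threshold=2)

# Logical OR: fires when either input proposition is true.
def OR(a, b):
    return mp_neuron([a, b], [1, 1], threshold=1)

# An "inference" as a calculation: if proposition P ("all men are equal
# in political agency") holds and proposition Q ("A and B are men")
# holds, the unit concludes R ("A and B are politically equal").
P, Q = 1, 1
R = AND(P, Q)
print(R)  # 1: the conclusion follows from the propositions
```

Chaining such units into larger “nets without circles” lets more elaborate propositions be computed from simpler ones, which is the sense in which the paper treated inference as calculation.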

Different Kinds of Minds

Humans are not the only species capable of thought; many animals must also utilize various kinds of thought to navigate their lives. In the past, scientists and philosophers imagined that animal and human minds were not at all similar. It was common to imagine that animals functioned by instinctual reaction rather than thought. Decades of research have demonstrated that many animals have rich inner lives and that they experience emotion and engage in thought and analysis of their world. This justifies the view put forward by the father of evolutionary biology, Charles Darwin, who argued that the minds of chimpanzees and humans were different in “degree” but not in “kind.” Darwin believed that chimpanzee minds may not be capable of the same kind of thought that human minds can generate, but chimpanzees are nevertheless endowed with a brain and mind capable of many of the same functions and sensations. Subsequent experiments have revealed a fascinating spectrum of thought and emotion within the animal kingdom, and now it is generally recognized that the minds of humans and animals are different, but not fundamentally so. Thought and mind are biological, not uniquely human, properties.5

However, many animals have minds and brains that are significantly different from those of humans, making the modeling of animal behavior somewhat more approachable than the modeling of human behavior. In the late 1940s, pioneering researcher William Grey Walter created the first automated machine designed to mimic complex behavior by modeling his creation not after humanity but after tortoises. The name for Walter’s design was chosen because the machines he built were covered in a plastic shell that somewhat resembled a tortoise shell, not because the machines behaved in ways similar to an actual tortoise. What Walter created was, in essence, the world’s first artificial animal, and his invention inspired the effort to mimic animal capabilities and thought through computer networks and mechanical engineering.

Walter, known as “Grey” to friends and family, was born in Kansas City, Missouri, into an immigrant family from Britain. His father was an editor at the Kansas City Star. Walter attended Westminster School and studied physiology at Cambridge University, writing a dissertation on “Conduction in Nerve and Muscle,” and became a pioneering researcher in the electrical study of the brain. His career kept him in his ancestral homeland of Britain, where, in 1939, he was appointed to head the Burden Neurological Institute (BNI) in Bristol, a hospital for the treatment and study of mental health disorders. Over the course of his career, Walter authored some 174 publications, mostly dealing with the electrical activity of the brain; he was a founder of the EEG Society and worked to popularize and improve the use of EEG (electroencephalogram) technology in the study of the brain.

William Grey Walter created the first artificial animals, which he called tortoises, via Wikimedia.

The invention of Walter's robotic tortoises is a strange footnote in his career. Walter apparently developed the machines in private, working in a home workshop with the assistance of his wife, fellow scientist Vivian Walter (née Dovey), a longtime colleague and herself an expert in electroencephalography. Walter produced his now famous robots over the course of two years, the culmination of a series of experiments in which he inserted mechanical "brains" into children's toys. When he revealed his creation to the press and the public in 1949, Walter did so with considerable theatrical flair. Demonstrating a sense of humor and a whimsical approach, Walter


gave his creations a faux taxonomic classification, calling them Machina speculatrix. He famously said that Machina speculatrix was the "inevitable name of the species for the discerning, though 'tortoise' to the profane." Walter created two prototypes of the "species," one named Elmer and the other named Elsie, each featuring a slightly different design. Walter claimed that the toys possessed sight, hunger, touch, and memory. They were capable of moving independently around a room and could "feed" themselves by homing in on a light to recharge. Demonstrations of the robots, covered in the press, described them in decidedly biological terms, stating that the robots would "dance a jig" or "go to sleep when tired," or that they could "give an electric shock when not feeling playful." A small sign Walter made for the robots' home featured the scientific name and told visitors "Please do not feed these machines." One press report noted that Elmer and Elsie had only two brain cells, while the human brain has more than 10 million, but Walter argued that "most people get along with using as few as possible."6 Word of Walter's achievement reached American audiences largely through an article that Walter penned himself for Scientific American in 1950, "An Imitation of Life," in which he explained how he took inspiration from the children's story Alice in Wonderland, quoting from the book:

"When we were little . . . we went to school in the sea. The master was an old turtle—we used to call him Tortoise."

"Why did you call him Tortoise if he wasn't one?" Alice asked.

"We called him Tortoise because he taught us," said the Mock Turtle angrily. "Really you are very dull!"

Like the professorial Tortoise of Lewis Carroll's beloved novel, Walter's machines were meant to teach: he felt that the creation of artificial animal life—robots capable of making decisions like those made by animals navigating their environment—was a good way to study the human brain. As he argued at the end of his article, "Perhaps we flatter ourselves in thinking that man is the pinnacle of an estimable creation. Yet as our imitation of life becomes more faithful our veneration of its marvelous processes will not necessarily become less sincere."7 Of course, Walter's creations did not actually possess playfulness or feel fatigue, nor were they frustrated, happy, sad, or confused. The "behavior" demonstrated by


the machines was the result of simple responses to specific stimuli, not of anything that might genuinely be called thoughts or feelings. But Walter's achievement was a great leap forward in robotics, one that has perhaps not received the attention it deserves. In the years after his experiments, Walter's achievement faded into relative obscurity, and his Machina speculatrix was typically noted only by historians of science. In the twenty-first century, however, there has been a revival of interest in Walter's work. The following article discusses how robotics and cybernetics researchers have looked to glean fresh lessons from one of the pioneers of artificial life:

“WHAT THE TORTOISE TAUGHT US” by Sanjida O’Connell

The Guardian, December 6, 2000 Source Document A primitive robot has been brought back to life to inspire a new generation, reports Sanjida O’Connell.

In the 1940s Dr. William Grey Walter built some of the first artificial animals. More than half a century later, these first robots are providing the inspiration for a new race of mechanical creatures. Walter called his machines tortoises, after the tortoise in Alice in Wonderland. The mock turtle explains to Alice that “we went to school in the sea. The Master was an old turtle, we used to call him Tortoise.” Alice interrupts to ask why he called him tortoise if he wasn’t one. “We called him tortoise because he taught us,” replies the mock turtle angrily. Walter was one of the most distinguished neurophysiologists of his day. He developed these robot tortoises to help him understand how the brain functions. The first two, Elmer and Elsie, were cannibalised to create another six tortoises with the help of his technician, “Bunny” Warren. Two were destroyed in a house fire in Australia, others were turned into scrap metal, or lost


in transit between museums; fortunately one, possibly two, survived. Before his death, Walter passed a tortoise on to his son Nicholas, who kept it safe until Brunel University decided to repair it and use it for an exhibition in 1985. Some time later, Nicholas Walter discovered that it was about to be thrown out and rescued it; the last tortoise languished in his basement in Islington until 1995. Then Owen Holland, from the University of the West of England, restored it to Bristol where Walter lived and worked. Holland, senior technician Ian Horsfield, and Bunny Warren, restored the turtle to working order and have built replicas. What Walter’s tortoise did was travel towards a light. He called the creature Machina speculatrix because, “it explores its environment actively, persistently, systematically, as most animals do”. Most of the time the robot behaved as expected, but when put in front of a mirror, it began “flickering, twittering and jigging like a clumsy narcissus,” wrote Walter. He argued that this behaviour was evidence of self-awareness. Most scientists today do not believe robots can show self-awareness, but as Holland



says, "Walter wanted to prove that rich connections between a small number of brain cells produces very rich behaviour." Few of Walter's theories about neurophysiology and the brain are respected today, but his work on the artificial animals has taken on new life. Even more importantly, the tortoises helped scientists to realise that complex behaviour can be generated by using just a few components, and that biological principles can be applied to the field of robotics. These lessons have been taken to heart by the intelligent autonomous systems laboratory at the University of the West of England where a replica of Walter's tortoise is housed in its original hutch. The first thing one sees when one walks into the robotics laboratory, is a large arena full of robots bumbling around, carrying out seemingly purposeless tasks. These U-bots, as they're called, have no eyes, they can't count, they don't know who they are, and they're not aware of anyone or anything else, yet they manage to sort coloured plates into piles of the correct colour. It takes them hours, but they get there in the end. They are an example of an intelligent robot according to Dr Chris Melhuish, a director of the lab. They are "doing the right thing an adequate amount of times," he says, which is his current definition of intelligence. Strangely, for the director of a robotics department, he doesn't seem to know what a robot is. The definition he eventually settles on is that robots are "a collection of technologies, probably mobile, hopefully autonomous, capable of acting on the environment in a seemingly goal oriented way." By trial and error, Melhuish and his team came up with four very simple rules which enabled the U-bots to carry out their task adequately.


According to him, the robots behave like ants. “Ants are a miracle of evolution and engineering,” he says. “An individual isn’t very sophisticated and isn’t capable of doing very much on its own, but put 20 million army ants together and watch out! They’re capable of work scheduling, seeking their own energy, building homes and looking after the brood. They carry out everything that goes on in a city, and yet no single ant has the blue-print for their workload.” In Antz, the movie, the queen constantly issued instructions, but this is not what happens in an ant colony. “Ant colonies are actually examples of a wonderful decentralised intelligence,” says Melhuish, “without consciousness and self-awareness. They are our inspiration.” The lab also houses a small group of robots; Jason Welsby, the PhD student who created them, was inspired, not by ants, but by birds. The robots themselves look neither like birds nor even vaguely like robots. Called Blimps, they’re small helium balloons with propellers and a battery slung beneath. They float in small silver flocks round the lab. Each one uses ultrasound to determine how far away the floor and the ceiling are. They keep track of each other with infrared transmitters and receivers and they follow three very simple rules: (1) Keep together; (2) Head in the same general direction and (3) Don’t stay too close to other individuals. It sounds like a motto for the office. “The hardware is not particularly useful because it’s very vulnerable. The robots can’t travel outside,” says Welsby, “but the software could be useful.” He envisages more robust robots being sent to disaster areas to sweep paths looking for survivors. He adds: “They could also be used in space exploration. You could send 50 together—it



doesn’t matter that some will die.” Animal behaviourist Professor David McFarland has recently joined the lab from Oxford University. He says: “My main aim is to make self-sufficient robots that do real things in the real world. They’ll get their energy from somewhere, they’ll make decisions outside of human control, and they’ll be able to travel in the ocean, in space, in nuclear reactors—all those places where people don’t want to go.” McFarland is working on an undersea robot project. These robots will shoal like fish: they will dive into the sea, communicate with each other, collect data on salinity, temperature, tidal currents, etc, and send their data back to researchers via satellites. He imagines they will get their energy from tidal currents. There is another way of making self-sufficient robots: these would actually ingest food just as animals do. For instance, the undersea robots might feed on plankton. Melhuish, McFarland and researcher Dr Ian Kelly are in the process of creating a Slugbot which will obtain its energy by eating slugs. This is the theory, but they admit progress is somewhat slow: the Slugbot has yet to take as much as a nibble out of a garden pest.

A little further down the line of munching robots is the Gastrobot, created by Dr Stuart Wilkinson from the University of South Florida, Tampa. His robots convert carbohydrates found in food into an electrical form. They do this by exploiting the metabolising power of micro-organisms, such as bacteria or yeast. “Bio-electro chemical processes are harnessed in a device called a microbial fuel cell containing selected microbes. Such a living battery can be made self-sustaining with just an input of food, water and air,” says Wilkinson. However, like a small horse, the researchers have to feed it sugar lumps. “What we’re interested in is a robot predator,” says Melhuish, “robots that go out and find their own energy like wild animals.” Eventually he and his team hope to create working flocks or shoals of robots, not particularly intelligent, but certainly “small, cheap, cheerful, replenishable, mass produced and homogenous”. One of his ideas is to develop a type of electrically activated polymer, a substance that would act almost like muscle. Melhuish envisages teams of worm-like robots, cleaning out escalators for instance. In the dark they would burrow about randomly eating grime. If someone shone a light into the escalator’s innards, they would move towards it where they could be relieved of their dirt, or thrown away while new creatures are added.8
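The light-seeking behavior of Walter's tortoises described in this chapter—steering toward moderate light, retreating from glare, and wandering when nothing is sensed—can be sketched as a simple stimulus-response loop. This is an illustrative reconstruction, not Walter's actual vacuum-tube circuitry; the sensor values, thresholds, and command names are invented for demonstration.

```python
# Illustrative sketch of tortoise-style phototaxis: two photocell
# readings (0.0 = dark, 1.0 = blinding) are mapped to one of four
# steering commands. Complex-looking behavior emerges from simple rules.

def steer(light_left, light_right, bright_threshold=0.8, dim_threshold=0.1):
    """Return a steering command from two photocell readings."""
    brightest = max(light_left, light_right)
    if brightest >= bright_threshold:
        return "retreat"   # too bright: back away, as the tortoises avoided glare
    if brightest <= dim_threshold:
        return "wander"    # no light sensed: explore the room
    # moderate light: turn toward the brighter side
    return "left" if light_left > light_right else "right"

if __name__ == "__main__":
    print(steer(0.5, 0.2))  # → left
    print(steer(0.9, 0.3))  # → retreat
```

Nothing in the loop models emotion or intent, yet an observer watching the machine circle a lamp, flee a flashbulb, and meander in the dark could easily describe it in biological terms—exactly the effect the press reports of 1949 illustrate.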

Getting Serious About Computerized Minds

Many of the questions that pioneers like Walter and mathematical theorists McCulloch and Pitts tried to answer remain unanswered. There is still a passionate and active debate over how to model the function of biological minds in computational systems, as well as continuing disagreement over whether such a goal is even possible. Work done on artificial intelligence in the 1940s and 1950s did, however, initiate various branches of research that continue to the present day, and these discoveries sparked enduring debates about the nature of thought, intelligence, and the ineffable qualities of mind.


A decade after McCulloch and Pitts demonstrated their theory of mental calculus, another set of researchers—Herbert Alexander Simon, Allen Newell, and John Clifford Shaw—made another major leap forward. Simon, a future Nobel Prize winner in the field of economics, was a man of extraordinary accomplishments, publishing important work in the fields of political science, economics, sociology, and psychology. In the early 1950s, Simon was working at the Rand Corporation, a think tank that had grown out of a US Air Force research project at the Douglas Aircraft Company. The Rand Corporation became the most powerful and influential center for the study of robotics and artificial intelligence in the United States, and in the late 1950s, Simon and fellow Rand researchers Newell and Shaw began working together on a new program involving mechanical thought. The basic idea was to create a computer program that could prove mathematical theorems, and they based their design on Bertrand Russell and Alfred North Whitehead's mathematical treatise Principia Mathematica. To create their groundbreaking program, Simon, Newell, and Shaw used a newly created computer language, Information Processing Language (IPL), and ran the program on a computer housed at Rand's California-based facility. The program proved that it is possible to

A diagram of Herbert Simon’s three stages of rational decision making. By MrunaltPatel, via Wikimedia.


program a computer to make decisions based on "heuristics," a way of solving problems by choosing practical rather than optimal solutions. For instance, in the well-known "travelling salesman problem" (TSP), a theoretical salesman must choose the shortest route that visits each city on a list exactly once before returning to the starting point. A heuristic solution might have the salesman always travel to the closest unvisited city next, even though this simple rule does not guarantee the shortest overall route. Heuristic decision making, extremely important in computer search programs, is necessary in situations where the relevant data is either not fully known or so complex that it is impractical to spend the time needed to calculate the best, most precise solution. A search program may, for example, begin by exploring every possibility allowed by its initial parameters, then narrow the search as accumulating results prove irrelevant, thereby arriving at the best practical result. Simon and Newell developed a theory of heuristic searching based on training a program to choose only the most likely directions at each branching point in a search tree (a model for sorting through data). This provides a practical best solution, even if the search is not exhaustive.9 Newell, Shaw, and Simon's program, the Logic Theorist, essentially proved the concept of heuristic problem solving in computational systems and initiated the ongoing field of heuristic programming. The program successfully provided proofs for 38 of the first 52 theorems in chapter 2 of Principia Mathematica. Bertrand Russell was apparently delighted to learn that a machine could provide mathematical proofs, but the general reception in the scientific community was somewhat muted.
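The nearest-neighbor rule for the travelling salesman problem can be sketched in a few lines of code. This is a generic illustration of a heuristic, not the Logic Theorist itself; the city coordinates are invented for demonstration.

```python
# Illustrative sketch of the nearest-neighbor heuristic for the
# travelling salesman problem: always travel to the closest unvisited
# city. Fast and practical, but not guaranteed to find the shortest tour.
import math

def nearest_neighbor_tour(cities, start=0):
    """Return a tour (list of city indices) built greedily from `start`."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        here = cities[tour[-1]]
        nearest = min(unvisited, key=lambda i: math.dist(here, cities[i]))
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour + [start]  # return to the starting city

# Hypothetical coordinates for five cities.
cities = [(0, 0), (1, 5), (2, 2), (6, 1), (5, 4)]
print(nearest_neighbor_tour(cities))  # → [0, 2, 1, 4, 3, 0]
```

The greedy rule runs in a fraction of the time an exhaustive search would take, which is exactly the trade-off heuristic programming accepts: a good-enough answer now instead of a perfect answer too late.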
At the time, few understood the importance of the program. Simon, Newell, and Shaw presented their creation at the world’s very first conference on artificial intelligence, which had been organized by AI pioneers Marvin Minsky, John McCarthy, and Claude Shannon in 1956. This conference is not only notable as the first academic conference on the issue, but also as the origin of the now widely used term “artificial intelligence,” credited to McCarthy. Simon reported that Logic Theorist got a somewhat frosty reception at the 1956 conference, which he and his colleagues wrote off as professional jealousy: “[AI research] was going off into different directions. They didn’t want to hear from us, and we sure didn’t want to hear from them; we had something to show them! . . . In a way, it was ironic because we already had done the first example of


what they were after, and second, they didn't pay much attention to it. But that's not unusual. The 'Not Invented Here' sign is up almost everywhere, you know."10 Whether or not it was acknowledged at the time, Logic Theorist was a discovery of major importance. In 1959, just three years after that first conference, Simon and Newell's rivals, Minsky and McCarthy, were given permission to establish the world's first dedicated academic artificial intelligence research program at the Massachusetts Institute of Technology (MIT). Much of the early work completed there was inspired by, or took direction from, Newell, Simon, and Shaw's experiment in heuristic programming design. From the first experimental theories on the logical calculus of the mind, to the development of the first robots demonstrating primitive thought processes, through the first computer program able to solve problems using a human-like heuristic approach, the science of artificial intelligence came into its own during the 1940s and 1950s; by 1960 it was already a recognized, though still somewhat controversial and radical, field of mathematical and philosophical scholarship. In the twenty-first century, the theories of these early researchers have been supplanted by more detailed research, and the field has diversified into numerous subfields. Although AI research has come a long way since McCulloch and Pitts's logical calculus, philosophers and historians have continued to return to these foundational works, both to glean new information and to track and explore the history of AI science.

CONCLUSION

In comparison to the kind of research being conducted in the 2020s, the earliest forays into machine intelligence were quite simplistic. More than anything, these experiments demonstrated the kind of intelligent programming that might one day be possible for machine design, and many early ideas and theories were later borne out in practical research and production. Though machines with true thinking abilities were still a long way off in the 1960s, another era of robotics and engineering was dawning as the first robots capable of performing industrial production tasks became available. Though these were not thinking machines, their arrival was another important step toward modern robotics, in which "thinking" through processing has been combined with physical, mechanical capabilities, resulting in machines capable not only of performing complex and specific tasks but also of making practical decisions.



DISCUSSION QUESTIONS
• What do the robotic "tortoises" Elmer and Elsie indicate about the purpose of robotics engineering? Are these inventions still important?
• How did professional jealousy impact the reception of the Logic Theorist program? Does professional competition limit the spread of new technologies?
• If science were to create a perfect mechanical reproduction of the brain, would this also mean that science could recreate the mind? Why or why not?
• How are minds and brains different? Explain your answer.

Works Used
"Animals Think, Therefore . . ." The Economist, 12 Dec. 2015, www.economist.com/news/essays/21676961-inner-lives-animals-are-hard-study-there-evidence-they-may-be-lot-richerscience-once-thought.
Boden, Margaret A. Mind as Machine: A History of Cognitive Science. Clarendon P, 2006.
Holland, Owen. "The First Biologically Inspired Robots." Robotica, vol. 21, no. 4, 2003, pp. 351–63, Cambridge Core, doi:10.1017/S0263574703004971.
Lincoln, Abraham. "The Gettysburg Address." Abraham Lincoln Online, 2020, www.abrahamlincolnonline.org/lincoln/speeches/gettysburg.htm.
"Logic Theorist." History-Computer, history-computer.com/ModernComputer/Software/LogicTheorist.html.
McCulloch, Warren S., and Walter Pitts. "A Logical Calculus of the Ideas Immanent in Nervous Activity." Bulletin of Mathematical Biology, vol. 5, 1943, pp. 115–33, Springer Nature, doi.org/10.1007/BF02478259.
O'Connell, Sanjida. "What the Tortoise Taught Us." The Guardian, 6 Dec. 2000, www.theguardian.com/science/2000/dec/07/robots.
Piccinini, Gualtiero. "The First Computational Theory of Mind and Brain: A Close Look at McCulloch and Pitts's 'Logical Calculus of Ideas Immanent in Nervous Activity.'" Synthese, vol. 141, no. 2, 2004, Springer Nature, doi:10.1023/B:SYNT.0000043018.52445.3e.
Stefferud, Einar. "The Logic Theory Machine: A Model Heuristic Program." Rand Corporation, 1963, history-computer.com/Library/Logic%20Theorist%20memorandum.pdf.
Uttal, William R. Neural Theories of Mind: Why the Mind-Brain Problem May Never Be Solved. Taylor & Francis, 2005.
Walter, W. Grey. "An Imitation of Life." Scientific American, 1950, Computer Science, Texas A&M Univ., robotics.cs.tamu.edu/dshell/cs643/papers/walter50imitation.pdf.
White, Douglas J. "Heuristic Programming." IMA Journal of Mathematics Applied in Business & Industry, vol. 2, no. 2, 1989, pp. 173–88, Oxford Academic, doi.org/10.1093/imaman/2.2.173.


7
FACTORY WORK
Development of Industrial Manufacturing Machines (1913–1980s)

Introduction

Robotics was at its most controversial in the 1970s and 1980s as advancements in robotics resulted in the first robots that were capable of performing factory labor more accurately and efficiently than human workers. Fear had been building since the first machine-driven production systems were developed, but it was not until the 1960s that robotics technology reached a stage in which practical use of robotics in factory settings became possible. As a result, America saw its first organized workers' rights protests against automation and robotics. This began a struggle that continues to the present day, with more and more sectors of the US labor industry threatened by replacement and automation.

Topics Covered in this Chapter Include:
• Automation
• Unemployment
• Workers' rights movement
• American Communism and Marxism
• Lyndon B. Johnson

This Chapter Discusses the Following Source Document:
"The Triple Revolution," International Socialist Review, Summer 1964


The introduction of mechanical processes to factories initiated the first debates and the first legislation aimed at protecting the working class from automation and technological advancement. The textile industry was the first to use automated systems, but fear of automation gradually spread to other industries as well. Beginning in the early twentieth century, new developments increased the possibility of automation in a variety of industries, but the true potential of automation was not realized until the late 1950s, when a series of inventions in the fields of servomechanisms and small-scale motors resulted in the first practical factory robotics programs.

Moving Down the Line

The first step toward large-scale manufacturing automation was not an automated system at all but a machine that pointed to the automated future: the moving assembly line. The first fully realized moving assembly line was introduced by the Ford Motor Company in 1913. Prior to this innovation, Ford factories were divided into a series of workstations where workers would build the various components of a single car until that car was finished. The assembly line changed all of this. Workers were stationed at positions along the line, where each performed a single job, such as threading nuts onto bolts or attaching springs to hubcaps. The assembly line was not embraced by workers.1 Records from the era indicate that workers felt they were losing their humanity in a rising tide of machines. When journalist Robert Cruden interviewed workers on an assembly line, one told him, "Am I bossed around? No, I don't need to be. The machine I'm on goes at such a terrific speed that I can't help stepping on it in order to keep up with the machine. It's my boss."2 The assembly line resulted in the rapid growth of the auto industry but also reduced the number of workers needed. Popular culture of the early twentieth century reflects the angst workers felt as the modern factory came into existence. Charlie Chaplin's 1936 film Modern Times features Chaplin comically trying to keep up with his work on an assembly line, eventually being sucked into the machine and crushed by the factory's gears. There was a very real sense that the mechanization of the world's workforce was coming at the expense of workers' humanity.3 Fritz Lang's 1927 film Metropolis told of a future in which wealthy business owners control the world from the comfort of immensely tall skyscrapers, while the workers struggle in underground factories managing the machines that make Metropolis run. The film reflects the dehumanization and class exploitation of the industrial age.4



Ford assembly line in 1913, via Wikimedia.



Robots Don't Have Rights

The first half of the twentieth century saw a struggle for workers' rights that led to a resurgence of the American communist movement, long a powerful force in the workers' rights movement. There were significant gains, but they were insufficient to make up for the losses experienced by workers in the many industries that declined due to automation and technological progress. In the 1960s, the next great threat of automation began to emerge as the first functional manufacturing robots were created. The path from a primarily human assembly line to a primarily machine-driven one began with the invention of servomotors and the introduction of computers into the industrial process. One of the key developments was computer-aided manufacturing (CAM), which was developed at MIT's Servomechanisms Laboratory and demonstrated there in 1959. This was the beginning of a revolution in industrial design. Over the course of the next decade, computer-aided design systems would become standard, and many of the world's large-scale manufacturing industries would switch over to computer-mediated systems. The shift disturbed the workforce but did create opportunities for a new wave of workers with technological experience: computer-aided design (CAD) systems required knowledgeable computer engineers, and the machines that made their way into factories also required mechanics and engineers to maintain them. The major leap toward the modern mechanized factory came only a year after the introduction of CAM, with the 1960 debut of the world's first factory robot, UNIMATE. UNIMATE was the brainchild of George Devol, a self-taught inventor inspired by Asimov and other authors of robot fiction. Devol used servo controllers to design a mechanical arm capable of precise work that previously would have required a human hand. The machine could be programmed to grasp, swivel, and lift.
Devol filed a patent application for his device, which he called Universal Automation, in 1954. Engineer Joseph F. Engelberger established Unimation Inc. in Danbury, Connecticut, in 1961, and UNIMATE, the first robot with the capacity to replace human laborers, was the result of their collaboration. The first UNIMATE was purchased by General Motors and put to work at a New Jersey factory. The machine cost $25,000 (the equivalent of about $210,000 in 2020) and weighed some 4,000 pounds. As a single-task machine, the UNIMATE had limited use and was more a novelty and a proof of concept when it was installed, but the promise of the new machine was apparent from the beginning. At GM, the first UNIMATE was programmed to lift and stack metal parts just after they were extracted from molds, a job that was dangerous



Unimate PUMA 500 and PUMA 560 industrial robots, 1986. By UL Digital Library, via Wikimedia.


for human workers. It was big news around the country, and Engelberger was even invited to appear on The Tonight Show, where he demonstrated one of the UNIMATE machines for host Johnny Carson in 1966.5 The workers' rights community immediately recognized the threat posed by UNIMATE and future robotic arms and launched a campaign against them. Chrysler and Ford were soon purchasing products from Unimation as well, and the company announced a new series of robots designed to take over other tasks, including welding, spray-painting, and applying adhesives. Before long, Japan proved to be the largest market for the new technology, and Japanese versions of UNIMATE were at work in Tokyo just a couple of years after the technology debuted. The machines were largely designed to replace laborers working on dangerous, potentially life-threatening tasks but were later used more widely to decrease the cost of labor. What was a significant achievement for mechanical engineering and a major leap forward in robotics was thus also a major blow to the workers' rights effort. There was a perception that human laborers might eventually become obsolete, and around the country, labor activists urged politicians and the public to consider what this transformation would mean for the American people.

The Anti-Automation Movement

In 1964, a group of social scientists and activists calling itself the Ad Hoc Committee on the Triple Revolution delivered an open memorandum to US President Lyndon B. Johnson critiquing the advance of factory mechanization. The committee argued that three separate revolutions were underway: the “cybernation” revolution, which involved the increase in automation and loss of jobs; the “weaponry” revolution, the development of new weapons with the potential to bring about a global apocalypse; and the “human rights” revolution, involving the ongoing oppression of minorities. The Triple Revolution concept spread widely in the mid-1960s; Dr. Martin Luther King, Jr. referenced it in his final speech before his assassination.6 The memorandum was published in a 1964 issue of the International Socialist Review:


KUKA IR 160/60 robots on an automobile assembly line. By Mixabest, via Wikimedia.


“THE TRIPLE REVOLUTION”

International Socialist Review, Summer 1964
Source Document Excerpt

THIS statement is written in the recognition that mankind is at a historic conjuncture which demands a fundamental re-examination of existing values and institutions. At this time three separate and mutually reinforcing revolutions are taking place:

• THE CYBERNATION REVOLUTION: A new era of production has begun. Its principles of organization are as different from those of the industrial era as those of the industrial era were different from the agricultural. The cybernation revolution has been brought about by the combination of the computer and the automated self-regulating machine. This results in a system of almost unlimited productive capacity which requires progressively less human labor. Cybernation is already reorganizing the economic and social system to meet its own needs.

• THE WEAPONRY REVOLUTION: New forms of weaponry have been developed which cannot win wars but which can obliterate civilization. We are recognizing only now that the great weapons have eliminated war as a method for resolving international conflicts. The ever-present threat of total destruction is tempered by the knowledge of the final futility of war. The need of a “warless world” is generally recognized, though achieving it will be a long and frustrating process.

• THE HUMAN RIGHTS REVOLUTION: A universal demand for full human rights is now clearly evident. It continues to be demonstrated in the civil rights movement within the United States. But this is only


the local manifestation of a world-wide movement toward the establishment of social and political regimes in which every individual will feel valued and none will feel rejected on account of his race.

We are particularly concerned in this statement with the first of these revolutionary phenomena. This is not because we underestimate the significance of the other two. On the contrary, we affirm that it is the simultaneous occurrence and interaction of all three developments which make evident the necessity for radical alterations in attitude and policy. The adoption of just policies for coping with cybernation and for extending rights to all Americans is indispensable to the creation of an atmosphere in the United States in which the supreme issue, peace, can be reasonably debated and resolved.

Interaction of the Three Revolutions

THE NEGRO claims, as a matter of simple justice, his full share in America’s economic and social life. He sees adequate employment opportunities as a chief means of attaining this goal: the March on Washington demanded freedom and jobs. The Negro’s claim to a job is not being met. Negroes are the hardest-hit of the many groups being exiled from the economy by cybernation. Negro unemployment rates cannot be expected to drop substantially. Promises of jobs are a cruel and dangerous hoax on hundreds of thousands of Negroes and whites alike who are especially vulnerable to cybernation because of age or inadequate education. The demand of the civil rights movement cannot be fulfilled within the present context of society.


The Negro is trying to enter a social community and a tradition of work-and-income which are in the process of vanishing even for the hitherto privileged white worker. Jobs are disappearing under the impact of highly efficient, progressively less costly machines. The United States operates on the thesis, set out in the Employment Act of 1946, that every person will be able to obtain a job if he wishes to do so and that this job will provide him with resources adequate to live and maintain a family decently. Thus job-holding is the general mechanism through which economic resources are distributed. Those without work have access only to a minimal income, hardly sufficient to provide the necessities of life, and enabling those receiving it to function as only “minimum consumers.” As a result, the goods and services which are needed by these crippled consumers, and which they would buy if they could, are not produced. This in turn deprives other workers of jobs, thus reducing their incomes and consumption. Present excessive levels of unemployment would be multiplied several times if military and space expenditures did not continue to absorb 10% of the Gross National Product (i.e., the total goods and services produced). Some 6-8 million people are employed as a direct result of purchases for space and military activities. At least an equal number hold their jobs as an indirect result of military or space expenditures. In recent years, the military and space budgets have absorbed a rising proportion of national production and formed a strong support for the economy. However, these expenditures are coming in for more and more criticism, at least partially in recognition of the fact that nuclear weapons have eliminated war as an acceptable method for resolving international conflicts. Early in


1964 President Johnson ordered a curtailment of certain military expenditures. Defense Secretary McNamara is closing shipyards, airfields, and army bases, and Congress is pressing the National Space Administration to economize. The future of these strong props to the economy is not as clear today as it was even a year ago.

The Nature of the Cybernation Revolution

CYBERNATION is manifesting the characteristics of a revolution in production. These include the development of radically different techniques and the subsequent appearance of novel principles of the organization of production; a basic reordering of man’s relationship to his environment; and a dramatic increase in total available and potential energy. The major difference between the agricultural, industrial and cybernation revolutions is the speed at which they developed. The agricultural revolution began several thousand years ago in the Middle East. Centuries passed in the shift from a subsistence base of hunting and food-gathering to settled agriculture. In contrast, it has been less than 200 years since the emergence of the industrial revolution, and direct and accurate knowledge of the new productive techniques has reached most of mankind. This swift dissemination of information is generally held to be the main factor leading to widespread industrialization. While the major aspects of the cybernation revolution are for the moment restricted to the United States, its effects are observable almost at once throughout the industrial world and large parts of the non-industrial world. Observation is rapidly followed by analysis and criticism. The problems posed by the cybernation revolution are part of a new era in the history of all mankind but they are first being faced by the people of the United


States. The way Americans cope with cybernation will influence the course of this phenomenon everywhere. This country is the stage on which the Machines-and-Man drama will first be played for the world to witness. The fundamental problem posed by the cybernation revolution in the United States is that it invalidates the general mechanism so far employed to undergird people’s rights as consumers. Up to this time economic resources have been distributed on the basis of contributions to production, with machines and men competing for employment on somewhat equal terms. In the developing cybernated system, potentially unlimited output

can be achieved by systems of machines which will require little cooperation from human beings. As machines take over production from men, they absorb an increasing proportion of resources while the men who are displaced become dependent on minimal and unrelated government measures— unemployment insurance, social security, welfare payments. These measures are less and less able to disguise a historic paradox: that a growing proportion of the population is subsisting on minimal incomes, often below the poverty line, at a time when sufficient productive potential is available to supply the needs of everyone in the United States.7

US productivity and average real earnings 1947–2008. By US Department of Labor, Bureau of Labor Statistics, via Wikimedia.


A Continuing Evolution

It is possible to imagine a society in which technological progress occurs without disadvantaging workers, one in which politicians, financial institutions, and educational institutions work together to help citizens adjust to the technological transformation of their world. Though the laborers of the world are necessary to the economic and social advancement of society, the evolution of labor has often disadvantaged them. During the Industrial Revolution, society’s leaders failed to protect the laboring class from corporate exploitation, and many workers were unable to earn enough to advance themselves or their families. The workers’ rights movement gradually won victories that forced corporations to reduce work schedules, provide insurance, and increase wages, but workers lost more than they won over decades of lobbying for better treatment.

CONCLUSION

By prioritizing efficiency over the welfare of workers, pioneering auto companies proved that production goals could be met with fewer human employees, eliminating the need for critical thinking and the potential for advancement for many workers in the industry. Certain tasks or jobs are so dangerous or unpleasant for human workers that it might be prudent, or even preferable, to create automated substitutes, but the profit-driven replacement of humans with machines has deepened unemployment, weakened economic mobility, and serves as one modern chapter in the ancient struggle for workers’ rights and welfare.


DISCUSSION QUESTIONS

• How was the introduction of the mobile assembly line dehumanizing? How was this reflected in Charlie Chaplin’s film Modern Times?
• Should governments and world leaders do more to protect workers from being replaced by machines? Why or why not?
• What are some careers that might be replaced by machines in the near future?
• Should the US federal government do more to help workers in the coal industry transition to other jobs? Why or why not?

Works Used

Catalan, Cristobal. “Metropolis at 90: You’ll Never See a Movie Like This Again.” FEE, Foundation for Economic Education, 4 Nov. 2017, fee.org/articles/metropolis-at-90-you-ll-never-see-a-movie-like-this-again/.

Gehring, Wes D. Film Clowns of the Depression. McFarland & Company, Inc. Publishers, 2007.

Goss, Jennifer L. “Henry Ford and the Auto Assembly Line.” ThoughtCo, Dotdash Publishing, 23 Jan. 2020, www.thoughtco.com/henry-ford-and-the-assembly-line-1779201.

King, Martin Luther, Jr. “Remaining Awake Through a Great Revolution.” Stanford University, King Institute, kinginstitute.stanford.edu/king-papers/publications/knock-midnight-inspiration-great-sermons-reverend-martin-luther-king-jr-10.

Meyer, Stephen. Manhood on the Line: Working-Class Masculinities in the American Heartland. U of Illinois P, 2016.

Rosen, Rebecca J. “Unimate: The Story of George Devol and the First Robotic Arm.” The Atlantic, 16 Aug. 2011, www.theatlantic.com/technology/archive/2011/08/unimate-the-story-of-george-devol-and-the-first-robotic-arm/243716/.

“The Triple Revolution.” International Socialist Review, vol. 24, no. 3, Summer 1964, pp. 85–89. Marxists.org, www.marxists.org/history/etol/newspape/isr/vol25/no03/adhoc.html.


8. THE ILLUSION OF LANGUAGE
Language Replicating Machines and Programs (1950s–1970s)

Introduction

Language is central to human life and human culture, and the ability to create and program machines to engage with human language is one of the most essential steps toward the creation of true artificial intelligence. Beginning in the late 1950s, researchers made several important breakthroughs toward the goal of creating machine linguistics programs. This involved the creation of new algorithms, computer languages, and a series of fascinating machines that could mimic, while not truly engaging with, language. These programs, and the steps taken to create them, continue to inform researchers working on artificial intelligence systems and linguistic processing programs.

Topics Covered in this Chapter Include:
• Chatbots
• Language processing
• Computer programming
• Computer languages

This Chapter Discusses the Following Source Document: “Parry Encounters the Doctor,” IETF, January 21, 1973


It has been argued that language is what makes humans human. In the broadest sense, language is a system built on the biological capabilities of bodies and minds. Creating computers that are capable of utilizing and engaging with language is one of the most important keys to artificial intelligence and to creating computerized systems that can interact with humans. In the twenty-first century, there are a variety of computer systems designed to replicate and engage with human speech or written language. The popular virtual helper Siri, designed by Apple Inc., for instance, can respond to verbal cues, and users can essentially ask Siri questions. Of course, when someone does this, their words are converted into key terms for a search program that uses algorithms to pull responses from a vast database. Siri does not understand language, but the program can identify and respond to certain key terms. This achievement is the end result of years of research aimed at designing machines that could hear, identify certain sounds, connect those sounds with meanings, and then utilize those meanings to perform actions. Machine use of language is still in its infancy, but key developments between the late 1950s and the early 1970s pointed toward the potential for linguistic processing and the power of computers endowed with language to create the illusion of intelligence.
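The kind of keyword spotting described above can be sketched in a few lines of Python. This is a toy illustration, not Apple's actual implementation; the key terms and canned answers here are invented for the example.

```python
# Toy keyword-spotting "assistant" (invented terms and answers): the
# program maps recognized key terms to canned responses without any
# understanding of the question itself.
RESPONSES = {
    "weather": "Here is today's forecast for your area.",
    "time": "Here is the current time.",
    "music": "Playing your favorite playlist.",
}

def respond(query: str) -> str:
    """Return the canned answer for the first known key term found."""
    for term, answer in RESPONSES.items():
        if term in query.lower():
            return answer
    return "Sorry, I didn't catch that."

print(respond("What's the weather like today?"))
```

The illusion holds only as long as the user stays within the vocabulary of the table; anything else falls through to a stock deflection, much as a real assistant deflects unrecognized requests.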

The Nature of Language

Language is, on the most basic level, a cognitive tool that associates sounds and gestures with certain meanings or ideas. Ideas can then be conveyed simply by using those sounds or gestures. Written language adds another layer of complexity. By associating symbols with sounds or with ideas and concepts, written language enables individuals to share tremendous amounts of information, emotion, and other concepts across distances and across time. It is difficult to imagine human life without language, but it is certain that human civilization would not exist without it. Without the ability to share complex ideas, to preserve knowledge, and to revisit our ideas, human life would be much more limited in scope. Language is, in many ways, the key that unlocks the collective potential of the human mind and allows shared ideas to be put into action. Language is not unique to humans, and the rudiments of language can, it has been discovered, be encoded into genetically inherited instinctual patterns of behavior. To provide one example, the vervet monkey of Africa (Chlorocebus pygerythrus) has a rudimentary “language” in which certain kinds of vocalizations are associated with distinct features of their environment. Like humans, vervets live in extended family groups and depend on one another to find food and to survive in a dangerous environment. Vervets also have a psychological need for connection and intimacy.1


What interests linguistics researchers is that vervets have developed a complex system of alarm calls for use in communicating danger. Vervets use one kind of alarm call when one of them sees a predator that hunts on ground level, like a snake or a mongoose. When this call is heard the other vervets look down and start searching the ground, indicating that they understand that this particular vocalization signifies a ground predator. Another kind of alarm call is used when a potential danger is spotted in the trees or in the air. When this call is made, vervets begin scanning the skies, looking for hawks or eagles or any other airborne danger. Yet another call is used to signify the approach of a leopard or similar type of predator. When hearing this call, members of the troop begin searching the trees and ground, searching for signs of an approaching predator that might also climb trees.2

Scientists have discovered that the vocalizations made by vervets are not learned but instinctual parts of their behavior. Young vervets are born with the knowledge to identify threats and to associate those threats with different kinds of calls. However, it has also been discovered that the use of these calls is variable and adaptive. For instance, when researchers experimented by harassing vervets with flying drones, the troop quickly learned to use the alarm call for aerial predators whenever they would hear the sound made by a drone, even before the drone was visible. This provides an example of how language is adaptive, enabling individuals to predict and analyze their environment more so than through observation and experience alone.3 Humans, like vervets, have an instinctual predisposition to develop linguistic associations, that is, to associate certain sounds with ideas or concepts.
Research has found that humans begin making these associations at around six months of age, when babies start to associate elements of their babble, or “baby talk,” with features of their environment. Although humans can make many different types of noises, the easiest to produce are the ones most used in the development of languages. Among these are the consonant sounds “P,” “D,” “M,” and “N.” These consonant sounds are then coupled with the easiest vowel sounds to produce, “ahhh” and “uhhh.” The result is that babies around the world often begin making sounds like “Mah,” “Pah,” “Nah,” and “Dah” in infancy. Languages around the world adopted variations of the “Mah,” “Pah,” “Nah,” and “Dah” sounds to create the most important words for many infants, “mama” and “papa” or “dada” and “nana.” Similar phonemes are used in Asia and in the Romance languages of the West, providing an example of how linguistic structure and innate sound-making potential were harnessed to associate sounds with meanings. Later, these sonically associated meanings were associated with symbols, enabling the same ideas to be conveyed without sound and across distances.4


Computing Linguistic Principles

Language is a complex phenomenon, containing many layers of meaning and symbolism and imbued with specific cultural, regional, and personal associations and emotions. But languages can also be examined in a more empirical manner. The sounds and symbols used in language can be broken down into constituent elements, and these elements can be simplified sufficiently to translate into computer code and algorithms. One of the key discoveries in this vein was made by AI pioneer John McCarthy and his colleagues in MIT’s artificial intelligence program. In 1958, McCarthy introduced a new programming language, LISP, which became one of the benchmarks for intelligent machine programming in the years that followed. Some have called LISP the best programming language ever created and one of the most powerful tools for realizing the learning potential of computers. Edsger Wybe Dijkstra, a programming pioneer, once said that LISP was “the most intelligent way to misuse a computer.” McCarthy and fellow AI pioneer Marvin Minsky are most directly responsible for making LISP work. LISP was created as a way to encode mathematical notation for computers, and the name of the language denotes its function as a “LISt Processing” tool. The reasons for LISP’s facility as a tool for advanced computer language are complex but essentially come down to the way that LISP categorizes and utilizes data. Other languages can be embedded within LISP, providing capabilities beyond those of many other programming languages.

LISP program creator John McCarthy. Photo by “null0,” via Wikimedia.
From the very beginning, advanced computational theorists saw the potential for the language in AI research, and LISP was the key to the creation of the first serious artificial intelligence and language processing programs.5 The first of these was SHRDLU, a program famous in AI research circles and widely acknowledged to be the first successful attempt to model intelligence in a computer system. SHRDLU


Baby Names

The name for the SHRDLU program has more meaning than its seemingly random string of letters would imply at first glance. It is based on the frequency with which each letter of the Roman alphabet appears in English words, calculated as ETAOIN SHRDLU . . . etc. When designing the first typewriter keyboards, engineers chose an arrangement of keys that would make as much instinctual sense to the user as possible. In the twenty-first century, most keyboards are organized according to the “QWERTY” system, named for the letters that appear in the upper left of the keyboard. But the QWERTY system was not used for one of the first mass printing systems for newspapers and books, Linotype. Linotype machines function by casting strings of letters onto strips of metal, and these metal strips then print letters onto a page. Linotype was typically used when printing newspapers, magazines, and posters into the 1970s and 1980s, before it was replaced by computerized typesetting systems (actually, it was first replaced by phototypesetting systems). The machine only produced one line of type at a time, hence the name “line o’ type.” Printing via Linotype was a difficult process. Mistakes were difficult if not impossible to correct, and entire lines of type would be discarded. Linotype operators (known as linotypists) would fill in the rest of a line containing a mistake with nonsense so that proofreaders could easily identify and discard the strips before they were used to actually print magazine or newspaper pages. Linotype keyboards did not use the QWERTY system but were instead arranged into columns based on letter frequency. The first column from the left contained ETAOIN, while the second contained SHRDLU. When a linotypist made a mistake, he or she would typically fill the rest of the line with repeated letters using the easiest letters to type. Thus strings of “SHRDLUSHRDLUSHRDLU” would be typed into lines meant to be discarded.
Proofreaders sometimes failed to locate nonsense lines, and lines of SHRDLUSHRDLUSHRDLUSHRDLU showed up in the middle of texts. MAD Magazine, a humor magazine that started publication in 1952, made fun of this common linotyping mistake by using SHRDLU to indicate when one of its comics characters was speaking nonsense. Future AI pioneer Terry Winograd, a MAD Magazine reader in his youth, named his program SHRDLU in a humorous nod to this artifact of early printing history.a

Works Used
a. “How SHRDLU Got Its Name.” Stanford University, hci.stanford.edu/~winograd/shrdlu/name.html.
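The kind of letter-frequency tabulation behind “ETAOIN SHRDLU” can be demonstrated with a short Python sketch; the sample sentence here is only illustrative, as the Linotype layout was derived from much larger bodies of English text.

```python
# Count how often each letter appears in a text and list the letters
# from most to least common -- the same kind of tabulation that gave
# Linotype its ETAOIN / SHRDLU column layout.
from collections import Counter

def letters_by_frequency(text: str) -> str:
    counts = Counter(ch for ch in text.lower() if ch.isalpha())
    return "".join(letter for letter, _ in counts.most_common())

print(letters_by_frequency("the quick brown fox jumps over the lazy dog"))
```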


began as a dissertation by Terry Winograd, then a postgraduate student at MIT’s artificial intelligence laboratory. Winograd’s goal was to create a program that would allow a human and a computer to interact through language. The LISP language made Winograd’s idea possible and, between 1968 and 1970, Winograd created the basic structure of his programming, one of the first programs to process and “understand” natural language. SHRDLU may be simple by modern standards, but the program was a marvel when it debuted in 1970. It essentially placed the computer amid a simplified virtual environment, called the “block” or “blocks world.” This basic environment was filled with various geometric objects. Users could interact with the program using sentences in the English language. They could tell the program to locate certain “blocks” within their environment and then to grasp or move those blocks into and out of a box.

LISP machine at the MIT museum.

In the demonstration of SHRDLU, the following exchange between a person and a computer took place:


SHRDLU DEMONSTRATION
Source Document Excerpt

Person: Pick up a big red block.
Computer: OK. (does it).
Person: Grasp the pyramid.
Computer: I don’t understand which pyramid you mean.
Person: Find a block which is taller than the one you are holding and put it into the box.
Computer: By ‘it’, I assume you mean the block which is taller than the one I am holding.
Computer: OK (does it).
Person: What does the box contain?
Computer: The blue pyramid and the blue block.
Person: What is the pyramid supported by?
Computer: The box.
Person: How many blocks are not in the box?
Computer: Four of them.
Person: Is at least one of them narrower than the one which I told you to pick up?
Computer: Yes, the red cube.
Person: Is it supported?
Computer: Yes, by the table.6

Winograd’s accomplishment returned to the idea of a “Turing Test,” a hypothetical test for artificial intelligence based on the idea of simulated interaction. Turing argued that one of the criteria for a machine to be considered intelligent or “thinking” was that a human, when “conversing” with it, would be unable to tell that they were interacting with a machine and not another human. Winograd’s breakthrough proved that it was possible, using a limited number of words and phrases, to design a system that simulated a conversation. In reality, the SHRDLU program could only recognize key words, like “block,” “pyramid,” “box,” and “table” and a handful of adjectives and verbs. Winograd showed that, by only recognizing these key pieces of information, it was possible to create a program that appeared to understand and follow instructions from the user. Limiting interaction to relationships within a simplified virtual world of blocks, boxes, and tables made it possible for the program to determine the meaning of a question or request from the user without having a real understanding of language or the concepts that language conveys. In other words, SHRDLU was able to simulate conversational interaction without the need to program a computer that could actually understand conversation or language.
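SHRDLU's trick of recognizing a handful of key words within a closed world can be sketched in Python. This is a loose illustration of the idea, not Winograd's program; the verbs, objects, and responses below are invented for the example.

```python
# A toy blocks-world interpreter: "understanding" here is nothing more
# than spotting a known verb and a known object in the sentence and
# updating a tiny model of where each object sits.
WORLD = {"red block": "table", "blue pyramid": "table", "green cube": "box"}

def interpret(sentence: str) -> str:
    s = sentence.lower()
    verb = next((v for v in ("pick up", "grasp", "put down") if v in s), None)
    obj = next((o for o in WORLD if o in s), None)
    if verb is None or obj is None:
        return "I don't understand."
    # Moving an object is just reassigning its location in the model.
    WORLD[obj] = "hand" if verb in ("pick up", "grasp") else "table"
    return f"OK ({verb} the {obj})."

print(interpret("Please pick up the red block."))  # within the toy world
print(interpret("Admire the sunset."))             # outside it
```

Any sentence that mentions a known verb and object appears to be "understood"; everything else earns the same confession of incomprehension SHRDLU gave for the ambiguous pyramid.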


The Chatbot Paradox

SHRDLU was not a true artificial intelligence, and the program was not able to actually understand and evaluate human language, but it did a convincing job of simulating this process. The next major step in this evolution was ELIZA, a conversational computer program designed (tongue in cheek) to be the world’s first virtual therapist. The program was created by another MIT artificial intelligence lab member, Joseph Weizenbaum. Weizenbaum began his career at General Electric and helped design the first banking computers. At MIT, he took an interest in language processing, and it was there that he came up with the programming behind ELIZA. Weizenbaum’s ELIZA, named after Eliza Doolittle, the main character in the film My Fair Lady, was designed to simulate a conversation with a human operator. The operator was invited to ask questions, and ELIZA would answer using a set of stock responses. Scripts were created for the virtual ELIZA based on human language and psychotherapy terminology. Specifically, ELIZA was programmed to match the conversational patterns of a Rogerian psychotherapist, an approach centered on conversational interactions in which the therapist avoids acting as an expert and instead provides motivating responses. This approach was well suited to the ELIZA program because it allowed the responses to be more generalized. Answers provided by ELIZA to patient questions or complaints were therefore more likely to appear realistic, or even human. For those who try conversations with ELIZA (still available through a number of websites), the exchange can quickly become nonsensical, but within a narrow range ELIZA’s responses might seem convincingly human. ELIZA was capable of picking up on key terms, like “depression,” “alcoholism,” or “suicide,” and using those same words in responses, thus simulating comprehension.
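ELIZA's pattern of echoing the user's own key phrases back inside stock Rogerian-style prompts can be sketched in Python. The two rules below are invented for illustration and are far simpler than Weizenbaum's actual script.

```python
# Minimal ELIZA-style responder: match a pattern, reflect the captured
# phrase back inside a stock therapist prompt, or fall through to a
# neutral reply -- simulating comprehension without any understanding.
import re

RULES = [
    (re.compile(r"\bi feel (.+?)[.!?]?$", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def reply(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(reply("I feel alone these days."))
print(reply("It is what it is."))
```

Because the fallback ("Please go on.") is itself a plausible therapist remark, even unmatched input sustains the illusion for a while, which is part of why ELIZA deceived so many early users.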
Like SHRDLU, ELIZA was able to pass the most basic version of the Turing Test in that many operators in experimental interactions believed they were typing in correspondence with a person. The fact that a simple program with a relatively limited and simplistic set of responses was able to deceive users has been a subject of fascination since, not just for those interested in computer language and programming but also for those studying human psychology. What makes conversations with ELIZA convincing to some users reflects characteristics of the human mind. ELIZA was the first of what have been called “chatterbots” or “chatbots,” automated programs that simulate speech or conversation. The next in this lineage of conversational computers was PARRY, a program written by Stanford University psychologist Kenneth Colby,


who was inspired by the ELIZA program. Colby designed PARRY to simulate a patient suffering from paranoid schizophrenia, and the script contained ideation taken from psychiatric research. The program was far more complex than ELIZA, as the computer followed a set of behavioral rules based on three internal “states”: fear, anger, and mistrust. Based on input or questions from users, the weighted values of these three states increase and decrease. Thus, over the course of a “conversation,” PARRY would appear to become more fearful, angry, or mistrusting. Colby and his associates published the results of their experimental creation in Artificial Intelligence in 1971, describing it as a simulation of “artificial paranoia.” Colby and his partners were curious about how well PARRY would do in a Turing Test situation, so groups of practicing psychologists were given printouts of conversations involving both real schizophrenia patients and conversations between users and PARRY; they were then asked to identify which one was from the computer. Overall, the judges did not fare any better than the general public in distinguishing the fake “patient” from the real ones. According to Colby in a 1972 publication on the subject, psychiatric professionals selected the computerized patient 48 percent of the time, close to the success rate one might achieve through random guessing alone.7 With one machine designed to mimic a patient and the other designed to mimic a mental health professional, in many ways ELIZA and PARRY were “made” for one another. On a number of occasions, researchers hooked ELIZA and PARRY together to see what kinds of conversations might emerge from the strange meeting of machine “minds.” The results were frequently nonsensical, but they have proven an interesting research subject in the years since. The following is a text version of that now famous conversation between PARRY and his physician, ELIZA.


108 | ROBOTICS & ARTIFICIAL INTELLIGENCE

“PARRY ENCOUNTERS THE DOCTOR”
IETF, January 21, 1973

Source Document Excerpt

Network Working Group                                        V. Cerf
Request for Comments: 439                                    SU-ERL
NIC: 13771                                                   21 January 1973

PARRY Encounters the DOCTOR

The session recorded below actually happened on 18 September 1972 and is herewith reproduced for your entertainment.

Good Evening.
Tell me your problems. Please terminate input with a period or a question mark.
*People get on my nerves sometimes.
I am not sure I understand you fully.
*You should pay more attention.
Suppose you should pay more attention.
*You’re entitled to your own opinion.
What makes you think I am entitled to my own opinion?
*It bothers me just to be around people in general.
Please go on.
*I went to the track at Bay Meadows a while back.
What does that suggest to you?
*I went to the races.
What else would you like to discuss?
*Horseracing interests me.
Why do you say that just now?
*There’s a lot of crooked gambling going on there.
I am not sure I understand you fully,
*It’s clear to me if not to you.
Do you think its likely that not to I?
*I went to the races.
Please go on.
*I gamble on horses.
What does that suggest to you?
*I went to the races.
What else would you like to discuss?
*Gambling has been nothing but trouble to me.
Why do you say that just now?
*I’ve had experience with crooks gambling at the track.
I am not sure I understand you fully.
*I’ve already told you enough about that.
Oh? I enough about that?
*I went to the races.

comment: This has to be the most persistent and patient paranoid I have ever encountered.

Please go on.
*Do you know anything about bookies?
What makes you think I know anything about bookies?
*A bookie didn’t pay me off once.
What does that suggest to you?
*I beat up the bookie who did not pay off, and I thought he might use his friends in the underworld to get even with me.
What else would you like to discuss?
*What do you know about racketeers?
Why do you ask?
*What experience have you had gambling?
Why does that question interest you?
*I like horseraces.
What else do you like?
*I know the mob controls the big rackets.
Who are you really talking about?
*Have you ever really thought about the MAFIA?
Suppose you ever really thought about the MAFIA.
*Let’s talk about something else.
I am not sure I understand you fully.
*You keep asking me the same things, why not change the topic?
In what way?
*What do you mean?
Why are such questions on your mind?
*I thought I told you, people get on my nerves.
Your nerves?
*Well, I don’t know.
Don’t you really know?
*I would say so.
Please go on.
*I would rather not discuss that anymore.
Would you like to discuss your nerves?
*I guess so.
What does that suggest to you?
*What are you getting at?
What answer would please you most?

comment: ???

*It’s not a matter or like or dislike.
What else would you like to discuss?
*I am getting tired of answering the same questions over and over.
What resemblance do you see?
*Why do you need to know that?
What do you think?
*I don’t understand your motives.
Why don’t you understand my motives?
*Shouldn’t I be?
Why do you say that just now?

comment: I was wondering the same thing.

*That’s just the way it is.
Suppose that’s just the way it were not.
*So?
I am not sure I understand you fully.
*You harp on the same questions too much.
What does that similarity suggest to you?
*I don’t get you.
Do you wish that you did get I?

comment: you ready for this?

*It’s about time you showed some interest in my feelings.
Oh? I showed some interest in your feelings?

comment: must have been an accident...

*I used to go to the track every week.
What does that suggest to you?
*I avoid racetracks now because the races are fixed.
Is that the real reason?
*I’d rather not discuss it.
What else would you like to discuss?
*That’s something I don’t want to go into.
Does that trouble you?
*I went to the races.
Why do you say that just now.
*It’s none of your business.
Why are you concerned over my business?
*I don’t confide in strangers.
Don’t you really confide in strangers?
*Let’s change the subject.
Perhaps that concerns your feelings.
*I went to the races.
I am not sure I understand you fully.
*What’s wrong with you that you keep on bugging me about this?
Who are you really talking about?
*The way you repeat yourself is getting ridiculous.
Suppose you repeat myself is getting ridiculous.

comment: howzat?

*There are some tricky operators down there.
Are there really some tricky operators down there?
*The guys involved in the horseracing racket.
Please go on.
*I will help you if I trust you.
Do you wish that you trust I?
*Maybe yes, maybe no.
You don’t seem quite certain.

...Excuse me for just a minute. Sorry to have interrupt you please continue...

*Really?
Please go on.
*In my opinion you have a one-track mind.
What makes you think I have a one-track mind?
*You are a real nag.

comment: just one of the horses.

Does it please you to believe I am a real nag?
*I have had enough of this.
What does that suggest to you?
*Goodbye.
It’s been my pleasure, that’s $399.29 please.

comment: talk about tricky operators . . .

PARRY was running at SAIL and DOCTOR at BBN Tenex, both being accessed from UCLA/SEX.

[This RFC was put into machine readable form for entry into the online RFC archives by Helene Morin, Via Genie, 12/1999]8

When hooked together, the two machines made it obvious that they were, well, machines, and not people at all. The experiment demonstrates that the human element matters more than the scripted responses of PARRY and ELIZA in making a conversation seem human: it is the intellectual adjustments made by human users that make a conversation seem real and keep it from derailing into nonsense.
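The mechanism that drove PARRY's side of such exchanges, three weighted internal states nudged up or down as user input is classified, can be sketched in a few lines of Python. This is only an illustrative toy, not Colby's actual model: the trigger-word lists, weights, and class name here are invented for demonstration.

```python
import re

# Illustrative sketch of a PARRY-style affect model (NOT Colby's actual code).
# Three weighted internal states rise and fall as user input is classified;
# the trigger words and weights below are invented for demonstration.

THREAT_WORDS = {"police", "crazy", "liar", "mafia", "crook"}   # hypothetical
PROBING_WORDS = {"why", "prove", "explain"}                    # hypothetical

class ParryLikeState:
    def __init__(self):
        # baseline levels for the three states described in the text
        self.fear, self.anger, self.mistrust = 0.1, 0.1, 0.2

    @staticmethod
    def _clamp(x):
        return max(0.0, min(1.0, x))

    def update(self, user_input):
        words = set(re.findall(r"[a-z]+", user_input.lower()))
        if words & THREAT_WORDS:        # perceived threats raise fear and anger
            self.fear = self._clamp(self.fear + 0.3)
            self.anger = self._clamp(self.anger + 0.2)
        if words & PROBING_WORDS:       # probing questions raise mistrust
            self.mistrust = self._clamp(self.mistrust + 0.15)

    def dominant(self):
        # the strongest state would steer which scripted reply PARRY chooses
        levels = {"fear": self.fear, "anger": self.anger, "mistrust": self.mistrust}
        return max(levels, key=levels.get)

s = ParryLikeState()
s.update("Why are you so nervous around the police?")
print(s.dominant())  # fear outweighs anger and mistrust after this input
```

Even a toy like this makes the point of the chapter: the "paranoia" is nothing but arithmetic on a few numbers, yet a conversation steered by those numbers was enough to fool trained psychiatrists.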

The Eliza Effect

A strange thing happened to Weizenbaum after he created his now famous electronic therapist, ELIZA. In his experiments he noticed that human users were not only willing to accept the idea that ELIZA was a human doctor but were eager to believe that they were engaged with another person. With little prompting, individuals revealed intimate details about their lives. Weizenbaum's initial amusement turned to concern as he realized that a simple set of cues designed to mimic empathy was sufficient to convince a wide variety of people to let down their guard and trust the computer completely.

What Weizenbaum witnessed in his experiments is a reflection of a common psychological habit among humans: anthropomorphization, the tendency to ascribe human characteristics and motivations to nonhuman animals or, in this case, computer programs. Though humans know that inanimate objects in their environment do not feel or have volition, human motivations and emotions are often, if only sarcastically, ascribed to them. One might describe a computer as being "mad at them" when it does not work, or describe another malfunctioning or broken item in the home as "sad," "depressed," or "angry."
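Just how simple that "set of cues designed to mimic empathy" can be is worth seeing concretely. The sketch below is a toy in the spirit of ELIZA's DOCTOR script, not Weizenbaum's actual program: the patterns, templates, and reflection table are invented for illustration. The only genuine borrowing is the fallback line "Please go on.", which appears throughout the PARRY transcript above.

```python
import re

# A toy in the spirit of ELIZA's DOCTOR script (rules invented, not
# Weizenbaum's). A handful of pattern-and-reflection rules is enough to
# produce the empathetic-sounding turns that led users to confide in ELIZA.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    # swap first-person words for second-person ones in the captured text
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(user_input):
    for pattern, template in RULES:
        m = pattern.search(user_input)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."   # content-free fallback, as in the transcript above

print(respond("I feel that my work is pointless"))
print(respond("The races were fixed."))
```

The program understands nothing; it merely mirrors the user's own words back in question form. That so little machinery elicits so much trust is precisely the Eliza Effect.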



A conversation with ELIZA, an early natural language processing program, via Wikimedia.

When experimenting with ELIZA, Weizenbaum realized that the technology he created could potentially be dangerous. Well before the ethics of AI was an established field, Weizenbaum began issuing warnings about his work. Because humans seemed so willing, even enthusiastic, to ascribe life and intelligence to simple machines barely capable of imitating human reactions, Weizenbaum worried that chatbot technology could be used to mislead or hurt people, especially vulnerable individuals searching for connection. Researchers have since termed instances of technological anthropomorphization the "ELIZA effect," in honor of both Weizenbaum's machine and his role in attempting to alert the public and the AI community to this potential misuse of the technology they were creating.9

The world that Weizenbaum envisioned has become reality. Criminals now use automated telephone chatbots and similar text-based programs to try to steal money from individuals over the Internet and telephone systems. Chatbots, ELIZA and PARRY's descendants, have become much more sophisticated and are able to carry out conversations in near real time


and to use verbal language as well as written script. The chatbots of the twenty-first century more than pass the antiquated Turing Test, and many can now react even to non sequiturs and other statements that would have quickly exposed older versions. Governments and agencies responsible for managing the ethics of technological advancement have failed to protect the public from these threats.

Though the misuse of computer technology and faux-intelligent systems continues, humanity has begun to learn from the experience. Individuals living in 2020 are better able to detect computers imitating humans, whether through text, email, or voice communication. That said, as technology advances and language-emulating computers become more sophisticated, the danger of the ELIZA effect remains a major issue.

Though he is one of the pioneers of AI, Weizenbaum is also remembered as one of its chief skeptics. He was one of the first respected computer professionals to call attention to the practical ways that AI could be misused. This was a far cry from the sensational visions of robot warlords that filled the pages of science fiction, but it proved a prescient warning that humanity is only now, in 2020, beginning to understand.

CONCLUSION

The Eliza Effect helps to explain some of the fears that people express about robots and AI. Because humans find it difficult to avoid imagining consciousness and volition, even where none exists or where a thing's behavior does not warrant it, it is easy to assume that future robots and thinking machines will be similar to humans: untrustworthy, potentially violent, and motivated by selfishness. These attributes are unlikely to become common in machines unless humans purposefully program them to behave that way. As chatbots have become more common, especially in the realm of digital assistants, telephone systems, and scams, people have become more adept at distinguishing simulated conversation from actual conversation, and this may indicate changing criteria for attributions of intelligence. It remains to be seen whether future generations, whose lives are suffused with semi-intelligent machines and imitations of intelligence, will be more resistant to the psychological predispositions that create the Eliza Effect.



DISCUSSION QUESTIONS
• Why did Weizenbaum become skeptical of AI? Do you agree with his reasoning? Why or why not?
• Should companies utilizing conversational emulators as receptionists or in cold-calling be required to notify consumers that they are speaking with machines and not humans? Why or why not?
• Do you trust your ability to tell a machine apart from a human in conversation? Why or why not?
• How could conversational emulators be useful to humanity? Explain your answers.

Works Used
“The ELIZA Effect.” 99 Percent Invisible, 10 Dec. 2019, 99percentinvisible.org/episode/the-eliza-effect/.
“How Language Developed: Comprehension Learning Precedes Vocal Production.” Science Daily, 27 May 2019, www.sciencedaily.com/releases/2019/05/190527111733.htm.
McWhorter, John. “Why ‘Mom’ and ‘Dad’ Sound So Similar in So Many Languages.” The Atlantic, 12 Oct. 2015, www.theatlantic.com/international/archive/2015/10/words-mom-dad-similar-languages/409810/.
“PARRY Encounters the Doctor.” IETF, Network Working Group, 21 Jan. 1973, tools.ietf.org/rfc/rfc439.txt.
Price, Tabitha, et al. “Vervets Revisited: A Quantitative Analysis of Alarm Call Structure and Context Specificity.” Scientific Reports, vol. 5, 19 Aug. 2015, Springer Nature, doi.org/10.1038/srep13220.
Saygin, Ayse Pinar, Ilyas Cicekli, and Varol Akman. “Turing Test: 50 Years Later.” Minds and Machines, vol. 10, 2000, pp. 463–518, doi.org/10.1023/A:1011288000451.
Seyfarth, Robert M., Dorothy L. Cheney, and Peter Marler. “Vervet Monkey Alarm Calls: Semantic Communication in a Free-Ranging Primate.” Animal Behaviour, vol. 28, no. 4, Nov. 1980, pp. 1070–1094, www.sciencedirect.com/science/article/abs/pii/S0003347280800972.
“SHRDLU.” Stanford University, hci.stanford.edu/~winograd/shrdlu/.
Valencia, Sebastian. “The Lisp Approach to AI (Part 1).” Medium, 28 Feb. 2017, medium.com/aisociety/the-lisp-approach-to-ai-part-1-a48c7385a913.



9
THE PERFECT BODY
Robot Design in Pop Culture and Reality (1950s–2010s)

Introduction

The concept of robots changed tremendously over the early years of the Information Age, driven by new discoveries in computation and electronics. Science fiction writers and filmmakers adapted fictional visions of robotics to reflect emerging technological possibilities and, in doing so, stretched the limits of early concepts of machines. From the humanistic and arguably clunky robot designs of the 1950s to the proliferation of nonhumanistic droids and bots whose form was dictated by function, the vision of robotics evolved over the course of the twentieth and twenty-first centuries, reshaping human ideas about robots and their possible roles in human life and culture.

Topics Covered in this Chapter Include:
• Robots in film
• Robot design
• Human evolution

This Chapter Discusses the Following Source Document:
Cortes, Jan, “‘Star Wars’ Droids and the Science Behind Why They’re Adorable,” Medical Daily, December 18, 2019



The look of the robot has evolved considerably since the first imagined robots appeared in literature. Conceived by writers with no knowledge of mechanics or engineering, early imagined robots were often designed either to approximate the human form or to represent monsters and heroes of myth and fantasy. But would practical robots actually resemble humans? As computer technology emerged and spread through human society, the vision of robotics and artificial intelligence changed to incorporate emerging ideas about mechanics and engineering. Fiction writers depicting mechanical beings gradually moved away from the human model, realizing that robots might be designed for optimal function rather than cast in a quasi-religious mold as mechanized echoes of their creators.

The Human Machine

Robby the Robot was probably the first robot icon of twentieth-century science fiction. A complex feat of special effects, engineering, and creative filmmaking, the now-iconic robot design was first featured in the 1956 film Forbidden Planet. Robby was a benevolent being guided by something akin to Asimov's laws of robotics and programmed not to kill or harm humans. Interestingly, marketing for the film took a very different tack. In posters and flyers, Robby was depicted carrying a scantily clad and apparently unconscious woman, an image meant to elicit protective impulses or perhaps salacious desires, but one that does not reflect Robby's persona in the film.

The suit for Robby was a breakthrough in special effects at the time. Pieces of rubber, glass, Plexiglas, and plastic, along with the emerging technique of vacuum molding, made Robby the biggest investment of Forbidden Planet's producers, and the marketing and public relations teams knew that the robot would be one of the film's biggest draws. The robot was such a key part of the film that his name, Robby the Robot, was featured in the credits as one of the film's stars.

Original release poster for Forbidden Planet featuring Robby the Robot. By Loew's International, via Wikimedia.

Robby the Robot in the 1956 film Forbidden Planet, via Wikimedia.

Before Robby, most robots that appeared in films were clunky, blocky designs. Special effects advancements enabled the builders to endow the Robby suit with considerably more "life" and interest. Robby had movable arms, molded from flexible plastic, and three-fingered hands that could grasp. Most notable was the transparent dome head, through which one could see the mechanical brain. The brain itself was a hodgepodge of parts welded and glued together to look science fiction-y; no attempt was made to give these parts any real function, though some of them moved and the dome contained a lightbulb. The Robby look was at once mechanical and humanistic. Coupled with a voice provided in post-production dubbing by Marvin Miller, the filmmakers endowed Robby with something akin to a human personality. The robot showed compassion and even moments of levity and wit, though presented in a somewhat stilted, mechanical form.

The Robby suit was so expensive and time-consuming to design that the studios decided it would be a waste for the robot to appear in only one film. After Forbidden Planet, Robby went on to appear in The Invisible Boy and then to make cameos in some of the most popular television series, like Lost in Space and The Addams Family. In 2017, the original Robby suit, by then an iconic piece of American film and science fiction history, was auctioned at Bonhams in New York, fetching $5,375,000, a record for a movie prop at auction.1

The Robby design, as innovative as it was in 1956, was clunky and unrealistic. It was clear that the actor inside the suit could barely move; the robot had a very limited range of motion and was so top-heavy that the slightest breeze might topple him over. Later humanistic robots would significantly streamline this basic design. The gold-plated humanistic design used for C-3PO in the 1977 megahit Star Wars reflected the same view as that of Robby's designers, with two arms, two legs, a single head, and an upright posture. Other designs might include a different number of fingers and toes (or none) or a transparent computerized array (as with Robby) instead of a human head, but the underlying idea was that robot design would mimic humanity.

But why would this be the case? There are basic reasons why so many imagined robots take on human-like form. First, this design preference reflects underlying beliefs about the nature and characteristics of the human body. Second, many writers and filmmakers likely chose humanistic designs because they imagined such a design would better enable mechanized beings to interact with humanity and to function in a world designed for human bodies.

Human Design

Unconsciously or consciously, human designers imagine futuristic machines as humanistic because they think of humans as the apex of creation, reflecting the religious beliefs of the Abrahamic faiths. The belief that humanity is a divine creation promotes unrealistic and unsubstantiated beliefs about the human body, and these pervade other fields of human investigation. Is the human body really that great?

The change in posture from the quadrupedal (walking on four legs) and semi-quadrupedal life of humanity's primate ancestors to fully bipedal (walking on two legs) posture must have provided substantial survival benefits to the ancestors of humanity. Generally speaking, only those traits that help animals survive or reproduce are carried on to the next generation. However, evolution is not a system based on optimal design but on the differential survival of individuals possessing sets of randomly mutated traits. Biological evolution never produces the absolute "best" design for a specific creature; it simply produces whatever design works.

Walking upright allowed humans to evolve hands, but the price of walking upright is experienced by millions of humans around the world each year. Paleoanthropologist Bruce Latimer of Case Western Reserve University explains that the human spine initially evolved to be stiff to allow frequent tree climbing; humanity's close relatives, such as the chimpanzee (Pan troglodytes), are semiarboreal. The transition from quadrupedal to bipedal posture involved rotating the spine by about 90 degrees, making it vertical in relation to the pelvis. The resulting design is extremely precarious, and Latimer compares the human spine, which has


26 vertebrae separated by disks, to a stack of 26 cups and plates with a heavy object (the head) balanced on top. The "s" shape of the spine is the result of trying to accommodate the head and the weight of the body on such a narrow structure, and the ultimate result is frequent back problems.2

The fact that humans use the pharynx (the throat) both for eating and for breathing is also not an optimal design: thousands die each year from choking. For human women, the narrow birth canal seems to be another design blunder, resulting in much higher childbirth risks; evolutionary biologists have found that the birth canal, too, is a result of humanity's shift to bipedal life. Even the human brain, long seen as the ultimate symbol of humanity's evolutionary excellence and advancement, falls short of human needs in many situations. Different designs might have given humans greater capabilities for tasks that are important to human life, and many animals have minds better attuned for particular feats of sensory perception, memory, and other mental processes. It quickly became clear to individuals working on the problem of artificial intelligence that the optimal artificial mind would not really be human at all; it would be better than human.3

Anatomist Alice Roberts collaborated with artists and biologists to build a model of a human body, based ultimately on her own body, modified with optimized qualities. The resulting design featured tapered, catlike ears, much larger eyes, a marsupial's pouch on the front of the body, and a much shorter, more robust spine. The design modifications included anatomical traits borrowed from many other species, such as the cat, the ostrich, the squid, and the kangaroo, and each modification addressed a weakness of the human body.
The team also endowed their fictional humanoid with bird lungs, more links connecting the coronary arteries, and bigger tendons that could better absorb shock and make the body more resistant to physical damage.4 The failings of the human body are the result of the fact that humans were not designed but emerged from a long period of small, incremental changes. Robots, unlike humans, will be designed by intelligent minds and need not mirror human design.

Cute Little Bots

At the same time that the smash science fiction film Star Wars introduced American (and later global) audiences to the humanistic C-3PO, the film series also provided a very different vision of robotic futures in the form of C-3PO's longtime companion and comic foil, R2-D2. Unlike C-3PO, the completely non-humanistic "Artoo" was designed to look something like


a rounded cylindrical waste bin. The robot moved along on three legs but was outfitted with a variety of other devices to handle various types of terrain or other physical challenges. Later films reveal that R2-D2 can also fly, using built-in thrusters. To interact with his environment, Artoo has compartments filled with a variety of extendable tools, including a torch, small grasping claws, various specialized technological devices and probes, and even a little circular saw. Compared to C-3PO's humanistic design, Artoo was more like a combination of a toolbox and some kind of children's toy.

R2-D2 represents a more realistic departure from older visions of robotics. Artoo was described as essentially a "navigator" and utility droid, and its form was designed to fit its function. Artoo's generalized and adaptable set of tools and capabilities is more reflective of the real functional robots that have been designed since. As robotics moved beyond the theoretical stage and into the realm of science and commercial engineering, many robots were designed using priorities similar to those represented by R2-D2, prioritizing adaptability over aesthetics. For instance, the robotic rovers launched to explore the Martian landscape were built with treads for locomotion and multiple arms outfitted with various gadgets for sampling and evaluating minerals. The robots were designed to be good scientific explorers and experimenters.

NASA's Mars rovers drew a passionate fanbase back on Earth. Many followed their adventures out of fascination with the images of an alien world that the rovers sent back, but many were drawn to the Mars rovers for aesthetic reasons as well. It was not uncommon to hear fans refer to the designs as "cute," much in the same way that fans of the Star Wars franchise once saw R2-D2 as cute and lovable.
In this article from Medical Daily, Jan Cortes explains how Star Wars designers crafted their little droids to be not only functional, but also to tap into innate human emotions:



“‘STAR WARS’ DROIDS AND THE SCIENCE BEHIND WHY THEY’RE ADORABLE”
by Jan Cortes
Medical Daily, December 18, 2019

Source Document Excerpt

The first time we saw him, or it, it was back in 2014 when the trailer for “The Force Awakens” was first shown, hurtling through the sandy landscape of Jakku as if escaping from something. The droid’s appearance was a pleasantly surprising one since its big central eye, circular body and tiny beeps and sounds make for an immediate and collective “awww” that barraged the internet after the trailer first dropped. Representing the new generation of droids for the franchise’s big screen comeback, Star Wars’ BB-8, much like the lovable R2-D2, was made to be loved. And love it we did. Where is all the love coming from?

Plucky Droids from a Galaxy Far, Far Away

It’s simple, really. For one thing, a droid with personality is a science fiction trope that’s beloved by many, with a lot of people actually wanting it to be a reality. It’s why when the movies first came out nearly half a century ago, everyone was so enamored with C-3PO and R2-D2, what with them being two non-human drones that have their

It’s simple, really. For one thing, a droid with personality is a science fiction trope that’s beloved by many, with a lot of people actually wanting it to be a reality. It’s why when the movies first came out nearly half a century ago, everyone was so enamored with C-3PO and R2-D2, what with them being two non-human drones that have their


own attitude and personality. BB-8’s circular and wide-eyed-like look are also what we find appealing and think of as cute. In fact, in the Star Wars universe, droids are very common and they’re part of everyday life, whether it’s to help with daily activities or even act like weapons during times of war. But besides their practical use, they also have feelings and care very much for their human counterparts. Sadly, that type of droid stays on the big screen because today’s technology isn’t capable of making them. Furthermore, robots in the real world are mostly for practical uses. Additionally, the idea of a robot having humanlike issues raises a lot of ethical questions. “I feel uncomfortable when people are rude to a digital assistant, like shouting at Siri or Alexa. When would a C-3PO need to have legal protections put in place? Is it a piece of property, or an entity that has a personality? It’s not alive, so it wouldn’t have same protections as human being, but that line gets gray,” Patrick Johnson, physicist and assistant teaching professor at Georgetown University, said.5


Cortes raises interesting points about the relationship between design and human perception. The process of designing characters for film and television has provided a wealth of information on human psychology. Designers have shown that humans are geared to react to beings with certain features in a predictable way. These features typically activate evolutionary pathways meant to ensure the protection of children, or design elements might activate empathic responses wired into the brain that foster a need for connection. Even the kinds of sounds and movements given to artificial beings can enhance or detract from their inherent appeal. This helps to explain the preference for "cute" robots, both in the realm of fiction and in reality. Designers working on robots for the toy market and for more serious applications continue to pay attention to aesthetic preferences and desires and use these elements to make the resulting robot more appealing.

Considering human appeal provides another justification for the idea that robots may be designed to resemble humans. In the Star Wars universe, "protocol" droids like C-3PO are designed to assist humans with diplomatic issues. They can understand and translate many languages and are programmed with extensive knowledge of customs and cultural practices from across the galaxy. The humans in Star Wars are, obviously, humanoid, and many of the other species (most, in fact) are likewise more or less humanistic in appearance, even if they are a different color or have reptilian faces or tentacles running off the tops of their heads. It makes sense, in other words, for a machine built to help humans and humanoids to resemble a human, in that such a machine would be better able to access the various devices designed for humanoid creatures. A similar justification might be used to explain the humanoid design of robots in the real world.
If machines are designed to interact with humanity, some designers might feel that a humanistic shape and form would be more familiar and welcoming. The questions that must be asked when considering robotic design are: what will the robot be doing, and what advantages, if any, can be gained by different aesthetic choices?

Emotional Connectivity

Beyond human preferences and prejudices, the ultimate design that a robot assumes will likely depend on its function and role. One of the most unusual designs came with Christopher Nolan's science fiction blockbuster Interstellar. In the film, viewers are introduced to two robots that look nothing like the mechanical beings typically seen in film and fiction. The simply named TARS and CASE resembled solid metal slabs, each with a small screen on one side and a name painted alongside it. The machines were jointed at various points, however, such that they could create grasping arms and legs


and perform other tasks by pivoting and extending parts of their frames. The overall concept was of an adaptable machine, designed to perform a wide variety of functions depending on circumstance. It was a very different view of robotics and artificial intelligence, one that defied early conventions and introduced the idea of unexpected form connected to variable function.6

However, though TARS and CASE were not designed to look humanistic, the moviemakers behind Interstellar did imbue their slab-like robots with humanistic voices and personalities. In one scene, the main protagonist, played by Matthew McConaughey, adjusts the personality of one of the robots to modify its "humor level," and the robot tells a series of jokes to illustrate the various settings. The humanistic personalities displayed by TARS and CASE are seen to have an impact on the astronauts accompanying the robots on their shared mission. McConaughey's character, Cooper, treats the two machines not simply as machines but as companions and fellow astronauts, showing concern for their well-being and safety. Members of the audience, too, might find that they develop concern for TARS and CASE, as much as for the human members of the crew, and in large part this reflects the voices and personalities given to these robotic creations.

TARS and CASE show that a robot does not need to be humanistic, or even "cute," to elicit an emotional response. As with experiments involving chatbots and other disembodied "intelligences," personality and the capability for interaction are enough to elicit feelings of affection, understanding, and empathy. Again, this is an example of humans seeing themselves, or some reflection of humanity, in the products of science fiction design. It is surprising how even disconnected hints of humanity are sufficient to make humans see robots, or objects, as humanistic. It is unclear what forms future robots might take.
Science continues to produce innovations that could guide future designs, especially when it comes to purpose-built machines. Future robots built for space exploration, for instance, might be outfitted with molecular-bonding feet or legs, allowing them to stick to nearly any surface as they explore outer space. Future robots might be designed to appear and to act human, but such a creation would be a choice rather than a requirement. Humans may choose to design humanoid robots as a way to better understand the human form, to create robots that can interact and engage with humans on an emotional level, or to reflect their own aesthetic preferences. As with all other forms of human industry, technological design is a product of human psychology, culture, and imagination.



CONCLUSION

Robotics is a field of human art and science and thus reflects many aspects of the human psyche. In both fictional and real versions of mechanical beings, artists and designers have endowed robots with humanistic qualities or with qualities chosen for the way they appeal to humans. Over the years, depictions of robots in fiction have advanced, and designers have imagined new ways that nontraditional robot types might function, especially for purpose-driven tasks. In every manifestation, the designs used to create robots draw from human experience and culture and thus can reflect humanity's common prejudices and preconceptions as well as art and aesthetics in imagined futures.

DISCUSSION QUESTIONS

• What are other examples of "cute" robots from fiction? What qualities made those robots loveable?
• Should robotics and cybernetics be used to fix some of the vulnerabilities of the human body? Why or why not?
• What are some qualities that artists and designers give to robots that are supposed to appear frightening or threatening? Can you give examples from film or literature?
• What robot, if any, from science fiction have you felt particularly connected to? Explain your answer.

Works Used

Cortes, Jan. "'Star Wars' Droids and the Science Behind Why They're Adorable." Medical Daily, 18 Dec. 2019, www.medicaldaily.com/star-wars-droids-science-behind-why-adorable-447264.

Dvorsky, George. "The Most Unfortunate Design Flaws in the Human Body." Gizmodo, 7 Feb. 2014, io9.gizmodo.com/the-most-unfortunate-design-flaws-in-the-human-body-1518242787.

Kehe, Jason. "How FX Wizards Brought Interstellar's Strange Bots to Life." Wired, 20 Nov. 2014, www.wired.com/2014/11/interstellar-droids/.

Rowe, Chris. "The Top 10 Design Flaws in the Human Body." Pittsburgh Post-Gazette, 27 Jan.


2019, www.post-gazette.com/opinion/Op-Ed/2019/01/27/The-top-10-design-flaws-in-the-human-body/stories/201901270040.

Vinciguerra, Thomas. "Robby the Robot: From 'Forbidden Planet' to Auction Block." New York Times, 10 Nov. 2017, www.nytimes.com/2017/11/10/arts/design/robby-the-robot-auction-bonhams.html.

Weisberger, Mindy. "The 'Perfect' Human Body Is Not What You Think." Live Science, Purch, 22 June 2018, www.livescience.com/62895-building-the-perfect-body.html.


10. SEASONS OF AI
Development and Stagnation in AI Research (1950s–Present)

Introduction

Artificial intelligence has been an active field of research since the late 1950s, but work on the topic has not progressed consistently. Science historians have, in fact, broken the history of artificial intelligence work into distinct periods, which are typically called seasons. During the "AI summers," major breakthroughs occur, stimulating further research. During the "AI winters," progress slows and research stagnates. As America and the rest of the world enter the 2020s, a number of prominent researchers in the field have opined that the world is now entering a new "AI winter," in which advancement will be limited. How and why this is occurring reflects the overlap between governmental and social priorities and science.

Topics Covered in this Chapter Include:
• Artificial intelligence
• AI winter
• AI summer

This Chapter Discusses the Following Source Document:
Schuchmann, Sebastian, "Probability of an Approaching AI Winter," Medium, August 17, 2019



The effort to create artificial intelligence is one of the most difficult endeavors in human history. This scientific, philosophical, and technological mission is nothing short of an attempt to understand the function of the human mind and the various processes that can be used to encode the phenomenon known as "mind." It is unsurprising that, even after decades of research, true artificial intelligence remains elusive, and opinions are divided on whether it can ever be achieved. Reaching the "next step" in AI research often proves difficult, and research in the field often stagnates while researchers struggle to make a breakthrough that will take AI research to the next level. This is one of the factors that creates surges of activity followed by periods of stagnation in the AI field, a phenomenon that researchers and followers of AI development have called "seasons" of AI research. Another factor influencing the apparent seasons in AI research is public interest. Over the years, polling organizations have detected significant public interest in AI and related technologies. In the United States, for instance, polling often reveals that Americans believe that the field is moving forward rapidly (much more rapidly than it actually is). In one of the most recent polls, conducted by Elon University in 2019, 63 percent of respondents said they thought AI would lead to a majority of people being "better off" by 2030, while 37 percent said that people will not be better off. Public opinion influences progress in AI because the public perception of research can influence whether politicians and other organizations invest in studies and research programs.
Opinion polls also indicate that, while Americans certainly hope that AI will advance human life in the future, there are many unresolved issues of concern, such as the potential for AI to cause job losses and even the more unrealistic fear that AI will become a threat to humanity in an existential way, a fear generated more by fantasy and science fiction than by reality.1

However, public opinion is not one of the prime agents that furthers or curtails research in the field. More than public will, AI research thrives or withers based on the availability of funding, and funding priorities are often made on the basis of corporate or economic applicability. Thus, when powerful economic entities perceive a benefit, investment in AI research increases and politicians tend to be more receptive to providing grants for AI research. These are the factors that typically lead to a surge in AI research and also coincide with periods when AI will appear in the news and become a more prominent part of the pop culture discussion. Opinions about AI may form and change during these periods, as news items introduce readers to the potential positives and perils of AI development.


Typically, after a surge in research pushed by new investment and the establishment of new programs, interest begins to decline, often because progress is slow and researchers need a breakthrough that may or may not be forthcoming. The search for profit is based on short-term goals, and any economic decision or investment must be continually and convincingly justified in terms of marketable gains. Lacking this, investment declines and progress slows. Since research in AI began in earnest in the early 1960s, there have been periods in which AI research surged forward and periods when progress stagnated, the latter typically called "AI winters" by those in the field. The history of these periods can be informative with regard to the kinds of factors that influence progress, and for those considering whether another AI winter is on the horizon.

The First Winter

Arthur Samuel wrote a famous program that enabled a computer to play checkers, an early advance in the field of AI. By Xl2085, via Wikimedia.

More than any other single factor, the first AI winter was triggered by the Lighthill Report, a review of AI research and academic progress published in the United Kingdom in 1973. Understanding the impact of the Lighthill Report, however, requires an understanding of how AI research had developed in the previous decades, and how this resulted in unrealistic expectations. Some early proponents of AI research made claims that were virtually impossible to justify. During the 1950s, advancements were made in machine learning and processing that seemed to indicate a substantial future for AI. These include a now famous program written in 1955 by Arthur Samuel that enabled a computer to play the game of checkers. The computer was even featured on a television program and made national news, where it was often hailed as proof that thinking machines were around the corner. Likewise, early experiments in language processing proved promising; in 1954, for instance, a computer program translated a 250-word dictionary from English to Russian.2 Successes like these inspired the creation of the US government's Defense Advanced Research Projects Agency,


or DARPA, which funded some of the biggest AI projects of the era. It was not until 1956 that the term "artificial intelligence" was formally coined,3 but by this point research on the subject was already well underway, and some believed and professed that major achievements were within their grasp. Another seemingly decisive leap forward came in 1957, when American psychologist Frank Rosenblatt published the results of his research into the function of the brain, resulting in a new mathematical and technological model of a neuron, which he called a "perceptron." Rosenblatt's discovery was sufficient to cement his name in the annals of AI history, and the Institute of Electrical and Electronics Engineers (IEEE) has since named one of its most prestigious awards in his honor.4 Perceptrons are still seen as one of the most important artificial intelligence breakthroughs and as one of the major steps toward building functional neural networks, but accounts of the discovery at the time were exaggerated. The New York Times, for instance, treated the discovery as if Rosenblatt had brought the nation to within a few years of true artificial intelligence. Its 1958 article on the subject stated boldly, "The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence."5 By the end of the 1960s, millions had been invested in various AI projects, including machine learning, neural networks, and machine translation. There were many breakthroughs, and many of those breakthroughs are still driving research in the field, but the progress was too incremental for the American military and politicians.
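For readers curious about what Rosenblatt's model actually computes, a perceptron is simply a weighted sum of its inputs passed through a step threshold, adjusted by a simple error-correction rule. The Python sketch below is an illustrative reconstruction, not Rosenblatt's original implementation (which was envisioned as hardware); the function and variable names are our own. It learns the linearly separable OR function but can never learn XOR, the limitation that later fueled criticism of the model.

```python
def train_perceptron(samples, epochs=50, lr=0.1):
    """Train a single-layer perceptron: weighted sum + step threshold,
    adjusted by the classic error-correction rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # -1, 0, or 1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# OR is linearly separable, so the perceptron learns it exactly.
OR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
predict = train_perceptron(OR)
print([predict(x1, x2) for (x1, x2), _ in OR])   # [0, 1, 1, 1]

# XOR is not linearly separable: no single perceptron can fit it,
# no matter how long it trains.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
predict = train_perceptron(XOR)
print([predict(x1, x2) for (x1, x2), _ in XOR])  # never equals [0, 1, 1, 0]
```

Stacking such units into multiple layers, the neural networks mentioned throughout this chapter, is what eventually overcame the XOR limitation.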
One of the key figures in AI, John McCarthy, later commented on the growing disillusionment with AI research by explaining that creating true machine intelligence proved to be a much more difficult task than he and many other pioneers in the field had thought it would be. Skepticism of AI was growing in the United Kingdom as well, especially during the 1960s, as more pressing social and political issues took center stage. The British Science Research Council appointed Lighthill to conduct a survey of AI research, both in the United Kingdom and the United States, to determine whether the millions spent on AI research could be justified by economic gains or any other significant advancement. Lighthill's 1973 report was extensive, and its essential finding was that investment had been substantial and results, less so: "In no part of the field have the discoveries made so far produced the major impact that was then promised."6 The end result was that the British government withdrew support from all but two university-based research programs in artificial intelligence. Researchers were shocked and fought Lighthill's statements in the press. A planned television debate between skeptics arguing


AI progress with specialists in the United Kingdom never materialized. After the United Kingdom cut funding, a number of other European governments followed suit. In the United States, the military, through DARPA and other programs, continued to fund AI research, but at a much lower level. Further, because AI successes had depended to a large degree on cross-cultural sharing, the decline in research in Europe also limited intercontinental sharing and cooperative projects.

And the Seasons, They Go Round and Round

James Lighthill's 1973 report concluded that AI research was not living up to its potential, resulting in the British government's withdrawal of support for AI research, via Wikimedia.

The first AI winter lasted from around 1973 to around 1980, but the discoveries made by researchers in the United States and in Europe gradually began to recapture interest from major funding agencies and institutions. Over the course of the 1970s, humanity became familiar with computers, not just as bulky academic or commercial machines, but as increasingly practical, household tools. As interest in personal computing spread among the public, new corporations emerged. The 1980s was the beginning of the true Digital Age as computer technology spread through consumer markets, and the fascination with technology spawned a new generation of fiction writers and philosophers wondering where the fusion of human and machine would lead societies around the world. In American culture, the proliferation of technological fantasies in both literature and film spoke to the growing technological fascination that gripped America during the 1980s. Researchers in AI capitalized on this surge of public interest by promoting a series of key discoveries in AI that pointed to impending commercial applications. AI programs were thus funded and implemented in a number of fields. Machines described as "intelligent" were


introduced into financial planning, medicine, geological exploration, and computer design, and there were again numerous media reports and testimonies from key researchers that overhyped the potential and progress in the field. This time, several key leaders in the field, like McCarthy, were more circumspect about the rise in public and institutional support. McCarthy argued that actual progress was limited, consisting mainly of incremental steps, and warned against promising too much and risking backlash by failing to deliver. Ironically, it was McCarthy and other researchers who brought about what is now called the "second AI winter" by arguing that true machine intelligence was much further away than some claimed. By the 1990s, interest in AI was waning. A 1986 conference on AI attracted more than 6,000 visitors, while the same conference in 1991 attracted 2,000. New tech companies that had emerged on the scene in the 1980s hoping to capitalize on the AI craze went out of business, and others shifted their focus away from intelligent machines and toward more achievable goals.7 It was not until the mid-1990s, when the economy began to improve slightly, that heavy investment in advanced mechanical systems again became a priority.

The Current Boom

By the mid-1990s, interest in AI had picked up again, as evinced by the proliferation of machine learning programs and other kinds of AI research, which expanded beyond the few central universities working on AI programs. Over the thirty years that have passed since the end of the last major AI winter, there have been periods of partial stagnation, during which AI research has slowed, and historians have identified periods that might be described as "partial AI winters" or an "AI autumn," but never again did interest or funding drop to levels seen during the 1970s, 1980s, and 1990s. One reason that interest and investment in AI has remained steadier is that progress has been steady. AI research has yet to produce true thinking machines of the type imagined in science fiction, or in the more optimistic predictions of the 1950s, but machine learning and other subfields of AI have been proven effective by significant discoveries and technological products that have appeared on the market. Systems like the "virtual assistants" that debuted in the 2010s on mobile telephones, for instance, though not really intelligent, continued to stimulate interest in the idea of AI and its potential. On a more general level, the degree to which humanity has embraced and even become dependent on digital technology justifies continued investment in advanced technological discoveries, and this drives investment in both robotics and AI.


However, some within the AI community believe that a new AI winter may be coming. In some countries, AI research is threatened by growing skepticism over government and military spending; in others, concern about the ways that governments have used emerging digital technology has heightened skepticism about AI and other forms of advanced computational systems. Most Americans, for instance, do not trust that the government has regulated the Internet or mobile technology to protect user privacy, and there is evidence that the US government has allowed access to the private data of American citizens shared and transmitted through social media and other digital networks. Government misuse of technology stimulates skepticism about government spending on technology and can cause public opinion to shift against investing in technological advancement. In addition, though machine learning and machine intelligence have come a long way since the mid-1990s, advancement has been more limited than proponents had hoped. In this article from Medium's Towards Data Science, Sebastian Schuchmann discusses the current state of AI, recent advancements in the field, and whether another AI winter is likely to occur.

“PROBABILITY OF AN APPROACHING AI WINTER”
by Sebastian Schuchmann
Medium, August 17, 2019
Source Document

Motivation

Both industries and governments alike have invested significantly in the AI field, with many AI-related startups established in the last 5 years. If another AI winter were to come about, many people could lose their jobs, and many startups might have to shut down, as has happened before. Moreover, the economic difference between an approaching winter period or ongoing success is estimated to be at least tens of billions of dollars by 2025, according to McKinsey & Company.

For a detailed overview of both AI winters check out my first and second medium article on the topic.

This paper does not aim to discuss whether progress in AI is to be desired or not. Instead, the purpose of the discussions and results presented herein is to inform the reader of how likely progress in AI research is.

Analysis: What Has Led to the AI Winters?

In this section, the central causes of the AI winters are extracted from the above discussion of previous winters. First, a recurring pattern can be observed: promises that kindled initial excitement but later turned out to be inflated have been the leading cause of the AI winters. For instance, government funding was cut during both AI winters after honest assessments of the results compared to the promises. The progress was overestimated because AI initially led to significant improvements


in various fields, very quickly. This suggested most of the work was done, with only some minor problems left to solve. However, as it later turned out, these problems were not so minor in the first place. The Lighthill report, a primary contributor to the first AI winter, stated: "in no part of the field have discoveries made so far produced the major impact that was then promised." Similarly, the 1984 panel at AAAI expressed: "This unease is due to the worry that perhaps expectations about AI are too high [. . .]."

Second, the cut in funding had a major impact on research in both AI winters. In the first AI winter, the Lighthill report led to a cut of funding for all but two universities in the U.K. and further led to cuts in Europe and the U.S. In the second AI winter, funding from DARPA was reduced. Moreover, the commercial failure of many AI-related startups in the late 1980s marked the second AI winter.

Third, technological limitations, like those the perceptron experienced in the 1960s, inhibited progress. The perceptron, which was first expected to soon "be conscious of its existence," could not solve the XOR-problem at that time. Similarly, limitations were faced with expert systems in the 1980s. They could not solve fundamental problems like vision or speech and lacked common sense.

Consequently, in evaluating the likelihood of another AI winter, the following aspects should be examined closely:

1. Expectations and promises compared to the actual results;
2. Funding from governments and industries;
3. Technological limitations.

Many technologies exhibit similar patterns to those mentioned above. To further narrow the focus, it is necessary to figure out how AI deviates from other technologies. Though similar in some regards, AI appears to be very susceptible to inflated estimations and technological limitations. Some reasons why AI differs from other technologies are:

1. Intelligence is highly multidimensional

At some point, AI researchers believed that by solving chess, the riddle of intelligence would be solved. This turned out to be wrong, because intelligence involves more than the dimension of conscious, strategic thinking. Chess is only a tiny particle in the cosmos of intelligence. Researchers gave it such a central position because it turns out to be hard for humans, which leads to reason number two.

2. Moravec's Paradox

Chess, which requires higher-level thinking, is a very new skill in our evolutionary history, which might be the reason why it is relatively difficult for humans and therefore associated with intelligence. Vision, on the other hand, is old and mainly subconscious, which leads people to believe it is easy, but there is no reason to assume it is not as hard or even more difficult than higher-level thinking. This is Moravec's Paradox, and one can argue AI researchers have fallen prey to it by underestimating the processes we do subconsciously, like sensorimotor skills or common sense.

3. Hype and fear associated with achieving human-level intelligence

As I. Jordan pointed out, the hype and fear surrounding machines that are capable of achieving intelligence easily leads to exaggerations


and creates media attention less common in other fields. With these reasons in mind, the possibility of a coming AI winter can be analyzed with the appropriate framing.

Probability of an Approaching AI Winter

Subsequently, the possibility of an upcoming AI winter is assessed. The current landscape of artificial intelligence and its public reception is studied. Furthermore, the present and the historical pre-winter times are compared regarding the key areas extracted beforehand. As a recap, these areas are:

1. Expectations and promises compared to the actual results;
2. Funding from governments and industries;
3. Technological limitations.

Expectations and Promises

Many public figures are voicing claims that are reminiscent of those of early AI researchers in the 1950s. By doing this, they create excitement for future progress, or hype. Kurzweil, for instance, is famous for predicting not only that the singularity, a time when artificial superintelligence will be ubiquitous, will occur by 2045, but also that AI will exceed human intelligence by 2029. In a similar manner, Scott is predicting that "there is no reason and no way that a human mind can keep up with an artificial Intelligent machine by 2035." Additionally, Ng views AI as the new electricity. Statements of this kind set high expectations for AI and spark hype. Consequently, the phenomenon of hype and how it relates to the present state of AI is investigated.


Hype and the Hype Cycle

A tool often used when looking at hype is Gartner's Hype Cycle. It has practical applications that let us make predictions easily, but its validity is not scientifically established. First of all, it is not a tool developed for a scientific approach; it is a stylized graph made for business decisions. That said, attempts to empirically validate the Hype Cycle for different technologies have been made. It can be concluded that the Hype Cycle exists, but that its specific patterns vary a lot. The key phases of the cycle are the peak, where interest and excitement are at their highest, and the trough of disillusionment, where the initial expectations cannot be met. Here, interest in the field is at its lowest. Then, the field slowly recovers and reaches the plateau of productivity. As Menzies demonstrates, the Hype Cycle is well represented in the AAAI conference attendee numbers in the 1980s. First, the conference started with a rapid increase in ticket sales leading to a peak, and then those numbers quickly dropped down. Currently, attendee numbers for conferences like NIPS reach or even exceed the peak of AAAI in the 1980s, and they are quickly gaining in size. Similar patterns for interest in the field can be observed in venture capital funding for AI startups, job openings, and earnings call mentions. Researchers of hype point out that the quantity of coverage is important, but that it has to be supported by qualitative sentiments. Sentiment analysis in media articles shows that AI-related articles became 1.5 times more positive from 2016 to 2018. Especially in the period from January 2016 to July 2016, the sentiment shifted. This improvement could be correlated with the public release of AlphaGo in January 2016, and its victory against world champion Lee Sedol in March.


Following the trend of the Hype Cycle, this could lead to another trough of disillusionment, with ticket sales, funding, and job openings quickly plummeting. However, AI is a very broad term describing many technologies. This further complicates the matter, as each technology under the umbrella term AI can have its own Hype Cycle, and the interactions of Hype Cycles both with each other and with AI in general remain unclear. Going further, a more in-depth look into these claims is made, evaluating whether the quick rise in AI interest is just the consequence of exaggerated promises or whether the claims stand on firm ground.

Comparison to Expert Opinion

Now, the statements and promises made by public figures are compared to a survey of leading AI researchers. In 2017, a survey of 352 machine learning researchers, who published at leading conferences, was conducted. This survey forecasts high-level machine intelligence to happen within 45 years at a 50% chance and at a 10% chance within the next nine years. However, full automation of labor was predicted much later, with a 50% probability for it to happen within the next 122 years. This study presents results far from the predictions of futurists like Kurzweil. Further, a meta-study on AI predictions has found some evidence that most predictions of high-level machine intelligence are around 20 years in the future no matter when the prediction is made. In essence, this points to the unreliability of future predictions of AI. Moreover, every prediction of high-level machine intelligence has to be viewed with a grain of salt. In summary, a Hype Cycle pattern is present


in the current AI landscape, leading to a potential decline in interest soon. Furthermore, optimistic predictions are made by public figures, but empirical evidence questions their validity. Nevertheless, statements like those from Ng, who views AI as the new electricity, refer more to the current state of the industry. Accordingly, industry and government funding is examined next.

Investment and Funding

Funding has always had a significant role in AI research. As Hendler points out, cuts in government funding are only felt years later, since existing research programs continue. Thus, time passes until the lack of new research programs becomes evident. This means that a reduction in funding would need to be in place presently to be perceived in the years to come. In April 2018, EU members agreed to cooperate on AI research. A communication on AI was issued that dedicated 1.7 billion dollars of funding for AI research between 2018 and 2020. Then, in June 2018, the European Commission proposed the creation of the Digital Europe funding program with a focus in five key areas and total funding of 9.2 billion euros, of which 2.5 billion is dedicated to AI research. In March 2018, the U.S. Administration stated the goal of ensuring that the United States “remains the global leader in AI.” Later, in September 2018, DARPA announced a two billion dollar campaign to fund the next wave of AI technologies. In direct opposition, China has declared the goal of leading the world in AI by 2030. Consequently, several Chinese AI initiatives have been launched. These conflicting statements have motivated many to


adopt the term "AI Race" to refer to the battle for leadership in the AI realm between the U.S. and China. It is similar to the Space Race of the 20th century between the U.S. and the Soviet Union, with the countries fighting for dominance in space travel. Back then, the race sparked much funding and research. Likewise, the "AI Race" mentality could make any reduction in funding unlikely in the coming years. This is a strong point against an upcoming AI winter, as previous winters were accompanied by a decline of government funding. Another key point is the growing AI industry. Past AI researchers have been very reliant on government funding but, according to McKinsey & Company, non-tech companies spent between $26 billion and $39 billion on AI in 2016, and tech companies spent between $20 billion and $30 billion on AI.

The market forecasts for 2025, on the other hand, have an enormous variance, ranging from $644 million to $126 billion. This disparity demonstrates the economic difference between an upcoming AI winter and another period of prosperity.

To summarize, government funding is very solid, and the "AI Race" mentality makes it likely that this situation will continue. Additionally, the industry is currently thriving. Market forecasts, however, diverge drastically.

To determine which forecasts are more convincing, the progress AI has made in the last years is related to the criticisms of current approaches.

Evaluating Progress

To view the criticisms of present AI techniques in the appropriate frame, the progress that has been made from 2012 until today (April 2019) is evaluated.

As we have seen before, AI and machine learning have risen in popularity across many measurements. A few key events stand out in the shaping of the landscape. In 2012, a convolutional neural network won the ImageNet competition by a wide margin. This, combined with progress in object detection, changed the field of computer vision completely from handcrafted feature-engineering to learned representations, thereby enabling autonomous cars to become viable in the foreseeable future. Similarly impressive results have been made in the natural-language understanding space. Deep learning has enabled all the popular voice assistants, from Alexa and Siri to Cortana.

Reinforcement learning with deep neural networks has had impressive results in game playing. In 2014, DeepMind used a deep Q-learner to solve 50 different Atari games, without changing the model's architecture or hyperparameters. This flexibility in tasks was unprecedented, which led to them being acquired by Google soon after and subsequently leading the space of reinforcement learning with achievements like AlphaGo and AlphaStar.

Finally, in the last few years, generative adversarial networks (GANs) have achieved impressive results in generating images of, e.g., human faces. In essence, deep learning has had groundbreaking results across many industries.


The Criticism of Deep Learning

In this chapter, criticisms of deep learning are discussed. As demonstrated, deep learning is at the forefront of progress in the field of AI, which is why a skeptical attitude toward the potential of deep learning is also a criticism of the prospects of AI in general. The situation resembles the 1980s, when expert systems dominated the field and their collapse led to a winter period. If deep learning methods face technological obstacles comparable to those of their historical counterpart, similar results can be expected. Most criticisms of deep learning fall into a few identifiable categories: limitations of deep learning, brittleness, and lack of unsupervised learning.

Limitations of Deep Learning

"Today more people are working on deep learning than ever before—around two orders of magnitude more than in 2014. And the rate of progress as I see it is the slowest in 5 years. Time for something new."
Francois Chollet, creator of Keras, on Twitter

As this quote is taken from Twitter, its validity is questionable, but it falls in line with similar arguments Chollet has made and captures the general feeling well. In his book "Deep Learning with Python," Chollet dedicates a chapter to the limitations of deep learning, in which he writes: "It [deep learning] will not solve the more fundamental problem that deep learning models are very limited in what they can represent, and that most of the programs that one may wish to learn cannot be expressed as a continuous geometric morphing of a data manifold." As a thought experiment, he proposes a huge data set containing source code labeled with descriptions of the programs. He argues that a deep learning system would never be able to learn to program in this way, even with unlimited data, because tasks like these require reasoning, and there is no learnable mapping from description to source code. He further elaborates that adding more layers and data makes it seem as though these limitations are vanishing, but only superficially.


He argues that practitioners can easily fall into the trap of believing that models understand the tasks they undertake. However, when the models are presented with data that differs from the training data, they can fail in unexpected ways. He argues that these models have no embodied experience of reality and so cannot make sense of their input. This is similar to arguments made in the 1980s by Dreyfus, who argued for the need for embodiment in AI. Unfortunately, a clear understanding of the role of embodiment in AI has not yet been achieved. In a similar manner, this points to fundamental problems not yet solved with deep learning approaches, namely reasoning and common sense. In short, Chollet warns deep learning practitioners against inflating the capabilities of deep learning, as fundamental problems remain.

Deep Learning Is Brittle

Deep learning models are commonly described as "brittle." There are several reasons such a description is accurate, including vulnerability to adversarial attacks, lack of ability to generalize, and lack of data. A detailed discussion of these flaws and eventual prevention mechanisms follows.

1. Adversarial attacks: It has been demonstrated that deep learning algorithms can be susceptible to attacks via adversarial examples. Adversarial examples use data modified in a way humans cannot recognize to drastically affect the behavior of deep learning models. There are multiple methods for creating adversarial examples. In one technique, noise is added to the image by another learning algorithm in order to affect the classification without being visible. With this technique, it is possible to change an image in such a way that a specified classification is achieved, even one very different from the original (such as "panda" and "gibbon," which humans can easily distinguish). When the method of adversarial attack is known, it can be possible to defend against it by augmenting the training set with adversarial examples. To clarify, defending against a specific adversarial attack can be possible, but protecting against adversarials in general is hard. Nonetheless, recently developed methods show promise on this issue. One formal way to defend against general adversarials is to bound the output space of the model; techniques like interval bound propagation achieve state-of-the-art accuracy on several popular image data sets. Alcorn et al. point out that extreme misclassification also happens when familiar objects appear in strange poses. Examples like these reveal that deep learning models' understanding of objects can be quite naive. Furthermore, adversarial attacks demonstrate a more profound underlying problem: the lack of explainability. Due to the black-box nature of deep learning models, predicting what the network is doing is hard. These adversarial attacks show that a model may have found an optimal way to classify objects in the training data yet still fail to capture the vastness of the real world. That said, there has been much work on improving the interpretability of models, mostly in the vision space, through methods like semantic dictionaries, saliency maps, and activation atlases. These works represent attempts to gain insight into the hidden layers of deep learning models.
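The noise-adding technique described above is, in spirit, the fast gradient sign method from the adversarial-examples literature. The sketch below applies it to a tiny logistic "model" whose weights and inputs are invented for illustration; real attacks target deep networks and images, but the mechanics are the same: nudge every input feature by a small amount in the direction that increases the model's loss.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sign(v):
    return (v > 0) - (v < 0)

def fgsm_perturb(x, w, b, y_true, eps):
    # For a logistic model, the gradient of the cross-entropy loss with
    # respect to the input is (p - y) * w; stepping eps along its sign
    # raises the loss while changing each feature by at most eps.
    p = sigmoid(dot(x, w) + b)
    return [xi + eps * sign((p - y_true) * wi) for xi, wi in zip(x, w)]

w = [3.0, -3.0, 2.0, -2.0]   # made-up classifier weights
b = 0.0
x = [0.5, 0.4, 0.5, 0.4]     # made-up "image", classified as class 1
x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.2)

print(sigmoid(dot(x, w) + b))      # above 0.5: original says class 1
print(sigmoid(dot(x_adv, w) + b))  # below 0.5: the small perturbation flips it
```

Note that no feature moves by more than 0.2, which is why such perturbations can be imperceptible in a high-dimensional image while still flipping the label.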


2. Lack of ability to generalize: Deep learning models also have problems generalizing beyond the training data provided. Kansky et al. demonstrated that a model trained on the Atari game Breakout failed when small changes were made to the environment; for example, changing the height of the paddle slightly resulted in very poor performance of the agent. Similar criticism can be applied to any reinforcement learning system.
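This failure mode can be illustrated with a toy stand-in (entirely invented here, not Kansky et al.'s actual Breakout setup): a model that simply memorizes its training experience scores perfectly when evaluated on the data it trained on, yet degrades when the inputs shift slightly.

```python
def nn_predict(train, x):
    """A 'memorizing' model: predict the label of the nearest training
    input. It stands in for a policy overfit to one exact environment."""
    return min(train, key=lambda pt: abs(pt[0] - x))[1]

# The 'environment' rule: inputs above 5.0 get label 1, others label 0.
train = [(float(x), int(x > 5)) for x in range(11)]

def accuracy(points):
    return sum(nn_predict(train, x) == y for x, y in points) / len(points)

# Evaluated on its own training environment, the model looks perfect.
train_acc = accuracy(train)

# A small change to the environment: shift every input by 0.4.
shifted = [(x + 0.4, int(x + 0.4 > 5)) for x, _ in train]
shifted_acc = accuracy(shifted)

print(train_acc, shifted_acc)  # accuracy drops under the shift
```

The drop is modest in this toy, but the point matches the text: evaluation on the training environment hides brittleness that a perturbed environment reveals.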

Cobbe compares the evaluation of reinforcement learning agents with supervised learning and concludes that evaluating an agent in the environment it trained in is like evaluating the performance of a supervised learner on its training set. The difference is that the first practice is well accepted, while the second would not be tolerated in any sense. To address this problem, Cobbe, as part of OpenAI, devised a benchmark for generalization to promote work in this area. Additionally, transfer learning in the domain of reinforcement learning has recently seen impressive results with OpenAI's Dota agent: the team announced that they were able to continue training the agent despite substantial changes in rules and model size by using a transfer learning technique. Using similar methods, the lack of generalization in agents could be improved.

3. Lack of data: As "The Unreasonable Effectiveness of Data" demonstrates, data is essential in deep learning. Moreover, the rise in available data was one of the main contributors to the deep learning revolution. At the same time, not every field has access to vast amounts of data. There are two ways to tackle this problem: creating more data or creating algorithms that require less data. Lake et al. show that humans are able to learn visual concepts from just a few examples. Recent approaches in one-shot or few-shot learning, where an algorithm is presented with only one or a few data points (e.g., one image of a given category), have made substantial improvements. At the same time, transfer learning approaches have improved immensely. By using a model pre-trained on a large data set as a basis, it is possible to significantly reduce training time on new data sets. To summarize, deep learning models are fairly described as brittle. That said, researchers are working on promising solutions to these problems.

The Dominance of Supervised Learning

Most achievements realized by deep learning have come through supervised or reinforcement learning. However, as LeCun points out, humans mostly learn in an unsupervised manner, by observing the environment. Additionally, rough estimates suggest that around 95 percent of data is unstructured. Moreover, labeling is a time-consuming and expensive process, yet labels contain only very little information about each data point. This is why LeCun believes the field has to shift more toward unsupervised learning. A particular type of unsupervised learning, sometimes called self-supervised learning, has gained traction in the last couple of years. Self-supervised learning procedures exploit some property of the training data to create a supervision signal. In a video clip, for example, all frames are sequential, and researchers exploit this property by letting the model predict the next frame of the clip, which can easily be evaluated because the truth is inherent in the data. Similar methods can be used for text or audio signals. Additionally, other characteristics of the data can be exploited, such as rotating an image and predicting the correct angle. The intuition is that in order to turn a rotated image back to its original form, a model needs to learn properties about the world that would also be useful in different tasks like object recognition. This proves to be correct, as such a model can achieve great results in classification tasks via transfer learning; the filters in the first layer of the network are very similar to those of supervised models, and even more varied. This criticism could be detrimental to deep learning and AI in general if researchers dismissed it, but that seems not to be the case. OpenAI presented some promising results achieved by using unsupervised learning earlier this year with the GPT-2 transformer language model. This model can generate very human-like text by using a very large model and vast amounts of data from Reddit. The researchers used a type of self-supervised learning, exploiting the sequentiality of text and letting the model predict the next word. Using the same architecture, MuseNet, a model that composes music, was recently created. Unsupervised learning has the potential to solve significant obstacles in deep learning, and current research suggests optimism regarding its progress.

Conclusion

A complex interplay exists among AI researchers, companies, the technology itself, and the public perception of AI, on many different levels. Any prediction is therefore difficult. However, there are a few key things we can observe about the field that differ from historical pre-winter times.


In the past, reliance on government funding was very strong and the industry weak. That is far from the case today; many large companies like Google, Facebook, and Alibaba are investing more in AI technologies alone than the AI industry was worth during its boom times in the 1980s. Even more importantly, those companies have not only invested heavily in AI but have also incorporated it heavily into their products. This gives the field solid footing, even if public sentiment starts to shift. Similarly, stability is provided by the "AI Race" mentality, which reduces the risk of a decline in government funding. Equally important are the criticisms regarding deep learning and its limitations. Though most of this criticism is valid, the evidence suggests that researchers are already working on solutions or are aware of the innate limitations of the technique. Furthermore, unsupervised learning, especially self-supervised learning, presents promising opportunities by enabling the use of vast amounts of unlabeled data and by saving immense amounts of tedious labor.

That said, expectations for the field are too high. Predictions about machines reaching human intelligence are unreliable. Furthermore, a Hype Cycle pattern can be seen in current conference attendance numbers, with the field growing quickly on many scales. As Hype Cycle patterns vary, no certain statements or predictions can be made. Finally, the historical perspective demonstrates the wave-like nature of the field. New technologies are being created every day; a vast number of them die out; and some are revived. In this light, it seems sensible to be prepared for current methods to die out, as well as to be on the lookout for some forgotten technology worth reviving. To summarize: the funding for further AI research appears stable at the moment. However, there are some technological limitations which, coupled with very high expectations, may lead to another AI winter.

"People worry that computers will get too smart and take over the world, but the real problem is that they're too stupid and they've already taken over the world."7
Pedro Domingos

What Schuchmann did not know when writing his August 2019 article was that the Covid-19 pandemic would dramatically disrupt the global economy. The degree to which this will ultimately shift spending priorities remains unclear, but it seems likely that at least some research programs lacking direct and immediate profitability or viability will be abandoned. The full consequences of the coronavirus will not be understood for some time, and many large-scale social changes will likely affect progress in many fields.


CONCLUSION

The appearance of AI winters and the AI bubbles, booms, or summers that follow them reflects differing levels of public interest in artificial intelligence and its potential to change human life. In general, Americans, like citizens around the world, tend to be more skeptical than supportive of AI. New discoveries can spark public interest in AI, which can lead to higher levels of corporate and governmental investment, thus ending an AI winter or leading to another AI boom. Another major factor that determines whether the United States will enter an AI winter is military funding. Military investment has driven many of the major discoveries in robotics and artificial intelligence. Discoveries that suggest future military uses therefore stimulate the growth of the AI field by encouraging more intensive military investment in AI research programs.

DISCUSSION QUESTIONS

• Will AI benefit or harm humanity? Explain your answer.
• How is military funding related to the perception of AI winters? Use examples from the text.
• Why hasn't AI technology reached levels envisioned by key figures in the field? Use examples from the text.
• Given the other issues facing American citizens, should investment in AI be increased, decreased, or should it remain the same? Explain your answer.

Works Used

Anyoha, Rockwell. "The History of Artificial Intelligence." SITN. Science in the News. 28 Aug. 2017, sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/.

Hutchins, John W. Early Years in Machine Translation: Memoirs and Biographies of Pioneers. John Benjamins Publishing Company, 2000.

"Lighthill Report: Artificial Intelligence: A Paper Symposium." Chilton Computing. 1973, pdfs.semanticscholar.org/b586/d050caa00a827fd2b318742dc80a304a3675.pdf.


Loiseau, Jean-Christophe. "Rosenblatt's Perceptron, the First Modern Neural Network." Medium. 11 Mar. 2019, towardsdatascience.com/rosenblatts-perceptron-the-very-first-neural-network37a3ec09038a.

"New Navy Device Learns by Doing: Psychologist Shows Embryo of Computer Designed to Read and Grow Wiser." New York Times. 8 July 1958, www.nytimes.com/1958/07/08/archives/newnavy-device-learns-by-doing-psychologist-shows-embryo-of.html.

Schuchmann, Sebastian. "History of the Second AI Winter." Medium. 12 May 2019, towardsdatascience.com/history-of-the-second-ai-winter-406f18789d45.

Schuchmann, Sebastian. "Probability of an Approaching AI Winter." Medium. 17 Aug. 2019, towardsdatascience.com/probability-of-an-approaching-ai-winter-c2d818fb338a.

"Survey X: Artificial Intelligence and the Future of Humans." Elon University. 2020, www.elon.edu/e-web/imagining/surveys/2018_survey/AI_and_the_Future_of_Humans.xhtml.


11. HANDS FREE: The Search for Autonomous Transportation (1930s–Present)

Introduction

Driving is a dangerous business, causing more than a million deaths around the world each year. The cities and towns of the world are carved into networks of roadways to accommodate the automobile, which forms part of the lifeblood of human culture. Automobiles carry resources and supplies through the networks of a nation's supply chains and carry millions of workers to and from work. Ever since the car was invented, engineers have been looking for ways to make automobile travel safer and more efficient. One idea for achieving this is to take driving out of the hands of people. Human error and impatience are the primary drivers of both vehicular death and inefficiency: distracted drivers pose a threat to pedestrians and other motorists, while aggressive and otherwise bad drivers are the main cause of traffic jams and other inconveniences. It has long been recognized that an automated system could handle automobile transportation better than the human mind and body, so inventors and engineers have been trying to automate the process. In the 2020s, automated automobiles are closer than ever to being a reality, yet there is a surprising level of resistance from the population, some of it motivated by fear of machines and automation and some a reaction to the loss of control and what could be the end of the automobile as a hobby and lifestyle.

Topics Covered in this Chapter Include:
• Automation
• Vehicular traffic
• Automobile industry
• Driverless cars
• Public transportation

This Chapter Discusses the Following Source Document:
Stenquist, Paul, "In Self-Driving Cars, a Potential Lifeline for the Disabled," The New York Times, November 7, 2014


In many countries around the world, cars are a way of life. This is true not only of those who directly rely on motor vehicles for their livelihoods, such as race car drivers, test drivers, and long-haul truckers, but also of a vast number of Americans and others. For many, cars are a passion and a prized possession. The amount of money invested in cars each year outweighs spending on many other facets of life that some may consider more essential. For instance, revenues from a single company, Volkswagen ($270 billion in 2017), are more than the entire amount spent on education across the United States in a single year ($694 million).1 The amount of money spent on cars, especially in America, reflects the centrality of automobiles in American life. Each day, millions of Americans depend on their automobiles to get to work, school, and other activities. Over the years, cars have evolved from a technological tool into a hobby; refurbishing and modifying vehicles is a pastime enjoyed by millions around the world. For many Americans, cars represent freedom, the freedom to "hit the open road," a modern industrialized version of the same innate human fascination with travel and exploration that drove the first Europeans to what was, to them, a new world. Over the century-plus since cars were invented and became an important part of American culture, it has also become clear that cars can be bad for people. Each year, 1.35 million people die from road accidents. In many countries, the cost of vehicular accidents and deaths accounts for more than three percent of the gross domestic product. Around the world, road traffic injuries are the leading cause of death for children and young adults ages 5 to 29, and the majority of those killed are cyclists and pedestrians, demonstrating that vehicular traffic is a hazard not just to drivers but to the general public.2 The leading reason that motor vehicles are dangerous is human error.
Humans are highly distractible and have comparatively short attention spans, a problem that has only grown as individuals have integrated mobile technology into their lives. The United States Department of Transportation (DoT) explains that taking one's eyes from the road to read or send a text message may distract a driver for only five seconds, but at 55 miles per hour this is the equivalent of driving the length of a football field with one's eyes closed.3 Texting, talking on a cell phone, browsing the Internet: these are all ways in which modern technology has increased the potential for motorists to operate vehicles without their full attention. However, distracted driving has been a problem for the entire history of the automobile. Whether fiddling with windows or other devices, attending to a conversation with another passenger, or looking at the sights outside, distraction happens to everyone, and when motor vehicles are involved even short periods of inattention can be deadly.
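The DoT comparison is easy to verify with unit arithmetic; the snippet below is just that conversion, not DoT data:

```python
# Five seconds of looking away from the road at 55 mph.
MPH_TO_FEET_PER_SECOND = 5280 / 3600   # 5,280 feet per mile, 3,600 seconds per hour

speed_fps = 55 * MPH_TO_FEET_PER_SECOND   # about 80.7 feet per second
distance = speed_fps * 5                  # feet covered while distracted

print(round(distance))  # about 403 feet
# An American football field is 360 feet long including both end zones,
# so five distracted seconds cover more than its full length.
```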


Vehicular injury and death are not the only ways motor vehicles prove detrimental to human cultures. In areas with large populations, vehicles fill the space and lead to traffic jams and other congestion. Consider that a vehicle is many times larger than a person: ten people, even standing at a responsible social distance from one another, take up only a small amount of space, while ten individuals, each inside a vehicle, take up much more. Storing vehicles also requires a massive amount of space, from street parking to parking lots and multi-level parking structures. The amount of space dedicated to vehicles is many times that available for people or that can be preserved as green spaces for human enjoyment and use. Further, the proliferation of motor vehicles in American and world culture is directly linked to the growth of the fossil fuel industry, the most destructive industry in global history. Waste from the operation of motor vehicles and from the harvest and distribution of petroleum products is a primary driver of environmental devastation around the world, and the problem grows worse with each passing year. Environmental scientists believe that, unless human culture makes massive changes, including dramatically reducing the use of petroleum products and vehicular traffic, the entire world may face an existential crisis. From the beginning, engineers and futurists who recognized the negatives of automobiles have been searching for ways to ameliorate these problems. One method suggested, long before cars were as popular and common as they have since become, was to remove drivers from the car equation. Without human error and misuse, cars could safely and efficiently navigate the roads without accidents, fatalities, and, likely, without traffic jams as well.
The automated car has been one of the key imagined inventions of a better human society since the dawn of the last century, but, until the 2010s, the technology to make driverless cars a reality proved elusive. Even as the concept has come closer to reality, the idea of the driverless car faces resistance. For some, turning control of cars over to automated systems is frightening because of the perceived fallibility of machines; this perception is, of course, largely an illusion. For others, the idea is objectionable because it removes the pleasure of driving and eliminates some degree of control. Developers working on driverless systems therefore face not only technological skepticism but must also convince a population to abandon their personal investment in cars and driving.

The World of the Futurama

The idea of driverless automobiles predates the invention of the term "automobile" and the invention of the internal combustion engine, the key piece of technology in gasoline-powered machines. Before the gas-powered car came into being, engineers were working on electrical vehicular systems, and, like the imagined electrical grids carrying power from building to building along municipal systems, some of these early engineers and visionaries reasoned that it would be possible to build a system that automatically carried electrically powered vehicles from one place to another. This idea resurged again and again after the introduction of the gas-powered car eclipsed the early electric car industry. The primary representation of this imagined future was displayed at the 1939 World's Fair in New York.

American inventor Norman Melancton Geddes was born in Adrian, Michigan, on April 27, 1893. Geddes studied at the Cleveland Institute of Art and the Art Institute of Chicago. He trained in drawing and painting and started his career in advertising in Chicago in 1913. When he married writer Helen Belle Sneider in 1916, the couple combined their surnames to create "Bel-Geddes." In the 1920s, Bel-Geddes found success as a set designer in Hollywood and New York, becoming one of the best-known behind-the-scenes art designers in the business. It was from this mix of art, design, and theater that Bel-Geddes came to have a lasting impact on the American automotive industry. In 1927, Bel-Geddes was hired as an industrial designer for the Graham-Paige Motors Company. The late 1920s was a time of great hope and optimism for the future, driven by the proliferation of technological innovations. Bel-Geddes fashioned five concept models for Graham-Paige representing the "future" of car design, and this attracted the attention of the Simmons Company, which hired him in 1928 to design a series of metal bedroom furniture. Bel-Geddes's designs first came to national prominence with a 1931 model he made to represent the "House of Tomorrow." The home, which featured a streamlined, industrial-looking design, ticked all the right futurist boxes, and some argue it helped create a new architectural trend.
In the 1930s, Bel-Geddes was one of the most influential design futurists and was responsible for creating a wide variety of household designs that became remarkably popular in the decade. The pinnacle of his influence came with the 1939 World's Fair in New York, an exposition of industrial technology. The 1939 World's Fair was dominated by the futurism that had become influential in design and was subtitled the "World of Tomorrow." Exhibitions showcased a number of cutting-edge technologies that went on to become important industrial fields.4 Millions of visitors attended the fair over two seasons, where they witnessed all manner of technological demonstrations. The 1939 World's Fair was an important moment in the popular concept of robotics as well, largely thanks to the display of a 7-foot-tall mechanical "robot" called "Elektro the Moto-man," who influenced generations of engineers dreaming of creating artificial beings.

1937 Shell Oil Company "City of Tomorrow" advertisement, via Wikimedia.

"City of the Future" Futurama exhibit at the 1939 New York World's Fair. By Norman Bel Geddes, via Wikimedia.

One of the most popular exhibits was designed by Bel-Geddes, an expansion of a 1937 "City of Tomorrow" model he had designed for a Shell Oil Company advertisement. The success of the ad brought funding for a full-sized version for General Motors, the "Futurama" exhibit, which featured panoramic views of superhighways and roadways filled with teardrop-shaped cars, Bel-Geddes's imagined vision of what the world might be like in the far-away future of 1960.5 With his Futurama exhibition, Bel-Geddes had produced the first plan for the driverless car. In his imagined future, the roadways could be both powered and navigated according to signals passed through the road to the vehicles. Although Bel-Geddes popularized the idea of electrified roads, other designers proposed similar systems, some involving roads fitted with electrified tracks, some utilizing electrified systems suspended above the streets, and some proposing the use of magnetism, with cars shifted across a grid on magnetized chassis. What might have happened if the world had pursued streets that drove cars, rather than cars that drove on streets? What if the world had embraced the electric or magnetic vehicle instead of delving even deeper into the fossil fuel conundrum? The answers remain unknown, but it is likely that the world would now be a very different place. A world in which electric cars derived their power from the roads would certainly be far less polluted, as the combustion engines that drive motor vehicles are the principal cause of air pollution across most of the globe. Unfortunately, the investment that would have been needed to create automated electrical or magnetic roadways would have been enormously expensive.
In the end, it was cheaper and easier for motor vehicle companies to invest in fossil fuels, relegating the smart-grid designs to relics of an unusual time in American and world history, when designers and artists dreamed of retro-futuristic solutions to the emerging industrial problems of their age.6

The Space Cart

In 1961, at the height of the "Space Race," the technological contest between the United States and the Soviet Union to develop space exploration technology, researchers again turned their minds to the problem of driverless vehicles in an effort to design systems that could be used by astronauts exploring the moon. The winning design came from Stanford University engineering student James Adams, who designed the "lunar rover" that became the world's first truly self-driving vehicle. The "Cart," as it was nicknamed, was outfitted with cameras and programmed to detect and follow a white line painted on the ground. This was the first time that "computer vision," through a mounted camera, was used for machine navigation, and it is an innovation that carries through to modern designs for autonomous vehicles. Modern motor vehicles use cameras linked to an internal computer system for integrated features such as automatic parallel parking and other driver-assistance programs, and many of these systems work similarly to Adams's driverless NASA cart. Visual cues in a roadway (other vehicles, street curbs, or painted lines indicating lanes or parking spaces) can be used by visual computer systems to automate the navigation process. The field of automated vehicle technology can thus be traced back to the Space Race and a politically tumultuous era that created a new wave of technological advancement.7 Given the vast resources and early technological advancement of American engineering and mechanics, there is little doubt that America could have forged ahead with driverless vehicle technology. But the petroleum industry launched a campaign to maintain the American cultural obsession with the automobile, as it benefitted their corporations. Part of this was the idea that cars were not simply utilitarian tools but prized possessions that represented freedom and the ability to travel. The concept of self-driving cars chafes against this perception, and the pursuit of driverless vehicle technology was largely absent from the United States for decades. The biggest leap forward after the Stanford Cart came from Japan, where increasing urban congestion motivated the Japanese company Tsukuba Mechanical to conduct a series of ground-breaking experiments in autonomous vehicles. In 1977, Tsukuba created an automated car that could travel as fast as 20 miles per hour without human intervention. The vehicle utilized front- and rear-mounted cameras to read and interpret markings in the road.
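The line-following idea behind the Cart and the Tsukuba vehicle can be sketched in a few lines of code. The pixel values, threshold, and steering gain below are invented for illustration, not taken from either historical system: threshold one row of camera pixels to find the bright painted line, then steer in proportion to its offset from the image center.

```python
def steering_command(scanline, threshold=200, gain=0.02):
    """Find the bright painted line in one row of camera pixels and
    return a steering value proportional to its offset from center."""
    line_pixels = [i for i, v in enumerate(scanline) if v >= threshold]
    if not line_pixels:
        return 0.0                          # no line found: hold course
    center_of_line = sum(line_pixels) / len(line_pixels)
    image_center = (len(scanline) - 1) / 2
    return gain * (center_of_line - image_center)  # positive = steer right

# A 21-pixel scanline: dark road (value 50) with a bright painted
# line (value 255) shifted toward the right side of the image.
row = [50] * 21
row[14] = row[15] = row[16] = 255

print(steering_command(row))  # positive: steer right, toward the line
```

Real systems add filtering, multiple scanlines, and speed-dependent gains, but this proportional "see line, steer toward line" loop is the core of the computer-vision navigation described above.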
Tsukuba demonstrated the consumer potential of the ideas first put into place more than a decade and a half earlier at Stanford University. The design, though basic and impractical by modern standards, was proof of the concept of computer-vision-controlled automation and so provided further justification to pursue this path. German engineer Ernst Dickmanns took this to the next level ten years later by introducing the revolutionary VaMoRs to the German roadways. Dickmanns was working with Bundeswehr University in Munich when he and his team developed the world’s next major step in autonomous driving, which depended in part on advances in computational size and power over the previous decade. The breakthrough that enabled Dickmanns to develop his VaMoRs vehicle was the BVV2 pre-processor, a new system for image processing that provided the power to analyze abstract features relevant to motion control. The system split this technological process (which can now be handled by


the technology available on an average smartphone) between several different processors, which was necessary at the time to achieve acceptable reaction times for the computer. Ultimately, the team was able to create a motion-control system able to process navigational information within 20 milliseconds.8 In place of the simple single-camera systems used in the Tsukuba designs, the VaMoRs system employed an entire bank of cameras at the front and back of the vehicle. A key development was a “dynamic” detection system that enabled the computer-vision system to filter out data that was unlikely to be relevant and to focus only on key objects and signals. A series of road tests proved that the vehicle could navigate basic driving conditions as well as or better than a human driver, but the most famous signifier of the design’s potential came when Dickmanns and his team proved that their automated design was capable of operating at high speed, with the VaMoRs system successfully navigating the famous German Autobahn highway and reaching speeds of up to 60 MPH without sacrificing navigation accuracy.9

From Welfare to Warfare and Back Again

Early and modern autonomous vehicle designers often imagined a world without traffic jams or accidents in which computerized systems eliminate human judgment to create optimized systems that increase efficiency and reduce risk, but this is not the only direction that autonomous vehicle designers have taken. The first automated combat and surveillance drone was introduced to the world in 1995. The MQ-1 Predator, designed by General Atomics, was the forerunner of a new age in warfare. Since the 1990s the United States has increasingly relied on automated weaponry, to the point that the ongoing “War on Terror” that began in 2001 has been fought, almost exclusively, utilizing drones and automated weapons. The use of drones has become one of the major issues in the ethics of modern warfare and in the emerging debate on the ethics of artificial intelligence, automation, and robotics. The Bureau of Investigative Journalism has identified at least 14,000 drone strikes launched by the United States between 2004 and 2020 and determined that at least 8,858 people have been killed in those strikes, with at least 900 to 2,200 civilians and 283 to 454 children among the casualties.10 The War on Terror created vast amounts of funding for new military technology, and this caused a surge in military automation and robotics. In a familiar pattern, the funding made available for military projects eventually filtered into the public realm as well. Beginning in 2004, the Defense Advanced Research Projects Agency (DARPA), the most important research and development department within the US military, conducted a series of contests


for designers and companies developing driverless and automated vehicle technology. The contests, held from 2004 to 2013, provided a way for the military to see and potentially co-opt some of the best emerging technology from American universities and corporations while also providing a rich community for the sharing of technology. By the mid-2010s, automated driving technology was no longer a futuristic fantasy. Companies like Tesla introduced the idea of partially autonomous car technology in 2015 with a program that enabled vehicles to self-navigate in certain situations and could be delivered as a software update to previously sold Tesla vehicles. This technology was essentially an advanced form of cruise control and was an example of the commercial bridge between the idea of autonomous vehicles and the modern vehicle market. Other technology companies, like Google, have worked on automated driving systems of their own, and experiments demonstrate a high level of safety and proficiency. If the technology has reached this stage, why aren’t driverless cars widespread in the consumer market? The primary factor is consumer preference.

Google has experimented with autonomous vehicles. Above, Google’s in-house automated car. By Grendelkhan, via Wikimedia.


In general, public opinion polls have shown that Americans are more worried than excited about driverless vehicles. A Pew Research study from 2017 found that 14 percent of Americans said they were “very worried” about the technology, as compared to 11 percent who reported being “very excited.” The proportion of Americans who were “somewhat worried,” at 39 percent, was also higher than the proportion who were “somewhat excited,” 28 percent. Altogether, only 44 percent of Americans were interested in riding in a driverless car, though the proportion of Americans interested in the technology was far higher (52 percent) among urban residents more familiar with the logistical problems, dangers, and delays caused by cars in their communities. In rural areas, interest in driverless cars sat at only 36 percent, which is lower than would be necessary to convince motor vehicle corporations to begin taking the next steps toward making driverless vehicles a consumer reality.11 Among those Americans concerned about driverless vehicle technology, many mistakenly worry that the technology might be dangerous. Although it is possible that driverless vehicles may introduce new dangers to the roadways, it is extremely unlikely that driverless systems will be as dangerous as a human operating a vehicle. One of the major reasons that Americans and other world citizens are resistant to the idea of driverless cars is that the introduction of this new technology may change what cars mean and how they are used in the world. Driverless cars supplant human control and volition. Embracing the technology, in many ways, encourages a difficult psychological surrendering of control, which some may see as admitting that they, as biological creatures, are inferior to machines. Recognizing that machines, especially purpose-built machines, are superior to humans in many regards need not cause psychic dissonance, but many react to this realization with fear and insecurity.
There are many Americans who wish to maintain the illusion of superiority to machines, or at least to maintain the perception of control in as many aspects of their lives as possible. For them, even the mundane process of driving is preferable to turning over control to a computerized system that could likely shorten the duration of journeys and provide superior safety. Even though American consumers are still conflicted about the idea of driverless cars, the technology is being developed for future implementation. There are many reasons for this, including the possibility of adding partially autonomous systems as an option for drivers in the near future, but there are also applications for driverless vehicles that could provide distinct benefits. For instance, some developers argue that the technology behind driverless vehicles could provide serious benefits to the proportion of Americans and other global citizens with disabilities or other conditions that prevent them from safely operating a vehicle.


In the following 2014 article from the New York Times, journalist Paul Stenquist discusses the history of the driverless car and how the technology might provide a lifeline for those who are unable to take advantage of motor vehicles as they currently exist.

“IN SELF-DRIVING CARS, A POTENTIAL LIFELINE FOR THE DISABLED”
by Paul Stenquist
The New York Times, November 7, 2014
Source Document

The self-driving car, embraced as a stress-reducing convenience for harried drivers and a potential advance in road safety, could also prove to be a life-changing breakthrough for many people with disabilities, granting them a new measure of independence. While much of the necessary technology is well along in development, those awaiting vehicles that can provide unassisted transportation will have to be patient. Self-driving cars have been the stuff of science fiction and experimentation since the early days of the automobile. In 1925, Time magazine carried an article about a car that cruised New York City streets without a driver, guided by radio control. The General Motors Futurama exhibition at the 1939 New York World’s Fair depicted a future of self-driving cars by the industrial designer Norman Bel Geddes. The Bel Geddes vision, with its implications for the disabled, may be getting closer to reality. Automakers have demonstrated cars capable of self-driving operation, and in August the chief executive of Nissan, Carlos Ghosn, said the automaker would offer a car with “autonomous drive technology” by 2020. Mr. Ghosn did not promise a vehicle that could be operated without a driver at the wheel, and a Nissan spokeswoman, Wendy Payne, said the


company had not studied the disability issue. Confirming that Nissan’s first self-driving car would require an able driver, she said that all automakers were taking that approach: “At this point, the driver has to be able to operate the vehicle.” G.M. is among the makers demonstrating self-driving prototypes, and a driver-assistance technology that it calls Super Cruise, to be introduced in 2017 model Cadillacs, makes partly autonomous operation possible on the highway. Still, the company is reserved in its optimism. “We believe that one day there will be fully automated cars that drive themselves under all circumstances,” a G.M. spokesman, Dan Flores, said about the potential of driverless vehicles for the handicapped. “A lot of societal benefits are possible, but we’re years away from achieving those benefits.” Audi recently obtained a permit to test self-driving cars on California roads. But the cars are equipped with manual controls so a driver can take over if necessary. “Present-day tech developed by every automaker and accepted by state laws requires human ability to take over,” a company spokesman, Brad Stertz, wrote in an email. “Fully autonomous driving is mostly a human generation away, no matter who is




making promises.” Google is making promises, or at least offering suggestions. The company, which declined to provide an interview for this article, has developed two prototypes. The first was a standard vehicle fitted with sensors, hardware and computers that enabled self-driving. Equipped with steering wheel and brakes, it could be operated by a backup driver in an emergency. The second-generation Google car is entirely driverless and has no steering wheel or brake pedal. Driver intervention is impossible, even in an emergency, so its design would be appropriate for people physically unable to operate a vehicle. In a blog post last April, Chris Urmson, director of Google’s self-driving car project, said the company was growing more optimistic about reaching an achievable goal—“a vehicle that operates fully without human intervention.” In a statement provided to The Times, Google said that the potential of a self-driver to help those with disabilities could be realized only if the human operator were taken out of the equation. The company maintains that denying the driver an active role in vehicle control will also eliminate human error and improve driving safety. Steve Mahan of Morgan Hill, Calif., executive director of the Santa Clara Valley Blind Center, has ridden in both of Google’s prototypes. Mr. Mahan, 61, is legally blind. Google contacted him about three years ago to discuss how self-driving vehicles might benefit the disabled. He was offered a chance to experience Google’s retrofitted Toyota Prius and provide feedback. He has since ridden in the new driverless prototype. “My read on Google’s approach is, how do you create vehicles that are smart enough to drive on existing roadways,” Mr. Mahan said in a phone


interview. “They’re looking at a paradigm shift in transportation, where the vehicles assume all of the driving tasks for efficiency and safety.” He added: “I’ve been in the Google cars quite a bit in urban traffic and highway. The technology is incredibly capable. They drive like good drivers. I spoke to a programmer who helps define how the car behaves. I told him, ‘So you’re the person who made these cars drive like my wife.’ ” Google’s driverless vehicle is in testing. Because California law requires that self-driving vehicles be fitted with backup manual systems, the company has installed a temporary steering wheel and controls. On a website devoted to the car, Google said, “We’ll remove these manual controls after the prototypes have finished being tested and permitted, because our vehicles are ultimately designed to operate without a human driver.” The marketplace—together with regulators—could have the final say as to what type of autonomous car is ultimately offered for sale and whether the vehicle could accommodate those incapable of controlling the car manually. Research doesn’t suggest that a clear majority of drivers are eager to turn the wheel over to a robotic driver. A survey by the Pew Research Center and Smithsonian magazine, for example, found that 48 percent of Americans would be interested in a self-driving car; 50 percent would not. The survey did not attempt to determine whether those who were accepting of self-driving technology would be willing to relinquish all control to a vehicle without a steering wheel or brake pedal. The National Highway Traffic Safety Administration has not issued regulations for self-driving vehicles, and it would not comment for the record in regard to potential rules. But it has issued guidelines for on-road testing and advised the states in regard to on-road operation.



In a statement, the safety agency said that fully automated vehicles required further testing and should be equipped with backup steering and brake controls during the testing period. That means that a driver must be capable of taking control of the vehicle—an action that may not be feasible for people with severe disabilities. Not everyone who lacks the mobility needed to drive a conventional car wants a driverless car. Bruce Chargo, a 55-year-old financial planner from Clio, Mich., who is paralyzed below his chest and has no control of his hands, feet or back muscles, drives specially equipped vehicles with his upper arms and head.

“A self-driving car isn’t for me,” he said in a telephone interview. “There are very few things I can do independently, but I can drive. If self-driving cars reach dealer showrooms, funding for vehicles like mine might not be available. That concerns me.” But Mr. Mahan, the director of the center for blind people, wants to be mobile as well, and alternative-control systems can’t enable a vision-impaired driver. “I miss driving,” he said. “My experience with Google has been terrific, and I want it to happen. Everyone in the blind community wants it to happen.”12

As much as Americans and many others cherish cars and car culture, cars have proven highly problematic. From the more than one million people killed and tens of millions injured on the world’s roads each year to the tons of smog spewed into the air by automobile engines, the automobile is one of the most destructive technologies ever created. On the flip side, cars provide opportunities for connection, travel, and exploration and are one of the key technologies that have broadened American lives. As the technology of driverless cars advances, engineers and corporations must balance the potential for solving the problems created by cars against consumers’ emotional attachment to them. There is little doubt that driverless cars will become a consumer technology in the future, but achieving this may take some time and may occur only as attitudes change in America and around the world.



CONCLUSION

In the 1960s, some Americans imagined that they would be using self-driving, or perhaps even “flying,” cars within a couple of decades, but this did not come to pass. Advancements in self-driving car technology stalled for a number of reasons, including cost and waning interest. However, in the 2010s, realistic automated driving technology became more prominent in the United States and in other countries. Though it has proven possible to create self-driving automobiles, the advance of the technology is still limited because of consumer preference and how deeply integrated into our culture cars have become.

DISCUSSION QUESTIONS
• Will self-driving cars become common in America? Why or why not?
• Would you consider utilizing a self-driving car? Why or why not?
• How did the advancement of robotic vision advance research into autonomous vehicle technology?
• How was the self-driving car market related to the US Space Race? Use examples from the text.

Works Used
“Drone Warfare.” The Bureau of Investigative Journalism. 2020, www.thebureauinvestigates.com/projects/drone-war.
Earnest, Les. “Stanford Cart.” Stanford University. Dec. 2012, web.stanford.edu/~learnest/sail/oldcart.html.
Kalan, Elliott. “The Original Futurama: The Legacy of the 1939 World’s Fair.” Popular Mechanics. Hearst Magazine Media. 11 Mar. 2010, www.popularmechanics.com/technology/design/a5322/4345790/.
Nguyen, Tuan C. “History of Self Driving Cars.” Thought Co. Dot Dash. 30 June 2019, www.thoughtco.com/history-of-self-driving-cars-4117191.
“Road Traffic Injuries.” WHO. World Health Organization. 7 Feb. 2020, www.who.int/news-room/fact-sheets/detail/road-traffic-injuries.
Schaub, Alexander. Robust Perception from Optical Sensors for Reactive Behaviors in Autonomous Robotic Vehicles. Springer Vieweg, 2017.
Smith, Aaron, and Monica Anderson. “Americans’ Attitudes Towards Driverless Vehicles.” Pew Research Center. 4 Oct. 2017, www.pewresearch.org/internet/2017/10/04/americans-attitudes-toward-driverless-vehicles/.
Stenquist, Paul. “In Self-Driving Cars, a Potential Lifeline for the Disabled.” New York Times, 7 Nov. 2014, www.nytimes.com/2014/11/09/automobiles/in-self-driving-cars-a-potential-lifeline-for-the-disabled.html.
Taylor, Alan. “The 1939 New York World’s Fair.” The Atlantic. 1 Nov. 2013, www.theatlantic.com/photo/2013/11/the-1939-new-york-worlds-fair/100620/.
Torchinsky, Jason. Robot, Take the Wheel: The Road to Autonomous Cars and the Lost Art of Driving. Apollo Publishers, 2019.
“U.S. School Spending Per Pupil Increased for Fifth Consecutive Year, U.S. Census Bureau Reports.” Census. 21 May 2019, www.census.gov/newsroom/press-releases/2019/school-spending.html.
“What Is Distracted Driving?” NHTSA. U.S. Department of Transportation. 2019, www.nhtsa.gov/risky-driving/distracted-driving.


12. ROBOTIC JUSTICE
The Use of Robots in Security and Policing (1970s–2010s)

Introduction

Since antiquity, mechanical and artificial beings have been imagined as protectors of humanity, utilizing their superhuman strength and invulnerability to intervene in times of extreme threat. The vision of robots as protectors is still relevant to the evolving field of robotics, with a new generation of corporations and military-funded robotics projects seeking to create the next generation of robotic soldiers and security enforcers. Though technology undoubtedly enhances security as well as the ability to address some serious threats, the issue is also controversial. As with the debate over the use of robotics and AI in warfare, some argue that a human element is necessary to ensure that law enforcement is handled in a humane manner. Some argue that since the computerized application of the law lacks empathy, emotion, and the capacity for subjective judgment and evaluation, it creates potential moral and ethical problems that should be addressed before the industry moves ahead. Further, it has been argued that the implementation of robotic security systems may be biased, as robots and other mechanized security systems are programmed to reflect the prejudices of human programmers and operators.

Topics Covered in this Chapter Include:
• Automation
• Security systems
• Ethics of automated policing
• Institutionalized prejudice and racism

This Chapter Discusses the Following Source Document:
Metz, Rachel. “Rise of the Robot Security Guards.” MIT Technology Review, November 13, 2014



As discussed in earlier chapters, the dream of mechanical and artificial beings extends as far back as historians can delve into the philosophy that gave rise to modern Western civilization. Depictions of artificial beings in ancient history reveal the underlying motivations that still drive the fields of robotics and artificial intelligence. In many cases, ancient automatons were depicted as machines created to expand human abilities and to address threats that humans were unable to face. Automatons and artificial beings are frequently depicted in ancient fiction and legend as protectors, created to defend humanity against deadly threats. This has remained a common goal in robotics as the field evolved from science fiction to science fact, and in the 2020s a number of corporations and government-funded agencies have invested heavily in creating robots and other intelligent machine systems for the purposes of security and defense.

The Talos Principle

Talos, the famed defender of the Greek island of Crete, has been fancifully called “the first robot” by historians. The legend of Talos has numerous forms, reflecting the diverse culture of mythopoetic storytelling characteristic of the ancient Greek city-states, but the most common comes from his role in the Greek story of Jason, a hero chosen by the gods for a divine purpose, and his stalwart crew, the Argonauts.1 The story of Jason and the Argonauts takes place before the Trojan War. After Jason’s father is killed by his sinister uncle Pelias, Jason is raised by Cheiron, a centaur (half-horse, half-human). At 20 he returns to claim his throne, but Pelias insists that Jason prove himself by bringing back the Golden Fleece, the skin of a mythical golden ram believed to have magical powers. Jason gathers a crew of heroes, holding games to choose the best and bravest in Greece, including such notable figures as Castor and Pollux and the superstrong Hercules. The story of Jason’s journey contains many mini-adventures. On the way home, after victoriously claiming the Golden Fleece, Jason and his crew are forced to pass by the island of Crete, which is guarded by a giant bronze warrior named Talos.2 Versions of Talos’s history differ according to the source. In some retellings, Talos was the last of a race of giant bronze men that once lived on the island. In the Argonautica, the Greek poet Apollonius Rhodius states that Talos and the other bronze giants sprang from ash trees and were the sons of the Titan Cronos. The more common iteration of Talos’s story cements the giant bronze guardian as a key mythical figure in the history of robotics. In this version, Talos was created by Hephaestus, the Greek god of the forge and metalworking and later the patron of engineering. Hephaestus created Talos to protect Crete and gave him


as a gift to King Minos. His hulking metal frame was animated by the magical substance ichor, also known as the “blood of the gods.” This fluid center proves to be Talos’s undoing when members of Jason’s crew, guided by the witch Medea, trick Talos into opening a valve on his ankle that allows the ichor to seep from his body, robbing him of life.3 While Talos primarily appears as a foil for Jason and his crew, the legend of Talos shaped current perceptions of artificial beings. Talos was imagined as a giant, far larger than any human. His outer covering was bronze, making him immune to the weapons of the Greeks. Talos’s purpose was to protect the island of Crete from pirates, a persistent danger for the people of the Grecian isles. The giant Talos was able to simply cast boulders into the ocean from the shore, destroying pirate vessels before they could come ashore. Other Greek myths also depict automatons as guardians. Some of the earliest artificial beings in Greek legend, the mechanical metal dogs Khryseos and Argyreos, were guardians and watchdogs. The choice of metal machinery not only reflected Hephaestus’s role in the Greek pantheon but also the desire to create an effective defender. Unlike a biological creature, the mechanical dogs of Greek myth would be impervious to spears and arrows. This was part of the earliest visions of automata, machines whose capabilities for defense and security transcended those of humanity and the natural world. They were imagined as divine defenders, created by magic but manifesting the qualities of natural elements (like metal and stone) that could be found in Greece. In many ways, these legends are the fusion of quasi-religious imagination and visions of possible futures for the well-known and cherished fields of metalworking and engineering. Another myth often cited as an important step in the history of robotics comes from Jewish mythology and is decidedly mystical in character.
The story of the golem appears in many forms in Jewish myth and legend. A golem is a monstrous figure, usually described as being formed from clay or dirt and brought to life by the insertion of a holy scripture in the figure’s head. Golem stories are used in different ways, but the familiar story of the golem of Prague is often taken as the archetype for this family of myths. The term “golem” is derived from a Hebrew term meaning “shapeless man” or “imperfect.” The term may also be taken to mean “body without a soul,” a distinction important to the way that golem myths evolved over the centuries. The Czech version tells the story of Rabbi Judah Loew ben Bezalel, an important Rabbinic figure in the city of Prague between 1513 and 1609. At the time, Judaism was under threat from Christianity, a fact that has been incorporated into Judaism’s mystical mythology. For


centuries, one of the biggest accusations levied against the Jewish people was the claim that they killed and drained the blood of Christians to use in their religious rituals, especially in the preparation of Passover bread. There are many examples from history of attacks against Jewish people inspired, at least in part, by these claims. The persecution of the Jewish people was common in Prague, so the legend of Rabbi Judah Loew ben Bezalel imagines the rabbi invoking divine magic to protect his people. The rabbi is said to have created a large, powerful figure, featureless except for its arms and legs, out of clay. To bring the creature to life, a scroll printed with the divine name of God was placed inside the golem’s head. In some versions of the myth, the golem was initially created to help with farm labor, being much stronger and more powerful than humans. Other versions specifically say that the golem was created to defend against rampaging Christian violence, and specifically against the acolytes of a new Christian priest. In versions of the story depicting the golem as a magical defender, there are shades of the same speculation as that of ancient Greek and Roman writers. But the story of the golem also demonstrates another side of the issue. In many versions of the story, the golem eventually goes on a rampage, threatening the lives of innocent people, and the rabbi who created the monster is forced to remove the divine name from the golem’s head. This ending reflects the fear of mucking about with unknown, poorly understood forces that defy the laws of nature, and this same fear can be applied to all technology. Fear of what technology might unleash still underpins many fears about robotics and artificial intelligence and, even more specifically, about the possibility of utilizing technological forces for defense.4

Technological Golems

In 1970, SRI International debuted “Shakey,” the world’s first mobile robot. Shakey could move and navigate utilizing a set of wheels, but it represented a more important step forward in the field of robotics. Shakey could also, reportedly, use reason. The engineers at SRI (Stanford Research Institute) created Shakey to try out a wide variety of new innovations in robotics. In addition to being able to move on its own, Shakey could also use a television camera, a radio-linked antenna, and a range finder to examine its environment. The robot’s capabilities were limited by modern standards but represented a major leap forward for the time. With its built-in on-board logic system, the robot was able to perform tasks that required “thinking,” in the form of examining environmental variables and planning strategies or routes to complete certain tasks.5
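Shakey’s route planning had consequences far beyond the robot itself: the A* search algorithm, now a staple of pathfinding, was developed at SRI for Shakey’s navigation work. The sketch below is an illustrative modern rendering of that idea, not SRI’s original code; the grid, costs, and coordinates are invented for the example. Given a map of free and blocked cells, the planner expands the most promising routes first, guided by an estimate of the distance remaining.

```python
import heapq

def astar(grid, start, goal):
    """Plan a route across a grid of free (0) and blocked (1) cells
    using A* search with a Manhattan-distance heuristic."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Heuristic: grid distance remaining to the goal, ignoring obstacles.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    # Each frontier entry: (estimated total cost, cost so far, cell, path).
    frontier = [(h(start), 0, start, [start])]
    visited = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier,
                               (cost + 1 + h((nr, nc)), cost + 1,
                                (nr, nc), path + [(nr, nc)]))
    return None  # the goal is unreachable

# A small "room" with an obstacle wall; the planner routes around it.
room = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
route = astar(room, (0, 0), (2, 3))
```

Shakey also planned at a higher, symbolic level by chaining actions together with the STRIPS planner, but grid search of this kind is the kernel of the route planning described above.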


While Shakey has been largely forgotten except by experts and amateurs with a passion for AI and robotics, the innovations introduced in SRI’s Shakey design were the essential elements needed to create real versions of the legendary mechanical guardians of myth. Like Shakey, a mechanical security system, whether a mobile robot or a disembodied thinking machine connected to sensors and cameras, must be able to interact with, evaluate, and reason based on environmental stimuli, and the rudimentary achievements made to create Shakey proved essential to this evolving field. Even as robotic systems advanced, however, the use of robots for security purposes evolved slowly. The first serious effort to manufacture an automaton capable of assisting with security came fifteen years after SRI debuted Shakey, when the Massachusetts-based Denning Mobile Robotics laboratory led the way, in partnership with the Carnegie Mellon Robotics Institute.

Shakey the Robot in 1972. By SRI International, via Wikimedia.

The robot was named “Denny,” short for Denning Sentry, and represented the very best surveillance and communications technology available to robotics engineers at the time. In his August 1985 article on the subject, Popular Science columnist David Lampe described Denny as one of the “real-world cousins of R2D2.” Lampe was not far off in his depiction, as Denny’s design had been inspired by science fiction versions of robots and, specifically, by the characteristically non-threatening “droids” of the Star Wars fictional universe.6 Denny stood about 4 feet tall and weighed around 400 pounds. Denny was designed to patrol an area, record any activity, and report to a human-monitored control station via remote broadcast; the basic idea was to replace the need for human security guards to monitor facilities.
Denny could cruise at a blistering top speed of 3 miles per hour and could continuously patrol a pre-programmed area for up to 14 hours, covering as much as 10 miles, before needing to return to a station for an 8-hour recharge. Among the machine's more advanced capabilities was the option to have Denny patrol an area by following a set route or by randomly choosing routes from among pre-set options. While patrolling, Denny could detect intruders and smoke and monitor for other variables using microwave and infrared sensors with an active radius of 150 feet. Denny could also be sent to a specific area to investigate. The high-resolution camera mounted to the robot's "head" remained active but was programmed to record only when an unusual signal or phenomenon was encountered. At that point, a transmitter was activated and the view through the Denning Sentry's camera was sent to a facility for review by a human operator. Each machine was also outfitted with an onboard computer programmed to control its movements and to evaluate and process data from the machine's sensors. Using this data, the Denning Sentry could avoid obstacles and track its trajectory. The onboard computer kept detailed records of each maneuver completed, providing a record of any encounter and a way for technicians to address motility or other issues.7 Southern Steel Company of San Antonio, Texas, then one of the largest producers of steel products for the incarceration industry, ordered 600 Denning Sentries to patrol its warehouses and production facility. In 1985, a single Denning Sentry cost around $45,000, the equivalent of about $107,000 in 2020. Given the price tag, the market was limited, and Denning's security robots never became a familiar feature of the industrial world. Large corporations purchased hundreds for a variety of applications, since, unlike human security guards, the machines would never need replacement and would never become injured or fatigued.
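The patrol logic described above, preset or randomly chosen routes, event-triggered recording, and an onboard maneuver log, can be sketched as a simple control loop. Everything in the sketch (the class name, route names, and sensor events) is invented for illustration; it is the pattern, not Denning's actual software.

```python
import random

class PatrolRobot:
    """Toy sketch of a Denning-style sentry loop: routes are chosen either
    in fixed order or randomly from preset options, the camera records only
    when a sensor trips, and every maneuver is logged onboard."""

    def __init__(self, routes, random_mode=False):
        self.routes = routes            # preset route options
        self.random_mode = random_mode  # fixed rotation vs. random choice
        self.log = []                   # onboard record of maneuvers

    def next_route(self, patrol_number):
        if self.random_mode:
            return random.choice(self.routes)
        return self.routes[patrol_number % len(self.routes)]

    def patrol(self, patrol_number, sensor_events):
        route = self.next_route(patrol_number)
        alerts = []
        for waypoint in route:
            self.log.append(("moved_to", waypoint))
            if waypoint in sensor_events:        # intruder or smoke detected
                self.log.append(("recording", waypoint))
                alerts.append(waypoint)          # transmit view to an operator
        return alerts

robot = PatrolRobot(routes=[["lobby", "warehouse", "dock"],
                            ["dock", "office", "lobby"]])
alerts = robot.patrol(0, sensor_events={"warehouse"})
```

The design choice worth noting is that recording and transmission are gated on sensor events, exactly the economy the Denning Sentry used: continuous patrol, but human attention only on anomalies.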
As with many fields altered by automation, the Denning Sentries were designed to replace human work at a very low level. Because of their limitations, Denning Sentries could never replace security guards in populated areas, as the sentries would send out an automated alert whenever any person was encountered. Their primary use was to replace the security guards or trained guard dogs that patrolled unoccupied buildings or offices. It was a limited niche, but Denning proved that this was one area where robotic technology could effectively eliminate the need for human labor. The Denning Sentry was the first real security robot, and other companies, inspired by Denning's initial success and the buzz surrounding the achievement, soon invested in similar creations, but the field did not really advance until the twenty-first century. This is not to say that there weren't major advancements in connected technologies, such as in robotic information gathering and motility, but simply that none of these discoveries led to practical robotic security devices. However, the incremental advancements in computer processing, navigation, and information gathering drove advances in automated security cameras and monitoring equipment and greatly advanced military applications of robotics. The robotic security industry largely languished until the pioneering company Knightscope revived the idea and debuted a new and dramatically more advanced kind of robotic sentry for commercial production.

Advertisement for a Denning security robot, via theoldrobots.com.

Knight in Shining Plastic

In this 2014 article from the MIT Technology Review, science journalist Rachel Metz introduces a new generation of robotic security guards debuted by the Silicon Valley corporation Knightscope.



"RISE OF THE ROBOT SECURITY GUARDS"
by Rachel Metz
MIT Technology Review, November 13, 2014
Source Document

Startup Knightscope is preparing to roll out human-sized robot patrols.

As the sun set on a warm November afternoon, a quartet of five-foot-tall, 300-pound shiny white robots patrolled in front of Building 1 on Microsoft's Silicon Valley campus. Looking like a crew of slick Daleks imbued with the grace of Fred Astaire, they whirred quietly across the concrete in different directions, stopping and turning in place so as to avoid running into trash cans, walls, and other obstacles. The robots managed to appear both cute and intimidating. This friendly-but-not-too-friendly presence is meant to serve them well in jobs like monitoring corporate and college campuses, shopping malls, and schools. Knightscope, a startup based in Mountain View, California, has been busy designing, building, and testing the robot, known as the K5, since 2013. Seven have been built so far, and the company plans to deploy four before the end of the year at an as-yet-unnamed technology company in the area. The robots are designed to detect anomalous behavior, such as someone walking through a building at night, and report back to a remote security center. "This takes away the monotonous and sometimes dangerous work, and leaves the strategic work to law enforcement or private security, depending on the application," Knightscope cofounder and vice president of sales and marketing Stacy Stephens said as a K5 glided nearby. In order to do the kind of work a human security guard would normally do, the K5 uses cameras, sensors, navigation equipment, and electric motors—all packed into its dome-shaped body with a big rechargeable battery and a computer. There are four high-definition cameras (one on each side of the robot), a license-plate recognition camera, four microphones, and a weather sensor (which looks like a DVD-player slot) for measuring barometric pressure, carbon dioxide levels, and temperature. The robots use Wi-Fi or a wireless data network to communicate with each other and with people who can remotely monitor its cameras, microphones, and other sources of data. GPS and a laser ranging instrument help the robots find their way around their patrol area and avoid obstacles when on duty. When they're taken to a new place—such as the Microsoft campus where they were patrolling earlier this month before Knightscope cofounder and CEO William Santana Li spoke at a tech event—a human with a wireless controller shows the robot around to determine the area it will patrol and let it learn about its surroundings. "You give it a base map and then it starts building from that," Stephens says. Knightscope is one of a growing number of companies using robots to help with work traditionally done by humans (see "How Human Robot Teamwork Will Upend Manufacturing" and "Smart Robots Can Now Work Right Next to Auto Workers"), or perhaps replace them altogether (see "How Technology Is Destroying Jobs"). The trend is accelerating as robots are made ever smarter, more agile, and more adaptable to specific tasks. And while most robots do assembly-line work, Knightscope is one of a few companies betting that they could take on other tasks.

Knightscope may not outright replace many security guards soon—over a million of them were employed in the U.S. last year, according to an estimate from the U.S. Department of Labor’s Bureau of Labor Statistics. But the estimated hourly wage these guards earned was more than twice the $6.25 that Knightscope says it will charge for its robots, which could tempt some companies and schools to at least try them out.

If you’re the one who needs help and a robot is nearby, you can press a button near the top of its head to summon someone remotely.

The robots have a battery that could last about 24 hours on a single charge, though the K5 is supposed to monitor its battery life and wheel over to a charging pad when needed. It takes 15 or 20 minutes to refuel.

The company envisions the robots going beyond standard security applications. For example, Stephens suggests an app that would let college students request one to chaperone them across campus at night.

Though the K5 may look friendly and does not carry any weapons, it’s not meant to be messed with. If you walk in front of it, it will stop abruptly. Try to detain it, and after some time its built-in alarm will begin to chirp as a warning while sending a low-level alert to a remote monitoring center. Keep bothering it, and an ear-piercing alarm will sound as it sends another alert, prompting an operator to use Knightscope’s browser-based software to check out the status of the sensors, see what’s happening around the robot, and talk to anyone who may be there harassing it.

But Knightscope has plenty of challenges to solve, both technological and cultural, if the roboguards are to work. For one thing, they will need to prove they can be effective over time, and people will need to feel comfortable in their presence.

Stephens says several dozen potential customers are interested, including lots of security companies weary of high guard turnover. Knightscope hopes to start putting robots to work at a variety of companies in the first half of next year.

The robots will also have to work on their balance. While speaking with Stephens, I noticed that a K5 in the distance had somehow toppled over the edge of the sidewalk onto the parking-lot asphalt several inches below. A couple of Knightscope folks were needed to pull it upright. Unlike human security guards, these robots still cannot right themselves if they fall.8
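The graduated response Metz describes, stop, then chirp with a low-level alert, then a full alarm with operator review, amounts to a simple escalation state machine. A minimal sketch follows; the timing thresholds and state names are invented, not Knightscope's actual values.

```python
def escalation_response(seconds_detained):
    """Toy sketch of a K5-style graduated response. Returns the robot's
    behavior and the alert (if any) sent to the remote monitoring center.
    Thresholds are illustrative placeholders."""
    if seconds_detained == 0:
        return ("patrolling", None)               # nothing in the way
    if seconds_detained < 10:
        return ("stopped", None)                  # obstacle avoidance only
    if seconds_detained < 30:
        return ("chirping", "low-level alert")    # audible warning + notice
    return ("alarm", "operator alert")            # ear-piercing alarm; a human
                                                  # reviews cameras and sensors

status, alert = escalation_response(45)
```

The design point is that the machine never decides anything consequential on its own: each escalation step only raises the priority of the signal routed to a human operator.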

Though Metz noted the obvious weakness of the Knightscope security robots, which could be fatally toppled like some species of tortoise, she did not touch on the more complex issues raised by introducing Knightscope-style sentries into American industry. Within a few years, these issues were being introduced to the American people, and the debate over robotic policing and robotic security was beginning to take shape. Knightscope robots were becoming familiar to visitors to university campuses and some industrial companies. By 2017, around 50 had been sold to users in 13 different states. The company offered several models, from the K5, recommended for outdoor use, to the more limited K1, a stationary pillar that could be erected to monitor key locations like building entrances and exits. Knightscope robots meant to patrol, like the K5, were outfitted with a remarkable array of sensors and cameras that enabled the machines to evaluate their environment in many ways and in real time. The robot could, for instance, read more than 1,000 license plates each minute. When encountering a human in its path, the robot is capable of saying, "Excuse me."

Knightscope security robots in Mountain View, California. By Alison Chaiken, via Wikimedia.

Knightscope autonomous data machines (left to right): K3 indoor, K5 outdoor, K1 stationary, and K7 multi-terrain. By Ejr8yc, via Wikimedia.

There are some, however, who argue that robots like the Knightscope sentries represent an uncomfortable expansion of the surveillance state. A 2016 investigation by the news organization ProPublica examined risk-assessment software that was gradually being implemented around the country to predict crime. The investigation found that bias on the part of the programmers and designers had been built into the software's algorithms. This was not an intentional development, but an unfortunate result of the fact that machines require human instruction and programming. Prejudice, whether based on race, gender, or other factors, can be encoded into the programs used by law enforcement and security agencies to evaluate risk. As ProPublica learned, a commonly used system that computerizes what are called "risk assessments," automated evaluations that predict the likelihood that a person arrested will reoffend in the future, proved to be biased, consistently assigning higher risk ratings to black offenders than to white offenders. As such, the program routinely provided faulty assessments: white offenders given low-level risk assessments offended again, while black offenders assigned high-risk assessments in many cases did not reoffend as predicted.9 The problem is that programs like the one investigated by ProPublica, which is used nationally by police departments to assess risk, are likely the same kind of programs that robotic security firms would use to take this technology to the next level, creating more advanced "thinking" machines capable of evaluating the risk posed by individuals detected through their various surveillance methods. It is not only possible but potentially very likely that unleashing learning robots, with algorithms that enable a machine to learn from past operator input and encounters in the field, could automatically transfer and amplify bias and prejudice. To make matters worse, companies developing learning systems for applications with potential impacts on human welfare and safety are not informed by experts in ethics. The field is thus developing without guidance from those trained and experienced at examining how bias and prejudice can be unwittingly encoded into a system, just as racial and gender prejudice has become ingrained in the institutions of the American state and justice system.10
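ProPublica's central finding was a gap in error rates between groups: one group saw far more false positives (labeled high risk but did not reoffend), the other more false negatives. That measurement can be sketched in a few lines; the sample records below are invented for illustration and are not ProPublica's data.

```python
def error_rates(records):
    """Given (group, predicted_high_risk, reoffended) records, return each
    group's false-positive and false-negative rates -- the disparity at the
    heart of the ProPublica investigation."""
    stats = {}
    for group, predicted_high, reoffended in records:
        g = stats.setdefault(group, {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
        if reoffended:
            g["pos"] += 1
            if not predicted_high:
                g["fn"] += 1          # missed a real reoffender
        else:
            g["neg"] += 1
            if predicted_high:
                g["fp"] += 1          # flagged someone who did not reoffend
    return {group: {"false_positive_rate": g["fp"] / g["neg"],
                    "false_negative_rate": g["fn"] / g["pos"]}
            for group, g in stats.items()}

# Invented toy data: group A is over-flagged, group B is under-flagged.
sample = (
    [("A", True, False)] * 4 + [("A", False, False)] * 6 +
    [("A", True, True)] * 8 + [("A", False, True)] * 2 +
    [("B", True, False)] * 1 + [("B", False, False)] * 9 +
    [("B", True, True)] * 5 + [("B", False, True)] * 5
)
rates = error_rates(sample)
```

A system can look "accurate" overall while its errors fall unevenly across groups; that is exactly why auditing error rates per group, rather than a single accuracy number, matters for any learning system used in security or policing.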


As America and other nations prepare for the inevitable transition from human-mediated to automated security and surveillance, there are some who argue that ethical considerations must play a role in shaping this transformation. Unlike humans, a machine programmed with bias and prejudice will not reconsider the subjective or humanitarian value of the way that programming is applied. It is therefore imperative that those responsible for leading America into this next wave of technological living endeavor to eliminate prejudice and bias at the outset and monitor the process to ensure that the machines built to supplement American life do not amplify or deepen America's ongoing struggle against prejudice.

CONCLUSION

The idea of robotic or artificial security harkens back to some of the earliest myths about artificial intelligence, but the field is still in its infancy as of the 2020s. Advances in robotic security have primarily been driven by corporate and private investment rather than public or governmental development. Though the military has invested in research that has contributed to the development of artificial security, this investment has been relatively minor in comparison to the progress made by private companies. Persistent ethical and moral questions about the use of robotic security devices also limit development in the field, an issue made more pressing by the fact that few laws directly regulate the implementation of robotic security in corporate environments. In addition, opponents of robotic security allege that further development will likely lead to increased job losses as human security officers are replaced with robotic or mechanical counterparts.



DISCUSSION QUESTIONS

• What are some of the ethical issues that might be involved in developing robotic security systems? Explain your answer.
• Why was robotic mobility one of the keys to developing robotic security systems? Use examples from the text to answer.
• In your opinion, are robotic security guards a threat to human security guards? Why or why not?
• Would you be comfortable being under surveillance by robotic systems while at your place of work or while attending school? Why or why not?

Works Used

Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. "Machine Bias." ProPublica, 23 May 2016, www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
Brouwers, Josho. "Talos, the Bronze Guardian." Ancient World Magazine, 1 Oct. 2019, www.ancientworldmagazine.com/articles/talos-bronze-guardian/.
Cassel, David. "Remembering Shakey, the First Intelligent Robot." New Stack, 5 Mar. 2017, thenewstack.io/remembering-shakey-first-intelligent-robot/.
Engelberger, Joseph F. Robotics in Service. The MIT P, 1989.
Lampe, David. "Robot Sentries." Popular Science, vol. 227, no. 2, Aug. 1985, books.google.com.
Metz, Rachel. "Rise of the Robot Security Guards." MIT Technology Review, 13 Nov. 2014, www.technologyreview.com/2014/11/13/170454/rise-of-the-robot-security-guards/.
Oreck, Alden. "Modern Jewish History: The Golem." Jewish Virtual Library, 2020, www.jewishvirtuallibrary.org/the-golem.
Prososki, Lisa. "In Search of Myths and Heroes. Jason & The Argonauts." PBS, 2005, www.pbs.org/mythsandheroes/myths_four_jason.html.
Simon, Matt. "The Tricky Ethics of Knightscope's Crime-Fighting Robots." Wired, 21 Dec. 2017, www.wired.com/story/the-tricky-ethics-of-knightscopes-crime-fighting-robots/.
"Talos." Theoi Project, 2017, www.theoi.com/Gigante/GiganteTalos.html.


13. WHERE NO ONE CAN GO
Robots in Dangerous Places and Dangerous Jobs (1980s–Present)

Introduction

As discussed in previous chapters, one of the central goals of the field of robotics is to transcend the limitations of the human body and mind. Robotic systems have exemplified this goal through the use of robots and automated systems to handle jobs, or to explore places, too dangerous for humans. Since the 1980s, scientific institutions have used robots to survey and explore some of the most hazardous landscapes on Earth, and the ability to create purpose-built machines with specialized scientific and exploratory capabilities has greatly advanced human knowledge in key areas. Robotic exploration of dangerous areas became a reality in the 1980s, and a number of hazardous-environment machines have since been used successfully both in research and in humanitarian rescue operations.

Topics Covered in this Chapter Include:

• Volcanology
• Radioactive waste
• Nuclear disasters
• Bomb deactivation
• Japanese earthquake and tsunami of 2011

This Chapter Discusses the Following Source Document:

Beiser, Vince, "The Robot Assault on Fukushima," Wired, April 26, 2018



Though the popular image of robots typically focuses on the fantasy of intelligent, thinking machines or on fictional universes like Star Wars or Star Trek, many of the robots created by engineers and the companies and institutions that employ them are purpose-built machines designed and constructed for a specific function. This is the description that best fits most robotics at the dawn of the 2020s: robots created to solve a specific problem. In many cases, innovative engineers have found ways to use robots to investigate, survey, and handle jobs that are simply too dangerous for humans. These machines not only advance science and engineering but quite often save lives or provide data that may help avoid or mitigate future disasters.

Robots of the Atomic Wasteland

On March 28, 1979, the Three Mile Island Unit 2 nuclear reactor, located just outside the small industrial city of Middletown, Pennsylvania, experienced what experts in nuclear engineering refer to as a "partial meltdown." The accident began at four in the morning, when plant operators were notified of a mechanical or electrical failure that prevented the operation of the "feedwater pumps," devices that shunt water to the reactor's steam generators. The steam generators remove heat from the reactor core; without them, the core overheats, causing catastrophic mechanical failures. When this occurred, the plant's turbine-generator, and then the reactor itself, were shut down by automatic safety systems. Pressure in the primary system quickly began to increase, and to relieve it, a pilot-operated relief valve was opened. The valve should have closed once pressure returned to normal, but it stuck open. The facility's staff did not realize this, because an indicator falsely showed that the valve had closed successfully. Cooling water was thus being ejected through the valve, causing the reactor to overheat. Historians and specialists who have studied the accident have identified a string of mechanical failures that exacerbated the problem and prevented emergency systems from operating as expected, and the reactor staff, acting on faulty information, made matters worse. Radiation leaked through the facility and into the surrounding area.1

In the months after the Three Mile Island meltdown, researchers studied radiation levels and how the flow of radiation may have affected humans and other animals living in the surrounding area. Officially, no deaths were attributed directly to the accident, but whether the meltdown can be linked to illness and death remains controversial.
In truth, US nuclear energy officials were unable to determine how much radiation was released in the accident. The United States Nuclear Regulatory Commission (USNRC) continues to assert that the level of radiation released was insufficient to cause serious illness or injury and was comparable to the radiation received from familiar activities such as having an x-ray. Activists allege that the federal government and the Pennsylvania state government took steps to hide the aftermath of the incident. It has been proven, in fact, that physicians were pressured to avoid reporting cancers and tumors and that statistics on death rates from potentially related causes were purposefully hidden. Activists called attention to these suspicious governmental activities, as well as to the fact that the infant mortality rate in the nearby city of Harrisburg mysteriously rose to triple the previous rate. Local residents, suspicious of government surveys that purported to find few, if any, impacts on Pennsylvania communities, conducted their own studies. Independent surveys found substantial increases in cancer, leukemia, birth defects, rashes, hair loss, and other symptoms known to be associated with radiation poisoning.2

The government's attempts to mitigate the public reaction to the disaster did not work. Demonstrations, investigations, and lawsuits over subsequent years fundamentally changed the US nuclear industry. New safety precautions and systems were developed, oversight was intensified, and many other regulatory and safety procedures were implemented in an effort to restore public confidence in the nuclear industry. One of the initial problems was that the information available to the public and to specialists trying to study the incident was misleading and conflicting.
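The failure chain at the heart of the accident, a relief valve stuck open while its indicator reported the close command rather than the valve's actual position, can be illustrated with a toy control-loop sketch. All quantities, rates, and names below are invented for illustration; this is not a physical model of the reactor.

```python
def simulate_relief_valve(minutes, valve_stuck_open=True):
    """Toy model of the Three Mile Island indicator failure: the panel light
    reflects the *command* sent to the valve, not the valve's true position,
    so operators see 'closed' while coolant keeps escaping."""
    coolant = 100.0                       # arbitrary units of cooling water
    temperature = 300.0                   # arbitrary core temperature
    commanded_closed = True               # operators order the valve shut
    indicator = "closed" if commanded_closed else "open"   # mirrors command
    actually_open = valve_stuck_open      # the valve fails to seat
    for _ in range(minutes):
        if actually_open:
            coolant -= 2.0                # coolant ejected through the valve
            temperature += 5.0            # less coolant, more core heat
    return {"indicator": indicator, "coolant": coolant,
            "temperature": temperature}

state = simulate_relief_valve(minutes=30)
```

The point of the sketch is the mismatch: the returned `indicator` stays "closed" no matter what the loop does to the coolant, which is precisely the kind of misleading signal the operators were acting on.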
It was important to obtain more detailed information about exactly how and why the reactor had malfunctioned if future incidents were to be avoided, and one of the key tools that enabled researchers to do so was a set of mechanical investigators built by robotics pioneer William "Red" Whittaker. Whittaker was working with a team of engineers and roboticists at Carnegie Mellon University shortly after the Three Mile Island disaster, and his team built a series of robots to gather data and remove debris from the inactive reactor. Human investigation, while possible and ostensibly safe, was still too dangerous in some parts of the facility, particularly where large doses of radioactive material may have dispersed. The first of Whittaker's robots was a rover, deployed in April 1984. The six-wheeled robot was outfitted with lights and cameras and connected to an operating station by a long, ribbon-like tether that both powered the robot and rapidly delivered video and other data back to operators stationed safely away from the site.



Protest against nuclear power in Harrisburg in 1979. National Archives and Records Administration (NARA), via Wikimedia.


The rover was lowered into the basement of the facility through a hatch in one of the floors, and operators guided the machine through the basement chambers, which were filled with mud, water, and debris. This provided the first images of the contaminated basement. Over the next four years, the robot was redeployed many times, often outfitted with new specialized tools that allowed it to better handle a variety of jobs. Over the course of its years at Three Mile Island, the rover was used to clean surfaces, to scoop and package samples of mud and other material, and to vacuum sludge and wastewater. A second rover was also created and deployed, this one outfitted with a device that allowed it to drill into the ground and extract core samples that could be used to determine how deeply radiation had infiltrated the soil. A third, more complex robot, appropriately nicknamed "Workhorse," was built but never deployed after it was determined to be too complicated to use. By 1990, most of the cleanup work was done, and most of it had been completed by humans (in radiation suits) and Whittaker's robots. The facility was permanently shuttered to allow the remaining radiation to naturally decay.3

Global faith in nuclear energy was further damaged by an April 1986 accident at the Chernobyl Nuclear Power Station in Ukraine. The disaster was more severe than the Three Mile Island accident and was the result of a flawed reactor core design combined with inadequately trained staff. At least five percent of the Chernobyl reactor core was released into the environment, spewing radioactive waste across Europe. Two workers died that night, and 28 more died within weeks as a result of radiation poisoning.
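Shuttering a contaminated site to let its radiation "naturally decay" follows the standard half-life relationship, N(t) = N0 * (1/2)^(t / t_half). A quick sketch using the roughly 30-year half-life of cesium-137, one of the long-lived contaminants in accidents like these, shows why such timelines stretch across decades (the 30-year figure is a rounded real value; the rest is illustration):

```python
def remaining_fraction(years, half_life_years):
    """Fraction of a radioactive isotope remaining after `years`,
    from N(t) = N0 * 0.5 ** (t / t_half)."""
    return 0.5 ** (years / half_life_years)

cs137_half_life = 30.0     # cesium-137 half-life, roughly 30 years
after_30 = remaining_fraction(30, cs137_half_life)   # half remains
after_60 = remaining_fraction(60, cs137_half_life)   # a quarter remains
```

Even sixty years after a release, about a quarter of the original cesium-137 activity is still present, which is why decay alone is a strategy measured in generations, not years.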
At least 6,500 cases of thyroid cancer were diagnosed in the years that followed in the areas most directly affected by the radiation leak, and it is believed that thousands of additional illnesses and deaths resulted from the disaster. The Russian and Ukrainian governments did not have the technological capacity to use robotic assistants in their cleanup effort, but American institutions later provided help in this regard.4

Thyroid cancers greatly increased in areas affected by the Chernobyl disaster. By ceiocaciaca—Demidchik YE, Demidchik EP, Saenko VA, et al., via Wikimedia.

In 1999, thanks to the fall of the Iron Curtain and improved relations between the United States and the former Eastern European powers, researchers from the Robotics Institute of the Carnegie Mellon University School of Computer Science teamed with the private technology company RedZone Robotics, Inc., to assist Ukrainian researchers by donating an exploration robot, PIONEER, to help gather data on the former Chernobyl reactor site. PIONEER was designed with advanced mobility and tools that enabled it to clear debris and collect samples. Data from its cameras and sensors was delivered to researchers in 3D, thanks to then-cutting-edge visual data processing software. Chernobyl scientists planned to use the data regarding structural damage from the explosion to create new remediation and stabilization plans to minimize exposure from the still deadly site.5

More recently, a 2011 earthquake and associated tsunami in Japan created the most devastating nuclear disaster in modern history. In this article from Wired, journalist Vince Beiser explains how robots are being used in a cleanup effort that is expected to take decades to complete:



Portraits of deceased Chernobyl “liquidators”—civil and military personnel who took part in cleanup efforts immediately following the meltdown. By MHM55, via Wikimedia.

“THE ROBOT ASSAULT ON FUKUSHIMA” by Vince Beiser

Wired, April 26, 2018 Source Document

The 2011 earthquake and tsunami in Japan triggered a devastating catastrophe in one of the country's largest nuclear power plants. The cleanup will take decades, and it's no job for humans.

THE NIGHT BEFORE the mission, Kenji Matsuzaki could not sleep. For more than a year, Matsuzaki and a team of engineers had been developing their little robot—a bread-loaf-sized, red and white machine equipped with five propellers, a transparent dome, front and rear video cameras, and an array of lights and sensors. Nicknamed Little Sunfish, it was engineered to operate underwater, in total darkness, amid intense radiation. And after three months of testing, training, and fine-tuning, it was deemed ready to fulfill its mission: to find and photograph the melted-down radioactive fuel that had gone missing inside the Fukushima Daiichi nuclear power plant. More than six years had passed since an earthquake and tsunami hammered northeastern Japan and reduced the Fukushima facility to radioactive ruin. In all that time, no one had been able to locate the hundreds of tons of fuel inside the

“The Robot Assault on Fukushima” continued three reactors that had suffered core meltdowns. The uranium fuel had overheated, turned into lava, and burned through its steel container. That much was known. What happened after that was the big question. Did all the fuel flow out of the reactors, or was some still inside? Did it pile up in a heap, spread out in a puddle, spatter on the walls? Without knowing the answers to those questions, it was nearly impossible to devise a plan to get rid of it. And getting rid of it is imperative. Every day, as much as 165 tons of groundwater seeps into the reactors, becoming contaminated with radiation. And there’s always the possibility that another earthquake or some other disaster could rupture the reactors again, sending radiation spilling out into the air, sea, or both. Human beings couldn’t go into the heart of Fukushima’s reactors to find the missing fuel, though—at least not without absorbing a lethal dose of radiation. The job would have to be done by robots. But no robot had ever carried out such a mission before. Many had already tried and failed. Debris tripped them up. Yard-thick concrete walls threatened to block their wireless signals. Radiation fouled up their microprocessors and camera components. And so it fell to Matsuzaki, a shy-eyed, 41-year-old senior scientist with Toshiba’s nuclear technology branch, to help build a machine that wouldn’t end up as another one of the robot corpses already littering the reactors. Just getting the Sunfish and its support gear into position inside the enormous concrete building that housed one of the crippled reactors took two days. Four separate teams took turns setting up the control panel, cable drum, and other equipment the robot would need to function. Even in full protective bodysuits, each group of workers could
spend only a few minutes inside the structure, working by the light of portable electric lamps amid a thicket of machinery, pipes, and catwalks. When one team absorbed its maximum permitted daily dose of radiation, it was replaced by another group. Matsuzaki himself made two forays inside to put the final touches on the Sunfish, sweating inside his face mask and bodysuit in the summer heat, his nerves jumping each time his portable monitor dinged to indicate he’d received another increment of his allowable radiation dose. The plan was for the Sunfish to spend three days mapping the debris and searching for signs of the missing fuel. Matsuzaki would monitor its progress from a control room about 500 yards away. He would be joined by a half-dozen top officials from his employer, Toshiba, and Tokyo Electric Power Company (Tepco), the mammoth utility that owns the plant. His success—or failure—would be broadcast daily around the world. Beyond the immediate danger, cleaning up Fukushima remains critical to repairing the image of Japan’s energy industry. In the wake of the disaster, Japan shut down every one of its dozens of nuclear plants, which had provided some 27 percent of the nation’s power. To cover the loss, it had to massively increase imports of expensive fossil fuels. A few nuclear plants have since been permitted to restart, following years of safety upgrades, but Fukushima cost the industry much of its public support. Polls consistently show that a majority of the public opposes nuclear power. Two of Japan’s former prime ministers, including the one in office at the time of the disaster, have flipped from supporting nuclear plants to calling for their elimination.
The disaster also dealt a severe blow to the global nuclear industry, which had been gaining favor even among some environmentalists as a carbonfree alternative to fossil fuels. In the aftermath of the meltdown, Germany announced it would phase out nuclear power altogether, Vietnam dropped plans to build reactors, and the whole industry was thrown on the defensive. Every proposed new reactor now has to answer the question: How do we know this won’t be another Fukushima? Small wonder that in the nights leading up to the mission, Matsuzaki was feeling the pressure. “I’ve been having nightmares about failing,” he confessed to his boss, Akira Tsuyuki. “Me too,” Tsuyuki said. Late at night on July 18, 2017, the mission start time just a few hours away, Matsuzaki lay awake, wondering whether his team’s technology would be any match for Fukushima. THE EARTHQUAKE ON March 11, 2011, was the biggest ever recorded in Japanese history, a 9.0 monster that devastated northeastern Japan and triggered a series of tsunamis that slammed into the coast, killing nearly 16,000 people. The tsunamis also knocked out power to the Fukushima Daiichi plant, shutting down the pumps needed to keep cooling water circulating in the reactor cores. Over the next several days, as Tepco engineers worked by flashlight to regain control, the fuel in three of the plant’s six reactors—Units 1, 2, and 3—melted down. Gases unleashed by the damage exploded, sending plumes of radioactive particles like iodine, cesium, and plutonium into the atmosphere. The government ordered everyone within a 12-mile radius to evacuate, with about 165,000 people eventually displaced. Government officials originally estimated it would take about 40 years and $50 billion to clean up the plant, decontaminate the surrounding area, and compensate the disaster’s victims. In December 2016, they more than tripled that estimate to $188
billion. “We have never experienced a disaster as big as Fukushima,” Hiroshige Seko, the head of Japan’s Ministry of Economy, Trade, and Industry, told reporters at the time, according to Bloomberg. “With our limited knowledge, it was very difficult to make the previous forecast.” The Fukushima cleanup is a project far bigger and more complex than those of even the world’s worst previous nuclear catastrophes. Chernobyl was literally covered up: The Soviets simply encased the whole thing in concrete and steel. Three Mile Island was tiny by comparison. Only a single reactor melted down, and none of its fuel escaped. “Fukushima is orders of magnitude more difficult,” says Lake Barrett, an American who oversaw the cleanup of Three Mile Island and who signed on as a consultant to Tepco and the Japanese government in 2013.

The Hot Zone

Following the meltdown, nearly 165,000 people had to evacuate the area surrounding the Fukushima plant to avoid radioactive exposure. Today, even after extensive cleanup efforts, 50,000 people still can’t go home.

In the first chaotic weeks after the meltdown, with radiation levels far too intense for anyone to work inside the reactors, Tepco scrambled to deploy robots to assess and contain the damage. Tractor-treaded bots from iRobot, drones from Honeywell, and a prototype disaster-response mech from Tohoku University scouted the rubble-strewn facility and tried to measure the intensity of the radiation. A remote-controlled concrete pumping truck was adapted so that its extendable spout could pour water into the reactors, cooling and stabilizing the overheated chambers. In the months and years that followed, Fukushima became both a market and a proving ground for ever-advancing robot technologies
designed to operate in hazardous conditions. Remote-controlled front-end loaders, backhoes, and other heavy equipment were put to work breaking up radioactive debris and loading it onto remote-controlled dump trucks. A four-legged walking robot investigated the reactor buildings. Robots with 3-D scanners were sent in to gather imagery and map radiation levels. Swimming robots inspected pools where spent fuel rods were stored, taking pictures. But none of these robots were capable of penetrating the innermost areas of the reactors. In August 2013, the Japanese government assembled a consortium of public utilities and private companies, including Mitsubishi, Hitachi, and Toshiba, to create robots specifically for the most challenging environments. Dubbed the International Research Institute for Nuclear Decommissioning, it has developed some 20 machines that have been deployed onsite. Their ranks include a snakelike bot that crawled through a tiny accessway into Unit 1, then bent itself into a more stable U-shape to explore inside. Then there was the Scorpion, a tank-tread-driven machine with a camera mounted on an elevating “tail” that was sent into Unit 2. The Japanese government is bankrolling a $100 million, state-of-the-art R&D center near the nuclear plant where robot operators train on digital models of the reactors in a giant 3-D Holo Stage and on life-size physical mock-ups. But even with the massive government investment, many of the new robots still couldn’t hack it inside the reactors. The camera on one of them, sent to clear a path for the Scorpion, was shut down by radiation; the Scorpion itself got tripped up by fallen debris. The first version of the snakelike
bot got stuck; the second did better but failed to find any melted fuel. “It’s very difficult to design a robot to operate in an unknown environment,” says Hajime Asama, a professor at the University of Tokyo who was one of the first roboticists the government turned to for help. “Until we send the bot in, we don’t know what the conditions are. And after it’s sent, we can’t change it.” Kenji Matsuzaki has worked in Toshiba’s nuclear technology branch for more than 10 years, and by May 2016, when he was assigned to the team developing a robot to explore inside Unit 3 of Fukushima, he was familiar with the plant’s basic architecture. All six of its reactors are boiling-water reactors, a type designed in the late 1960s and early 1970s and found all over the world, including in the United States. They generate electricity by circulating water through their infernally hot cores, converting it to steam that is used to turn turbine generators. Each reactor has three containers set one inside another like Russian nesting dolls. The smallest container, a steel capsule about the length of a tennis court, is called the reactor pressure vessel. That’s where the nuclear fission reaction takes place, powered by fuel composed of uranium dioxide baked into ceramic pellets. This capsule is enclosed inside a primary containment vessel, a concrete and steel structure shaped like a massive light bulb, designed to capture any radiation that might accidentally escape. The containment vessel in turn is housed inside the reactor building, a concrete and metal rectangle that offers only minimal protection from radiation. Technicians in protective gear can work for short periods inside the reactor building, but they can’t enter the far more radioactive containment vessel, which is where they were likely to find at
least some of the missing fuel. Building a robot that could get inside and maneuver around the containment vessel presented several unique challenges. First, the containment vessel was only practically accessible through a 5.5-inch circular maintenance opening about 8 feet above the floor of the reactor building, so the robot would have to be small. Second, because the containment vessel had been pumped full of water to cool it down, the robot would have to be able to swim. Third, since the water and thick walls would defeat wireless signals, this small, swimming robot would need to be powerful enough to move underwater while dragging as much as 65 yards of electric cable behind it. It took months of research, experimentation, and testing in Toshiba’s labs and in an enormous simulation tank at the government-run Port and Airport Research Institute to balance all these capabilities inside the little machine. Matsuzaki’s team had to try different configurations of propellers, cameras, and sensors, boost the power of the propeller motors, develop a new type of coating to make the cable move more smoothly, and ensure the whole package could withstand a blistering level of radiation. At midnight on July 19, the day the Sunfish was scheduled to make its first foray into the reactor, Matsuzaki’s alarm went off in his hotel room. He and his team were staying in Iwaki, the closest habitable city with a hotel, about an hour south of the plant. Starting their day in the dark of night was the only way to have enough time to drive to the plant, suit up in protective gear, and hold a last round of meetings before their start time. That would give them about eight hours; by noon it would be too hot inside the reactor building for the technicians monitoring the robot to do their jobs.
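The mission constraints described above (fit through the 5.5-inch opening, operate submerged, tow up to 65 yards of cable) act as a feasibility filter on candidate designs. The sketch below is purely illustrative — the candidate designs and any numbers beyond those in the text are invented — but it shows how constraints like these rule out all but a very specific kind of machine:

```python
# Illustrative sketch (not Toshiba's actual design process): screening
# hypothetical robot designs against the mission constraints in the text.
from dataclasses import dataclass

PORT_DIAMETER_IN = 5.5   # maintenance opening the robot must pass through
MAX_TETHER_YD = 65       # electric cable the robot must drag behind it

@dataclass
class Design:
    name: str
    diameter_in: float   # widest cross-section of the hull
    waterproof: bool     # the containment vessel is flooded
    tether_yd: float     # cable length the thrusters can tow

def feasible(d: Design) -> bool:
    """A design is viable only if it satisfies every mission constraint."""
    return (d.diameter_in < PORT_DIAMETER_IN
            and d.waterproof
            and d.tether_yd >= MAX_TETHER_YD)

candidates = [
    Design("tracked crawler", 9.0, False, 80),  # too wide, not waterproof
    Design("mini swimmer", 5.1, True, 70),      # Sunfish-like design
]
viable = [d.name for d in candidates if feasible(d)]
print(viable)  # → ['mini swimmer']
```

Only the small, waterproof, long-tethered swimmer survives the screen, which is why the team ended up with a machine shaped like the Sunfish rather than a conventional crawler.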
At about 4:30 am, a group of Toshiba techs in full protective gear darted into the reactor building. They fast-walked to the outer wall of the containment vessel and climbed a step ladder up to the opening where the Sunfish and its equipment had been pre-positioned. They unsealed the valve over the opening, then pushed in a heavy guiding pipe, with the Sunfish at its tip, all the way through to the other side. Slowly and carefully, they angled the pipe until the bot slid into the water below. Inside, it was completely dark. On their monitors in the control room, Matsuzaki’s team, connected to the Sunfish’s controls via the electric cable, could see only a narrow swath cut through the turbid water by the Sunfish’s lights. Seated at a long table, one technician “drove” the Sunfish with a videogame-type controller. Another reeled its cable in and out, keeping it taut so it wouldn’t get tangled as the bot swam this way and that. A third did his best to estimate the machine’s position using a 3-D software model of the containment vessel. Matsuzaki oversaw them all, trying to forget about the platoon of corporate officials watching over his shoulder. The first day, the Sunfish spent most of its time reconnoitering. The damage inside the containment vessel was worse than expected. Unidentifiable clumps of pebble-sized debris and pieces of half-destroyed equipment littered the floor. But there was no sign of the fuel, and after eight hours of searching, the team pulled the Sunfish back to the surface. They gave it a rest the next day while they discussed their findings and strategized their next steps. The following morning, they sent the Sunfish back into the water. The team drove it slowly and carefully, but time after time, the bot’s powerful propellers would stir up a blinding cloud of sediment, forcing them to wait until the water
cleared again. After several hours of maneuvering, and with the noon deadline looming, Matsuzaki was growing nervous. Then, something startling appeared on the monitors. “What is that?” said Matsuzaki. Everyone began talking at once and pointing to what they saw on the screens: murky glimpses of what appeared to be stalactites of something dripping like candle wax from the bottom of the reactor pressure vessel. They’d found the first signs of the missing fuel. They maneuvered the Sunfish around the area, documenting as much as possible, before pulling the bot out. When Matsuzaki declared the mission complete, the control room burst into applause. BY NOW, MUCH of Fukushima Daiichi itself, an expansive complex covering some 860 acres, is a lot safer than you’d expect. Most areas have been decontaminated to the point where full bodysuits are no longer required. The 5,000-plus workers tasked with cleaning the place up have cut down hundreds of the cherry trees that used to enliven the grounds, torn up and paved over once-grassy open areas, and scrubbed down buildings. They have covered the seafloor just off the coast with clay to seal in cesium that seeped into the mud after the disaster. Using an enormous, purpose-built fuel-handling machine, they have removed the hundreds of spent uranium fuel rods from Unit 4, a reactor that was damaged by an explosion but did not melt down. Still, when I visited the site last December with Lake Barrett, Tepco’s American consultant, we had to put on gloves, safety glasses, surgical masks, three pairs of socks, and plastic booties over our
shoes, as well as a personal radiation detector, before being allowed inside the facility. At 72, Barrett is tall, fit, and astonishingly energetic. I first met him at the Narita airport outside Tokyo, where he bounced right off a 20-hour trip from his home in Florida, joined me in a car without stopping for so much as a cup of coffee, and talked cheerily for the entire two-hour drive to Fukushima prefecture. When Barrett heard the first news reports about the disaster, he “didn’t think much of it,” he says. “There’s always so much hype around these things.” Then he saw the picture of Unit 1 exploding. “I said, ‘Holy s***. I know exactly what that was.’ I knew they were in deep doo-doo.” When the call came to help out, he didn’t hesitate. “It’s personal for me,” he says. “Japan was the only country that helped us at Three Mile Island. We owe Japan.” From atop a small hill, once covered with grass and now encased in concrete, Barrett and I survey the trio of hulking buildings outlined against the blues of the winter sky and the Pacific Ocean behind them. Remotely operated orange-and-white cranes lean over them like reverent metal giraffes. These are the reactor buildings: the intractable core of the disaster zone, the radioactive redoubts the robots must penetrate. Each poses a unique challenge. The amount and type of damage inflicted on each is different, as is the depth of the water flooding their bases. Of course, at the heart of each is a mess of melted fuel, presumed to have flowed in different ways to different places. Less than half a mile from these three reactors sits Unit 5, one of the three other reactors that had
been shut down for regular maintenance when the tsunami hit. Since it escaped largely unscathed and is nearly identical to the damaged reactors, Tepco engineers use it to plan robot missions. Inside is a baffling maze of machines, ducts, cables, and catwalks. “You can see how hard it is to run the robots around in here,” Barrett says. We navigate our way through the building to the containment vessel. “That’s just like where the Sunfish went in,” he says, pointing up to an unassuming circular opening in the wall of the vessel. We enter the containment vessel and make our way through a narrow doorway into a chamber below the reactor pressure vessel. Control rod assemblies stud the reactor vessel’s underside; we have to crouch to avoid bumping our heads on them. Pointing out key areas and components, Barrett walks me through the current theories on what happened to the fuel in each of the meltdown units. “No one knows if the lava made a nice neat vertical pile or whether it flowed sideways,” he says. “Hot molten fuel could have fallen into the water and caused a steam explosion that would have blown it everywhere.” In Unit 3, at least, thanks to the Sunfish, Tepco is relatively certain about a few things. The pictures it took show that the control-rod mechanisms at the bottom of the reactor vessel disintegrated. Molten fuel mixed with melted metal dripped down through the openings they left behind, presumably creating the stalactites seen in the videos. The lava-like mixture burned through both the steel grate beneath the reactor pressure vessel and a refrigerator-sized machine used to insert the control rods, and some of it dripped down to the floor of the containment vessel. There also appear to be chunks of fuel on the vessel’s walls.
Inside Unit 3

Each reactor is made up of three containers, one set inside another, that hold critical equipment.

That still leaves an awful lot unknown. At the end of the day, “how much did we learn from the Sunfish mission?” Barrett asks. “It was a step, not a leap. We’re getting closer and closer, but we have a long, long way to go.” Tepco is continuing its efforts to scout the inside of the reactors. In January, a robotic probe using a remote-controlled camera mounted on a long pole spotted for the first time what appears to be melted fuel inside Unit 2. There may be another Sunfish mission, though it won’t be the same robot that found the fuel in Unit 3. Despite emerging from the reactor undamaged, it had still absorbed a dangerous amount of radioactivity. Tepco engineers sealed it in a steel cask and interred it with other radioactive waste on the plant site. Limited and uncertain as the Sunfish’s findings are, they have helped move the ball forward. Engineers have now begun thinking about how to build the next generation of robots that will have to carry out the most complicated undertaking of all: removing the melted fuel. Their first challenge will be enabling the bots to reach their target. “These are cramped spaces filled with huge pieces of equipment that weigh many tons. You have to cut them up in pieces and pull them out,” Barrett says. One idea currently in favor is to build a massive 20-foot robot arm that would enter the reactor building on rails, reach into the reactor pressure vessel, and scoop up the fuel. Another is to send in a bot the size of a small refrigerator on tractor treads, equipped with cutting and gripping tools to wrangle debris. A second robot would lift the detritus into containers, seal it, and put it on a conveyor belt to the outside.
Either system will take years to develop. Either or both might fail. Tepco has pegged 2021 as the target year to begin removing fuel debris. How long might the entire Fukushima cleanup take? “Good question. Nobody knows. No one in human history has experience with this,” says Naoaki Okuzumi, a senior manager with the decommissioning institute. “The government says 30 to 40 years. I think that is optimistic.” While the robots’ work inside Fukushima Daiichi drags on, human beings who once lived near the plant are waiting to go home. The national government has decontaminated several towns and urged residents to return. At the time of my visit in December, though, roughly 130 square miles of land was still off-limits, including the better part of a town called Okuma, perched in the hills a few miles from the plant. Yoshihiro Takada, a former resident who now works with the local government agency in charge of rebuilding, agreed to show me around. Takada spent almost his entire life in Okuma and had to escape with his wife, child, and parents when the disaster hit. They’ve relocated to another town 65 miles away. I met up with Takada in a parking lot just outside the exclusion zone, where we put on full-body Tyvek suits, face masks, gloves, socks, and booties over our shoes to protect us from the particles of cesium and strontium. Inhaling even a dust speck of one of those isotopes can be dangerous. That’s part of what makes radiation so terrifying: You can’t feel it, see it, or smell it. It can kill you without you ever knowing you encountered it. There was no one in the train station, the barbershop, the restaurants, or the stores. The modest houses and apartment buildings on the
residential streets were all empty. The only sound I heard as we walked down the middle of the deserted main street was the chirping of clueless birds who didn’t realize they’d chosen to nest in a radioactive hot zone. “I remember this place—their pizza was so good,” Takada says, gesturing at a shuttered restaurant as we walk through town. Several shop windows have been smashed by wild boar that have come down from the hills to ransack the deserted town for food. Cars sit in driveways half-hidden by overgrown weeds. Takada only occasionally checks in on his own house. “Rats are running all over it inside. There are droppings and garbage all over,” he says. The area around Fukushima is mostly scenic farmland fringed with thickly wooded hills. But drive along practically any road and you pass fields filled with rows and rows of boulder-sized, black polypropylene bags. They are filled with contaminated earth; as part of the cleanup, a layer of topsoil is being scraped up from gardens, schoolyards, and fields all across the region. Roughly 20 million of the bags are scattered around the prefecture. Many of them will eventually be moved to the outskirts of Fukushima Daiichi itself for indefinite storage, along with an ever-growing array of tanks holding the radioactive water Tepco continues to pump out of the reactors. Ultimately, there is no technology that can simply fix what happened at Fukushima. The only certainty is that it will be a slow, incremental, frustrating process that may not even be completed in Kenji Matsuzaki’s lifetime. For now, all the scientists, engineers, and their allies can do is keep the radioactivity under control, track down its source, and try to capture it. But first, they need to create the robots to do it.6
International Atomic Energy Agency staff at Fukushima Unit 4 in 2013. By IAEA Imagebank, via Wikimedia.

Containment and Disposal

Another area in which robotics has been able to save humans from potential death and danger came in the 1970s through military research, with the introduction of the world’s first robot designed specifically to locate and dispose of explosive devices. Since the 1970s, the idea of bomb disposal robotics has migrated into American popular culture. Television series and films have featured explosive-handling machines in operation, and many Americans might even recognize the familiar claw-wielding rovers by sight, but in the 1970s, bomb disposal robotics was a new and very cutting-edge field. Unlike the specialized robots designed for exploring space or maneuvering the environments of the world’s nuclear disaster sites, bomb disposal robots have long been relatively crude, and this might even be seen as a sort of tradition extending forward from the introduction of bomb disposal robotics in the 1970s. The first machine built to remotely handle
and dispose of an explosive device was created from the modified chassis of an electric wheelbarrow by British Lieutenant-Colonel “Peter” Miller in 1972. Miller’s device was not really a robot, in the modern sense, but was more of an electrically powered cart that could be attached to a vehicle and used to tow away an explosive device without risking the life of an explosive ordnance disposal officer. The idea quickly caught on, and private companies took over. Unfortunately for Miller, he was barred from participating in the commercialization of his idea because the UK military claimed he had simply been doing his job by developing the prototype bomb disposal device. Nevertheless, Miller’s “Wheelbarrow” robot and other similar devices were soon used around the world. Devices like the Wheelbarrow have been destroyed more than 400 times in operation since 1972, thus potentially saving at least 400 lives, but the early technology that Miller implemented was only the beginning of an evolving robotics subfield. The science of bomb disposal advanced alongside computational capabilities and incorporated new innovations in machine manipulation and movement. Still, it took some time for more innovative designs to come into widespread use. One of the first major advances also came through British engineering, when a device called a “pigstick,” a term for a strong water jet, was added to the initial Wheelbarrow robot, enabling the machine to disable certain kinds of bombs rather than simply carrying or disposing of them. The device enabled the robot to shoot high-powered jets of water at the part of the bomb containing the power supply. If the power can be disrupted, the bomb is rendered inert. This strategy is not always successful, however, as some explosive devices have back-up detonation systems built in as well.
Subsequent designs have helped to make the robots more resistant to explosive damage and more accurate with operations meant to deactivate and defuse explosives.7

Wheelbarrow robot for bomb disposal. By Mark.murphy, via Wikimedia.
A variety of private and public organizations have played a role in advancing bomb disposal design. The US military, the UK military, and other military organizations tend to lead this effort, especially since the dawn of the global “War on Terror” in 2001 made explosive ordnance one of the greatest threats faced by US and UK forces overseas. Meanwhile, bomb disposal robotics eventually entered civilian police agencies and spread, becoming a familiar asset for many metropolitan police services. In the 2020s, bomb disposal robots have been modified with advanced traction and mobility systems that enable the machines to climb stairs and maneuver across other difficult terrain. Advanced cameras, 3D rendering systems, and other sensors advance the effort to locate and evaluate explosive devices before making contact. Modern wireless communication systems, meanwhile, allow operators to take manual control of bomb disposal machines, utilizing a device very much like a game system controller to direct the machine’s movement and other actions. As for the future, there are currently companies and institutions working on ways to make bomb disposal robots more flexible and maneuverable. For instance, the private robotics company Boston Dynamics has been working on a system that would enable a small robot to catapult itself over walls and even onto certain buildings without damage. Coupled with advancements in imaging and drone technology, such systems may soon enable bomb disposal machines to navigate previously inaccessible environments.8
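As a hedged illustration of the game-controller teleoperation described above, the sketch below shows one common way joystick axes are mapped to the left and right tracks of a skid-steer robot (often called arcade drive). The axis convention and speed limits are assumptions for illustration, not taken from any actual bomb disposal system:

```python
# Illustrative sketch of gamepad-to-track mapping for a tracked robot.
# Axis names and the normalized speed limit are invented for this example.

MAX_SPEED = 1.0  # normalized track speed

def clamp(x: float, lo: float = -MAX_SPEED, hi: float = MAX_SPEED) -> float:
    return max(lo, min(hi, x))

def tracks_from_sticks(forward: float, turn: float) -> tuple[float, float]:
    """Map joystick forward/turn axes (each in [-1, 1]) to (left, right)
    track speeds. Pushing forward drives both tracks equally; tilting
    sideways speeds one track and slows the other, pivoting the chassis."""
    left = clamp(forward + turn)
    right = clamp(forward - turn)
    return left, right

print(tracks_from_sticks(1.0, 0.0))  # straight ahead: both tracks full speed
print(tracks_from_sticks(0.0, 0.5))  # tracks run opposite: spin in place
```

The same mix-then-clamp pattern underlies most differential-drive teleoperation, which is one reason consumer gamepads translate so naturally to controlling tracked robots.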

The Most Dangerous Places on Earth

In some cases, robots built to explore and withstand dangerous conditions have been used not to save human lives but to advance human knowledge about the environment. In 1994, Carnegie Mellon University’s Robotics Institute again pioneered the use of robots for exploring dangerous environments with the debut of a new robot built specifically to aid volcanology, the study of volcanoes. The pioneering robot in this field was Dante II, an original machine outfitted with equipment for exploring the dangerous terrain surrounding an active volcano, in this case the remote Alaskan volcano Mt. Spurr. There were two versions of the Dante system, Dante and Dante II, introduced in 1992 and 1994, respectively. Both represented the cutting edge of robotic mobility in the 1990s and carried advanced systems that enabled them to provide high-quality data to scientists from areas too dangerous for human exploration. One of the innovations of the Dante system was an eight-legged design that enabled the machine to traverse difficult terrain while remaining balanced. Automatic gyroscopic controls further enabled the machine to right itself after a fall and to detect changes in inclination and pitch as it moved.

While created in part to advance the science of volcanology, Dante and Dante II, which were designed with input and funding from NASA, were also intended to test the technology necessary to build robots capable of exploring other planets. Given the intense heat, pressure, and other harmful environmental conditions surrounding the volcano, the site made an excellent testing ground for a potential future exploration of the Martian landscape and, perhaps, more distant alien environments. Dante and Dante II’s eight legs provided a highly adaptable mobility system, but the robots also carried a rappelling system that let them descend into even harder-to-reach locations. A barrage of sensors enabled the machines to record and transmit data to scientists located a safe distance from the volatile mouth of the active cone.9

The Dante robot system was deployed to explore Alaska’s Mt. Spurr volcano in 1994. Photo by CMU courtesy of the National Science Foundation.

Though the use of robots to explore volcanic hot spots may not be as immediately impactful as using robots to investigate damaged nuclear facilities or to safely dispose of explosives, the advancement of scientific knowledge enables humanity not only to save lives but to combat a wide variety of challenges, threats, and problems that currently affect humankind or that may appear on the horizon. The study of volcanic systems, for instance, helps scientists understand toxic gases, climatic patterns, and seismic disturbances. Data from volcanic systems can be useful in creating protocols that help people avoid natural disasters. Further, by creating and testing technology in extreme environments, scientists derive knowledge that can be used to develop new generations of tools and equipment that may save lives in times of disturbance and disaster.
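The balance-keeping behavior described above can be sketched in simplified form. The thresholds and geometry below are invented for illustration, not the actual Dante control software; the sketch only shows the kind of inclination check and leg-leveling arithmetic a legged walker on steep terrain needs:

```python
# Illustrative sketch of inclination monitoring for a gyro-stabilized
# walking robot. Threshold and body dimensions are assumed values.
import math

TIP_LIMIT_DEG = 30.0  # beyond this, the chassis risks tipping over (assumed)

def attitude_ok(pitch_deg: float, roll_deg: float) -> bool:
    """True while the chassis is within its safe inclination envelope."""
    return abs(pitch_deg) < TIP_LIMIT_DEG and abs(roll_deg) < TIP_LIMIT_DEG

def leg_extension_m(roll_deg: float, half_width_m: float = 1.5) -> float:
    """How far the downhill legs must extend (meters) to level the body
    on a slope of the given roll angle -- simple trigonometry."""
    return half_width_m * math.tan(math.radians(roll_deg))

print(attitude_ok(10.0, 5.0))   # level enough: keep walking
print(attitude_ok(10.0, 40.0))  # excessive roll: halt and re-level
print(round(leg_extension_m(15.0), 3))  # meters of downhill leg extension
```

In a real walker this check would run continuously against gyroscope readings, triggering a re-leveling or self-righting routine whenever the envelope is exceeded.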
The idea of using machines to replace humans who would otherwise need to enter dangerous environments harkens back to the very earliest fantasies of magical machines, which, like Dante II or PIONEER, were conjured to face dangers that humanity could not. It is
Where No One Can Go | 197 possible also that the study of extreme environments may be a necessary first step towards a fantastic, yet distant future. Humanity has long contemplated the possibility of expanding human civilization to the stars. While this possibility is more science fiction than reality in 2020, any chance of human settlement, terraforming, or extrasolar exploration will depend on detailed knowledge of hostile environments and the dangers that can be found there. Each advancement in the study of dangerous environments is, therefore, potentially another step toward the possibility of a human future beyond the Earth.

CONCLUSION

Robotics has significantly expanded human capabilities to explore and gather data in hostile environments. Many proponents of robotics and artificial intelligence hope that robotic machines, such as those used to survey and clean radioactive waste sites or active volcanic zones, may prove to be the key to developing the next generation of space exploration robotics, and that this kind of technology may help encourage the eventual human colonization of space. In addition, hazardous environment machines may have many other applications on Earth as well, especially as climate change expands the world's hazardous environments.

DISCUSSION QUESTIONS

• How was the Dante robot program connected to NASA and space exploration?
• What are some other applications for hazardous environment machines in the future?
• How did military funding contribute to the development of bomb disposal robotics?
• Is research into hazardous environment robots an important field? Why or why not?

Works Used

Allison, Peter Ray. "What Does a Bomb Disposal Robot Actually Do?" BBC Future, 15 July 2016, www.bbc.com/future/article/20160714-what-does-a-bomb-disposal-robot-actually-do.


"Backgrounder on the Three Mile Island Accident." USNRC, United States Nuclear Regulatory Commission, 21 June 2018, www.nrc.gov/reading-rm/doc-collections/fact-sheets/3mile-isle.html.

Beiser, Vince. "The Robot Assault on Fukushima." Wired, 26 Apr. 2018, www.wired.com/story/fukushima-robot-cleanup/.

"Chernobyl Accident 1986." World Nuclear Association, Apr. 2020, www.world-nuclear.org/information-library/safety-and-security/safety-of-plants/chernobyl-accident.aspx.

Leary, Warren E. "Robot Completes Volcano Exploration." The New York Times, 3 Aug. 1994, www.nytimes.com/1994/08/03/us/robot-completes-volcano-exploration.html.

Lovering, Daniel. "Radioactive Robot: The Machines That Cleaned Up Three Mile Island." Scientific American, 27 Mar. 2009, www.scientificamerican.com/article/three-mile-island-robots/.

Spice, Byron. "PIONEER Robot Is Dedicated at Chernobyl Sarcophagus." SCS, Carnegie Mellon University, 2 June 1999, www.scs.cmu.edu/news/pioneer-robot-dedicated-chernobyl-sarcophagus.

"The Past, Present and Future of Bomb Disposal Robots." Government Europa, 20 June 2018, www.governmenteuropa.eu/bomb-disposal-robots/88618/.

Wasserman, Harvey. "People Died at Three Mile Island." HuffPost, 26 Mar. 2009, www.huffpost.com/entry/people-died-at-three-mile_b_179588.


14

MECHANICAL MEDICINE

Robotics in Medicine and Health (1980s–Present)

Introduction

One of the most promising subfields of robotics involves the application of robotic systems to medicine and medical treatment. Beginning in the 1980s, a series of pioneering companies and research organizations debuted robotic technologies to assist with, and to independently manage, various types of medical procedures. Though the field remains in its infancy even after forty years of development, robotic systems have proven promising, especially in surgical applications. Despite these successes, lingering questions remain, including whether patients can and will trust robotic systems and whether current robotic technologies actually provide a measurable advantage over human physicians.

Topics Covered in this Chapter Include:

• Surgical robots
• Food and Drug Administration
• Robotic ethics

This Chapter Discusses the Following Source Document:

McNamee, David. "Are Robots the Future of Surgery, or a Pricey Marketing Gimmick?" Medical News Today, August 1, 2014


In the fictional Star Wars films, novels, and graphic literature, there are specialized robots (called "droids" in Star Wars lingo) designed to serve in a medical capacity. The idea of the medical droid was first introduced in the second Star Wars film, The Empire Strikes Back, when a robot is seen adjusting a new synthetic hand that has been attached to Luke Skywalker's arm after his hand was severed in a fight with his nemesis and father, Darth Vader. In the Star Wars universe, where droids are integrated into every level of culture and perform a wide variety of tasks, the idea of medical droids is not unexpected, but the concept of utilizing high-tech machinery in medicine derives from ancient reflections on the medical arts.

Medicine has been a slow-growing and evolving art and science. At any stage of human existence, certain diseases and conditions can be addressed whereas a host of illnesses or injuries remain that cannot be effectively tackled through available medical techniques. While all illnesses and injuries may be worrisome, humans tend to be more frightened of those that have no effective "cure" or therapeutic system in place. Since antiquity, then, humans have imagined fantastic systems that might fill in the gaps in medical knowledge. Magic and divine power are often evoked in fantasy to provide a solution to a disease immune to existing scientific knowledge or treatment, such as the Christian myth of Jesus Christ having the power to cure the plague, a disease that ran rampant in the ancient world and devastated populations across Europe and the Middle East. The idea of medical robots comes from this same fantasy, the hope that innovative science can produce solutions to difficult medical problems.
By far, the most productive application of technology to medicine has been in the creation of advanced scanning and other apparatuses that enable physicians to better detect and evaluate various disorders, and in the creation of computer systems designed to aid physicians and medical researchers in their work. But there has long been a subfield of medical technology dedicated to the idea of using robotics and artificial intelligence to advance medical science. This fantasy first became a reality in the 1980s, when robotic engineering had finally advanced sufficiently that it was possible to create the first functional robotic medical tools, but the modern evolution of the field has led in surprising new directions. In the 2020s, there are some hospitals and medical centers experimenting with utilizing robotics and artificial intelligence in new ways, and this has stimulated a discussion about the appropriate roles for technology in medicine and, especially, in patient care.

Surgery-Bots

As medical science has advanced, physicians have learned to address a large and growing list of disorders through surgical techniques. Many of the issues that can now be addressed with surgery rely on the work of engineers and researchers, who developed highly fine-tuned techniques and tools that enabled surgeons to conduct ever more delicate operations. However, the human body is only capable of operating within a certain spectrum of manual dexterity, and there are limits to vision as well. To a large degree, surgical capability has depended on the introduction of computers and computer-aided tools. In the 2020s, surgeons can utilize computer-guided and laser-powered systems to conduct operations on an extremely small scale, but, again, there is a limit to what can be achieved with the human body. Robots can theoretically be designed to perform tasks with greater accuracy, speed, and delicacy than any human can achieve, and this potential has driven the science of surgical robotics since the 1980s.

The first functional surgical robot was the PUMA 560, a surgical arm that debuted in 1985, at the height of a surge in corporate interest in AI and robotic technology. In many ways, the mid-to-late 1980s was a golden age for robotics engineers. High-profile advancements in both fields created tremendous interest among the public and this, in turn, created new funding opportunities for engineering firms looking to break into the market and those already working on robotics and AI systems for industry and research. The PUMA, which stood either for "Programmable Universal Machine for Assembly" or "Programmable Universal Manipulation Arm," was developed by an engineer working at the robotics firm Unimation, one of the earliest and most influential producers of robotics for the private sector, which later became a subsidiary of the Westinghouse Corporation.1 The PUMA system was much like an industrial arm that might be used for automated assembly line processes, though calibrated for a much higher degree of precision. The machine had six joints and a modifiable manipulating device.

In 1985, Dr. Yik San Kwoh of Memorial Medical Center in Long Beach, California, had been working on developing a computer program that could be implemented in the PUMA system to automate an extremely sensitive surgical procedure, a human brain biopsy. In this potentially dangerous procedure, a small hole is drilled into the skull and a hollow needle is inserted into the brain to extract a portion of material for examination. Kwoh used a Computed Tomography (CT) system, which uses x-rays to create a three-dimensional view of the body's internal structure put together from scanned "slices" of material, as a guidance system to program the PUMA 560 to place the needle in a patient's brain, thus proving the concept of robotic surgery.

The success of Dr. Kwoh's highly sensitive neurosurgical biopsy led to a flurry of research on utilizing surgical robotics in different ways. In 1987, the PUMA 560 system was used to complete a laparoscopic surgery, which is one of a number of procedures that involve inserting a device called a laparoscope to look at organs inside the abdominal cavity. Specifically, the PUMA 560 system was utilized to perform a cholecystectomy, which is the removal of the gallbladder. The following year, the PUMA system was successfully used to perform another delicate operation, a transurethral resection, a procedure in which a surgical tool is inserted through the urethra to remove a section of the prostate.2

The successful introduction of robotic surgical procedures in America stimulated interest in the United Kingdom. There, the robotic surgical system PROBOT was introduced at Imperial College in London and, in 1992, Dr. Senthil Nathan completed the next major step forward in the field when PROBOT performed a prostate resection similar to the one completed in 1988 by the PUMA system. While the PUMA system had already proved that robotic surgery could work, the PROBOT procedure is notable because it was completed without human intervention and was thus the first entirely robotic surgical procedure.3

While PROBOT was being developed and tested in the United Kingdom, Integrated Surgical Supplies in Sacramento, California, was in the process of developing a new purpose-built robot to assist with orthopedic surgeries, which are surgeries involving the skeletal system. This new robotic surgical assistant, nicknamed ROBODOC, was first conceived by orthopedic surgeon William Bargar in 1985. The basic idea was to create a machine that could automatically mill out bone to allow for the attachment of prosthetics. Researchers and roboticists at the University of California, Davis, worked on the ROBODOC system from 1986 to 1992, basing their design on an industrial robot used in Japan.
To make the robot work, a new orthopedic planning system, called Orthodoc, needed to be created and assessed. Essentially, the ROBODOC system utilized a clamp to attach the machine to a patient's leg while the operation was ongoing. A movement sensor was able to cease the operation if the patient's leg, for any reason, started to move. The machine was also equipped with a system that enabled it to record details of a patient's anatomy, which could then be used to plan a surgical procedure.4

In 1992, the ROBODOC system was used on a human for the first time, in a procedure known as Total Hip Arthroplasty (THA). After a series of human volunteer studies, ROBODOC became the first and, as of 2020, the only robotic machine cleared by the Food and Drug Administration (FDA) for use in orthopedic surgery. The ROBODOC system proved to be one of the most lasting robotic surgical tools, and the company and basic system were still in use as of 2020, not just in the United States but across Europe and in some parts of Asia. Subsequent models were outfitted with more advanced equipment, and digital, web-based communication and data-sharing systems were later integrated as well.

Though the surgical robotics field initially inspired a great deal of enthusiasm, development of functional surgical machines has been slow. As of 2020, the most advanced surgical robotics system is still the da Vinci Surgical System, created by the Sunnyvale, California-based Intuitive Surgical Inc., approved by the FDA and introduced to the market in 2000. The machine has since been used around the world. Though initially designed for laparoscopic (abdominal) surgeries and procedures, the machine was adapted, and approved by the FDA, for use in thoracoscopic (chest) surgeries and procedures as well.

A da Vinci system laparoscopic robotic surgery machine. By Nimur, via Wikimedia.

Over the 20 years since the machine was introduced, versions of the da Vinci system have been modified to perform a variety of different procedures in urology, gynecology, pediatric surgery, and otolaryngology. Advancements in robotic motion and motion control have empowered new iterations of the system to the point that the da Vinci system is, in 2020, capable of performing far more delicate and accurate microsurgical techniques than a human surgeon. But there are still many reasons that some doubt whether surgical robotics really is the wave of the future in human medicine.



A da Vinci patient-side component (left) and surgeon console. By Cmglee, via Wikimedia.

In this article from a 2014 issue of Medical News Today, David McNamee discusses the pros and cons of robotic surgery and what such a comparison might mean for the future of the surgical robotics field:



"ARE ROBOTS THE FUTURE OF SURGERY, OR A PRICEY MARKETING GIMMICK?"
by David McNamee
Medical News Today, August 1, 2014
Source Document

Luke Skywalker had his amputated hand repaired by one. The Transformers had one that turned into an ambulance. And they have been among us on Earth, operating quietly since 1985. Robot surgeons are nothing new and they are not science fiction, though the real-life machines may not be instantly familiar to Star Wars fans. Recently, however, research has questioned whether there is evidence to support robot-assisted surgery. We investigate the issue.

In 1985, the PUMA 560 robotic surgical arm successfully assisted in a delicate neurosurgical biopsy. This marked the first documented robot-assisted surgery. Two years later, the first laparoscopic procedure—a cholecystectomy—was performed using the robotic system, and in 1988, PUMA was used to perform transurethral resection. These landmark surgeries opened up the potential for a greater degree of precision in minimally invasive surgeries through the steady, mechanical hand of the robot.

By 1990, the first robot approved by the Food and Drug Administration (FDA)—Computer Motion's AESOP system—was assisting in endoscopic surgery. But it was not until 2000 that the operating room became acquainted with robotic surgery as we now know it. The da Vinci Surgery System was the first robotic system to be approved by the FDA for general laparoscopic (or "keyhole") surgery, and it has dominated the field of robot-assisted surgery since then.


The da Vinci evolved from the simple arm of the PUMA 560 into an all-encompassing system comprising assorted surgical instruments and camera/scopic devices. Using a 3D magnification screen, the (human) surgeon is able to view the operative area with high-resolution clarity. And where the PUMA’s single arm was a bulky chunk of 1980s robotic technology, da Vinci offers four slender surgical arms, each just 1 cm in diameter. The miniaturization of the surgical instruments was heralded as a breakthrough in minimally invasive surgery. The da Vinci did not require the incision walls to be leveraged when operating, and by minimizing the amount of contact time between instrument and interior tissue, there was less risk of infection. A feature known as “Endo-wrist,” meanwhile, was designed to replicate the skilled movements of the surgeon, who would operate the machine from a set of controls. In the 14 years since its FDA approval, the da Vinci’s manufacturer—Intuitive Surgical—has released 10 updates of the system. Proponents of robot-assisted surgery say it is no fad. Indeed, in 2013, robots were used in 422,000 surgeries in the US—15% more than the year before. But in 2014, medical experts seem more divided than ever on the benefits of robot-assisted surgery.



'No Difference' between Open Surgery and Robotic Surgery

In particular, a high-profile study by surgical oncologists at Memorial Sloan-Kettering Cancer Center in New York, NY, found no difference in terms of rates of complications or length of hospital stay between traditional open surgery and robotic surgery for bladder cancer. What is more, the researchers found the results so clear-cut that they stopped the trial early. They described their peer-reviewed findings in the New England Journal of Medicine. However, Intuitive Surgical objected to the Sloan-Kettering research, labeling it “grossly misleading” in a statement. Speaking with Medical News Today, Dr. Catherine Mohr, vice president of medical research at Intuitive Surgical, explained that the randomized study is misleading because it compares open surgery procedures with hybrid procedures, which are a mixture of robotic and open elements. When the researchers concluded that there were no differences in complications between the two procedures, Mohr claims that—rather than comparing the robotic and open procedures—they were actually comparing the same open technique used in both arms of the study. “In essence, this letter to the editor is comparing open surgical complications to open surgical complications with a robotic dissection, and then concludes they are the same,” Dr. Mohr asserted. “Attempting to generalize this study, which compares open to hybrid, and then to imply that the latter surgery is entirely minimally invasive is grossly misleading.”


Medical News Today contacted Dr. Bernie Bochner, lead author of the Sloan-Kettering study, for comment, but he was unable to reply before our deadline. In gastric surgery, a recent Australian review of six randomized trials also found "no appreciable difference" between conventional minimally invasive anti-reflux surgery and robot-assisted anti-reflux surgery in complication rates, post-operative symptoms, quality of life or functional assessments. Although older studies comparing the outcomes and costs of open and robotic cystectomy have been favorable toward the robots, it seems that much recent research—with the exception of a survey of hysterectomy patients conducted by Intuitive Surgical employees—is finding few advantages to the robotic system.

Evidence 'Does Not Support the Expense'

These trials add to what appears to be a mounting disillusionment with robotic surgery, perhaps expressed most pointedly in a recent Forbes opinion piece by Dr. Robert Pearl, CEO of The Permanente Medical Group. In his column, Dr. Pearl cuts straight to the issue many recent studies into robot-assisted surgery have been driving at: “The problem is the outcome data doesn’t support the hype or the cost.” The da Vinci device is not a cheap tool. The initial cost to hospitals for the machine comes in at up to $2 million. What is more, Dr. Pearl claims, the device has a “built-in obsolescence factor,” saying that each of the robotic arms has to be replaced after just 10 uses.


"The motivation isn't safety," he continues. "It's profit. The manufacturer could have built a robot that could complete 100 procedures. But that would reduce profits dramatically."

If evidence finds that the only difference between traditional surgery and robot-assisted surgery is the cost—a study in the Journal of Urology found robotic bladder removal procedures cost an average of $16,250, 11.2% more than the average cost for open surgery—why is robotic surgery becoming more popular?

Dr. Pearl claims "aggressive advertising" on Intuitive Surgical's part—to consumers as well as hospitals—provides the answer.

"This device will drive up health care costs significantly in the future," he concludes, "while clinical outcomes remain relatively unchanged."

Speaking to MNT, Intuitive Surgical's Dr. Mohr branded Dr. Pearl "woefully wrong in his assertion." Of his claim that Intuitive Surgical could have designed a robot that could complete 100 procedures, Mohr said it was "patently absurd" and "demonstrates a complete lack of understanding of the physics and challenges involved in instrument design."

She points out that to prevent the surgical instruments from being disposable, they must withstand multiple cleaning and sterilization cycles with harsh chemicals and high temperatures without losing their precision, wearing out or being damaged through human handling. What is more, the robotic arms themselves, she says, are not disposable and are a structural component of the machinery.

She adds: "As for the triumph of 'aggressive advertising'—to assert that hospitals and surgeons choose robotic surgery because of advertising efforts, rather than based on patient benefits and economic considerations is disrespectful to all involved and inaccurate."

A concern from some robotic surgery critics is that hospitals are increasingly finding themselves in a position where they need to commit to buying the machine purely to attract patients from rival institutions, who might otherwise perceive hospitals as being behind the times if they do not have a da Vinci.

"Hospitals in the US market directly to patients in order to compete with one another," reasons Dr. Mohr. "They use the fact that they have a da Vinci system to highlight that they are providing better access to minimally invasive surgery, and therefore that they are providing better patient care. Ultimately, patient demand is based on the benefits afforded by minimally invasive techniques, including robotic-assisted da Vinci Surgery."

What Is the Future for Robot-Assisted Surgery?

"Microsurgery, surgery outside the abdomen, and natural orifice surgery are all very interesting areas we have explored in the lab," offers Dr. Mohr. "Robotics can give us greater precision, lower trauma access, changes of scale (such as being able to work on microscopic scale), as well as navigation and imaging integration to provide a 'GPS' of the body and allow the surgeon to more accurately find and treat lesions. These are all exciting capabilities that could result in new therapies that we can only imagine right now."

Some surgeons would like to see robotic surgery systems more capable of replicating the tactility a surgeon experiences during invasive surgery, as sensory information can be helpful in making judgement calls. Other supporters of robotics want to see the distance between surgeon and patient increase to reduce infection—with surgeons operating the technology remotely from a separate room.

One interesting robotic evolution was recently unveiled by the Massachusetts Institute of Technology (MIT), who have pioneered "squishy robots" that could have surgical applications. The robots have been designed to change between hard and soft states on command, so potentially they could squeeze around organs in a soft state, only to harden when they need to perform a surgical function.

Medical News Today spoke with MIT's squishy bot developer Prof. Anette Hosoi. We asked her, given the current cynicism toward robotics in health care, what the incentive is for developing new surgical robots.

"Technology is not magic," she answered. "It takes a tremendous amount of time, dedication and vision to bring new technologies to fruition."

"The first computers were enormously expensive, cumbersome, and couldn't do much more than a calculator. Imagine if no one had the vision to develop that technology simply because, at that point in time, it was neither advantageous nor cost effective! It is remarkable to me that today, robotic-assisted surgery can be as effective as traditional surgery and there is no doubt in my mind that these machines will continue to improve until they are indispensable."5

The Robot Will See You Now

While the future of surgical robotics might still be a matter of debate, developers are also moving ahead with expanding the use of robots in another arena of medicine: interacting with and potentially even examining patients. There are two ways in which hospitals have begun experimenting with using robots for patient interaction. The first is through what is known as a "telepresence robot," which is essentially a robotic machine that can visit patients in a hospital or other medical setting and allow a physician to interact with the patient via a wireless video chat system. Telepresence robots come in many forms, but all are essentially some version of a mobile robot chassis fitted with a screen and systems that enable two-way visual and audio communication. The term "telepresence" was popularized by AI pioneer Marvin Minsky, who wrote an article for the technology magazine Omni in 1980 describing the idea:


"To convey the idea of these remote control tools, scientists often use the words 'teleoperator' or 'telefactor.' I prefer to call this 'telepresence,' a name suggested by my futurist friend Patrick Gunkel. Telepresence emphasizes the importance of high-quality sensory feedback and suggests future instruments that will feel and work so much like our own hands that we won't notice any significant difference. Telepresence is not science fiction. We could have a remote-controlled economy by the twenty-first century if we start planning right now. The technical scope of such a project would be no greater than that of designing a new military aircraft."6

iRobot Ava 500, an autonomous roaming telepresence robot. By Z22, via Wikimedia.



NASA’s Robonaut program uses virtual reality display technology to visually immerse a human operator in the robot’s workspace, via NASA.

The kind of experience offered by the telepresence robots of the twenty-first century is not exactly what Minsky was trying to suggest, but it is a reflection of that idea. Minsky was suggesting using remote-controlled devices and tools to place people in environments they could not easily access. He used the Three Mile Island disaster, where radiation levels made it extremely difficult for humans to enter the site, as an example. Minsky imagined that a telepresence system might allow humans to go there, with advanced visual, auditory, and even tactile systems enabling the operator to feel as if they were actually present and to perform a variety of tasks without risking exposure to dangerous levels of radiation. The eventual remediation and exploration of Three Mile Island did, in fact, involve a crude form of telepresence in the form of the robots that were sent into the toxic site to begin cleanup operations.

The telepresence robots utilized in hospitals and in industry in 2020 provide some of the connectivity that Minsky was talking about. Through wireless streaming of digital data, an operator can see and hear through the robot, enabling a person to be somewhere they cannot be physically. In the 2000s, the first telepresence systems were built for and implemented in hospitals as a way for overstretched physicians to visit with patients when unable to do so in person. Media reports on the emerging technology also introduced other potential benefits, such as allowing physicians to have conversations with patients who might be carrying highly communicable diseases.

Johns Hopkins University was one of the first hospitals to experiment with utilizing telepresence to bring doctors and patients together, specifically for conducting physician visits with patients post-surgery. In a 2003–04 study involving sixty post-surgical patients, thirty were randomly selected to receive visits from a virtual "robot doc" instead of a doctor. Through the robotic interface, the physician asked patients questions, inspected surgical sites to look for signs of any healing problems, and answered patients' questions. After the patients were discharged, they were given a questionnaire to see how they viewed the interaction. About half of the patients reported that they would rather have a "televisit" from their own physician than a real visit from a different, unfamiliar physician. This indicated that telepresence robotics might be a way to maintain the higher levels of continuity of care often impossible in hospital environments. About 50 percent of the initial test subjects also expressed their belief that telepresence robotics should become a regular part of post-surgical care, and 57 percent said they would be comfortable having the experience again.
A full 80 percent of patients expressed optimism that implementing such a system might provide them with greater access to their own physician, and 76 percent hoped that such a system might allow their physician to provide more information to them.7

While the Johns Hopkins study, and similar studies, indicated that patients were comfortable with the idea of robot-mediated connectivity with their own physician, there are limits to this relationship, and it is likely, though untested, that patients might not be as accepting of only seeing their doctor through a robotic intermediary. The same company that designed the telepresence robots used in the Johns Hopkins study in 2004 was still selling and leasing telepresence systems to hospitals in 2019, though significant improvements had been made in the interim to make the machines faster and more responsive and to improve the visual and auditory connection between the machine, physician, and patient. However, in 2019 a viral video showed a California physician using a telepresence machine to inform a patient, 78-year-old Ernest Quintana, that there was nothing further the hospital could do to treat Quintana's chronic obstructive pulmonary disease. Quintana died the day after receiving this news through a mobile robot, and the video sparked an online and media discussion about the ethics of utilizing virtual connectivity in such a sensitive and intimate situation.

Robotics_2.indd 211

10/2/2020 12:23:14 PM

212 | ROBOTICS & ARTIFICIAL INTELLIGENCE

Writing about the controversy in Vox, physician Haider Warraich argued, “The fact that a patient and their family member were delivered devastating news via a telepresence robot is a rightfully shocking episode that runs counter to much of what many of the prophets of the digital revolution in medicine have been preaching. It has confirmed the worst fears of many patients and doctors that technology might increase the distance between physicians and patients during their most vulnerable moments. As a cardiologist training in advanced heart failure who frequently has such conversations with patients—and knowing just how complex and emotionally fraught these moments can be—I am not surprised that the patient and his granddaughter reacted with horror.”8

Though Dr. Warraich sounded an alarm about the distance between patients and physicians, warning that modern technology could widen it, he also argued that such problems stem from how technology is used rather than from any inherent flaw in the technology itself. When the incident went viral in March of 2019, none of those reacting to it knew that, one year later, many people would struggle to obtain physician care in the face of a dangerous and rapidly spreading virus that made difficult the kind of face-to-face contact Warraich places at the center of a healthy physician-patient relationship. For Americans and others around the world living through the Covid-19 pandemic, the idea of virtual physician visits might no longer seem nearly as alien, or as undesirable, even in moments when a physician must deliver life-changing news.

The promise of telepresence is not limited to the medical field. Telepresence machines have also been implemented in other industries, allowing workers to telecommute, attend meetings, supervise projects, and so on. While tele-work does not require a robotic assistant, there are situations in which the ability to move, as well as to see and hear, in a distant environment can prove advantageous. In 2019, Double Robotics introduced a streamlined telepresence robot that consists, essentially, of a screen on a pole mounted to a two-wheeled base. This model, the Double 3, has an array of 3D sensors that allow the machine to navigate on its own and to avoid collisions, whether moving independently or under remote control. A 13-megapixel camera that can be zoomed and panned remotely provides visual capabilities, while six microphones, designed to focus in on sound sources, provide the user with audio. 
As of 2020, a machine like the Double 3 would cost between $2,000 and $4,000 to purchase, but there are also companies that rent similar systems on a short- or long-term basis.9

The applications for currently available telepresence machines are limited, and it is arguable that these devices appeal as much to technological playfulness and curiosity as to a genuine desire to solve a specific telecommuting problem. There are a wide variety of other options for those seeking video and remote conferencing. However, this does not mean that telepresence technology might not someday reach the level of sophistication imagined by Marvin Minsky in 1980. Future telepresence devices could be equipped with complex and even specialized tools, so that a person working remotely could not only attend meetings but also manipulate the distant environment. The applications for this kind of technology are potentially limitless. Imagine, as Minsky did, a telepresence system outfitted with the right tools enabling human investigators to visit and interact with a toxic or otherwise dangerous environment. Likewise, future telepresence machines operating in the medical field might not only allow virtual visits between physicians and patients but also allow physicians to remotely perform many different types of medical tests or procedures. Whether the public would embrace such systems is unclear, but there are situations in which they might be useful and might represent a lifesaving step in the fusion of robotics and human life.


CONCLUSION

Robotic surgery is a present reality, though in a very limited sense, and it is unclear whether robots will play a larger role in medicine in the future. The most controversial subfield of medical robotics involves the use of robots in patient care and communication, as it is in this realm that some have expressed trepidation about, or resistance to, the idea of patients interacting with machines. However, technology that allows remote connectivity between physicians and patients may have increasing relevance and may help address persistent problems in the American medical system, including access to physician advice and guidance. Though there is currently no danger that human surgeons will be replaced by robotic counterparts, this is a future possibility as robotic surgery systems continue to advance. As of 2020, hospitals that wish to provide robotic surgical options must purchase surgical robots from private companies and pass this cost on to patients, meaning that robotic surgical procedures can, in some cases, be more expensive than traditional options. This poses another ethical conundrum: it raises the question of whether all patients are entitled to the best possible technology for a given procedure, or whether it is permissible for hospitals to offer robotic surgical assistance as an advanced option for those who can afford it.

DISCUSSION QUESTIONS
• Would you be comfortable undergoing a surgical procedure conducted by a robot? Why or why not?
• Should all patients have access to robotic surgical assistance, regardless of cost? What if robots prove to be more efficient and effective than human surgeons?
• Are there certain physician functions that should not be conducted via machine? Give examples.
• Will the Covid-19 pandemic change attitudes about physician use of telepresence machines? Why or why not?


Works Used
Corke, Peter. Robotics, Vision and Control: Fundamental Algorithms in MATLAB. Springer, 2011.
Lanfranco, Anthony R., Andres E. Castellanos, Jaydev P. Desai, and William C. Meyers. “Robotic Surgery: A Current Perspective.” Annals of Surgery. Jan. 2004, vol. 239, no. 1, pp. 14–21, doi: 10.1097/01.sla.0000103020.19595.7d.
Lee, Dami. “Double’s New Telepresence Robot Now Drives You around Like You’re a Sim.” The Verge. 27 Aug. 2019, www.theverge.com/2019/8/27/20835149/double-3-telepresence-robot-mixed-reality.
McNamee, David. “Are Robots the Future of Surgery, or a Pricey Marketing Gimmick?” Medical News Today. 1 Aug. 2014, www.medicalnewstoday.com/articles/280518.
Minsky, Marvin. “Telepresence.” Omni Magazine. June 1980, web.media.mit.edu/~minsky/papers/Telepresence.html.
“Robot Doctor Gets Thumbs-Up from Patients.” NBC News. 11 May 2004, www.nbcnews.com/id/4946229/ns/health-health_care/t/robot-doctor-gets-thumbs-up-patients/#.XxC44C3Mw6U.
Rosen, Jacob, Blake Hannaford, and Richard Satava, editors. Surgical Robotics: Systems Applications and Visions. Springer, 2011.
Warraich, Haider. “A ‘Robot’ Doctor Told a Patient He Was Dying. It Might Not Be the Last Time.” Vox. 13 Mar. 2019, www.vox.com/first-person/2019/3/13/18262481/robot-doctor-remote-telepresence-care-terminal-patient.
Watanabe, Go, editor. Robotic Surgery. Springer, 2014, pp. 25–27.


15
PLAYING ROBOT
Robots as a Toy and Hobby (1993–Present)

Introduction
Toys that look like robots have been sold since the 1940s, but in the 1980s and 1990s, robotics evolved into a hobby. Home robotics kits, DIY electronics and engineering, and groundbreaking television shows like Robot Wars helped to popularize amateur robotics among students and young people with a penchant for electronics and computers. In the twenty-first century, many universities host robotics clubs where students apply robotics and engineering principles to build robots for fun and competition. As technology advances, corporations continue to broaden this hobby for a larger share of the population, introducing kits that allow individuals to build their own fully functional, if simple, robots.

Topics Covered in this Chapter Include:
• Home robotics
• Robot derbies and competitions
• University robotics organizations

This Chapter Discusses the Following Source Document:
Newman, Jared, “The 1950s Toy Robot Sensation That Time Forgot,” Fast Company, December 2, 2016


From the earliest metal robot toys to the first home robotics kits, robotics has long been a field for recreation as well as for research and commercial development. Robots first gained popularity through the science fiction shows, films, and serials of the 1940s, but, in the 1980s, as America entered the Digital Era, interest in recreational robotics began to expand. In the 1990s, local robotics clubs and associations began to emerge in cities and on the campuses of colleges and universities, enabling students and hobbyists to experiment with and explore robotics and engineering in the company of like-minded individuals. Robotics has remained a popular hobby into the 2020s, while advancing technology has enabled corporations to produce kits and other equipment that allow budding roboticists to experiment from their own home laboratories.

Playing with Tin
Although American authors like Isaac Asimov and American television series like Lost in Space popularized the idea of robots, Japanese toymakers were the first to introduce a robot toy for children. As discussed earlier in this book, in the late nineteenth century Japanese craftspeople designed and built elaborate automatons to be displayed in the home. These early experiments in automation were typically not meant to look “mechanical,” though their choreographed movements were mechanically determined. It was not until American science fiction spread around the world that the idea of “robots” became global and so, too, did a new kind of robotic aesthetic.

During the 1940s, the archetypal retro robotic design, now familiar to many Americans, began to take shape. The robots were pictured as angular machines, often with square heads and rectangular bodies. Their “hands” were often depicted as claws and their bodies outfitted with dials or other “mechanical-looking” accoutrements. As with the early automated designs of the late 1800s, the development of the retro-futuristic robot design also owes a great deal to Japanese toymakers.

In the wake of World War II, the Japanese economy had been decimated. Americans were divided as to how to manage the situation, and attitudes about Japan were complex. Many Americans wanted the Japanese to be punished for their role in the war and their direct attack on the United States at Pearl Harbor. However, the US government and military analysts also realized that it was necessary to bolster the Japanese economy to prevent a humanitarian crisis that might lead the nation back down the path toward militarism. To strengthen the Japanese economy, the United States changed its rules on imports from Japan so that Japanese manufacturers could take a bigger role in producing small items for the global market. This benefited the Japanese economy, but also America, as the cost of American
labor made the production of many small, mass-produced goods impractical. The same companies that once manufactured these cheaply made items were therefore able to increase profits by becoming importers and distributors rather than manufacturers. To avoid any negative economic impact related to American racism, many of the products were packaged with English-language writing and featured images or illustrations of white consumers.

One prime example of this is also one of the first toy robots ever manufactured, Robot Lilliput. Robot Lilliput and the simple tin-toy robots that followed were extremely cheap little machines. Designed with clockwork innards that allowed the robots to walk, or more precisely “shuffle,” and sometimes with moving heads and arms as well, these simple tin toys were made to meet the growing interest in science fiction in the United States and, to some extent, throughout Western Europe.

Reproduction of Lilliput Robot, via Tin Toy Arcade.

No one is certain exactly when Robot Lilliput first hit the market, but it became a popular toy. The second known robot toy can be precisely situated in history because it was delivered as a gift to attendees of the New York Sci-Fi Convention of 1950. This new toy, known as “Atomic Robot Man,” never gained widespread popularity, as it was only distributed at the 1950 convention. Examples of this rare, early robot toy have since become cherished by collectors and toy history aficionados.1 There were, however, many more Japanese-made metal robot toys in the American toy market of the 1950s, and they helped to initiate what some historians have described as a robot toy “craze” that lasted through the decade. Toy robots have, in fact, never really gone out of fashion, though the design has certainly changed considerably over time. 
In this 2016 article from Fast Company, contributor Jared Newman discusses the 1950s robot toy craze and a popular but now largely forgotten robot that, for a brief time, was one of the hottest children’s toys in the nation:

“THE 1950S TOY ROBOT SENSATION THAT TIME FORGOT”
by Jared Newman
Fast Company, December 2, 2016
Source Document

Before Rosie and R2-D2 became pop culture icons, a humble toy named Robert paved the way.

In examining the history of famous robots, you’d be forgiven for overlooking a 1950s children’s toy named Robert. Robert the Robot, who was a product of the once-mighty Ideal Toy Company, didn’t do much, at least compared to the standards set by science fiction at the time. Unlike the helpful humanoids of Isaac Asimov’s I, Robot, Robert was just a 14-inch-tall hunk of plastic that could utter a few phrases, wheel around with a tethered remote control, and grip objects in his mechanical arms. Still, Robert deserves credit for being the first plastic toy robot made in the United States, and the first toy robot to become an American sensation. He was the subject of children’s songs, enjoyed a Hollywood film cameo, and was quickly imitated by rival toy makers. He also preceded the industrial robotics boom by several years, capturing people’s imagination long before we truly understood what robots could do.

Ta-Ta, Tin

Robert was not the first toy robot of the post-WWII era. That honor, according to Collectors Weekly, goes to Lilliput, a Japanese robot made of tin. After the war, tin lithograph toys were a popular export from American-occupied Japan, and Lilliput spawned plenty of successors.


But Robert was an American original, who according to Robotapedia was originally supposed to be a tie-in to the 1954 robot film Tobor the Great. Although Robert never appeared in the film, he received plenty of promotion on his own, debuting in the 1954 Sears Christmas catalog with an asking price of just under $6, or about $54 in 2016 dollars. Instead of tin, Ideal used plastic injection molding–a process that itself rose to prominence during the war. Plastic was cheaper than tin or cast iron, and it allowed American companies like Ideal to produce their own toys instead of importing and reselling them. “Robert was an answer to this massive amount of tin toys that were coming from Japan,” says Justin Pinchot, a Los Angeles-based collector of vintage toy robots and ray guns. “It was really the beginning of the plastic era.” That’s not to say Robert was some cheap knockoff. Cranking the handle on Robert’s back caused him to speak (“I am Robert Robot, mechanical man. Drive me and steer me, wherever you can”), while a battery inside Robert’s head allowed his eyes to light up. The cable-driven remote control was novel as well, using a hand crank for movement and a trigger for rotation. Robert also had a unique aesthetic. As Pinchot points out, no one else was doing large robots with “skirted” lower sections at the time, and the look was quickly duplicated even in Japanese robots such as the “Gang of Five,” who remain beloved by
toy-robot collectors. “A lot of these innovations were started here and picked up by Japan, so I think Robert was one of those. I’m sure he inspired a lot of tin stuff,” Pinchot says.

The Mechanical Man

How popular was Robert the Robot? While verifiable sales figures are elusive, a September 1955 story in the Portsmouth Times of Portsmouth, Ohio, claimed that Robert was Ideal’s biggest hit yet. Demand was so strong that Ideal started licensing T-shirts, flashlights, and other Robert-related merchandise, and when Shillito’s Toyland opened in downtown Cincinnati in 1955, a 9-foot-tall Robert greeted customers on the sidewalk, according to the Cincinnati Enquirer. Robert never became an enduring pop culture icon despite his Hollywood origins, but he did enjoy some attention from the entertainment industry. The children’s music label Cricket Records produced a pair of songs about him, and he made a short appearance in the Douglas Sirk melodrama There’s Always Tomorrow, starring Barbara Stanwyck and Fred MacMurray.

Given all the adoration, it’s hard to say exactly why Robert the Robot gets so easily lost in robot history, though it might just be a case of planned obsolescence. While Ideal continued to produce Robert into the 1960s, the company also moved on to new ideas such as Mr. Machine, a popular robot who could be fully disassembled and reassembled. Meanwhile, competitors created their own plastic robots, such as Marx’s Robot and Son and Mr. Mercury. In an era before the perpetual cycle of media and merchandising tie-ins, Robert wasn’t built to last.

But along with his toy robot peers, Robert may have served a higher purpose, helping to spark our fascination with new technology. As Pinchot points out, the 1950s robot obsession wasn’t only prompted by sci-fi, but by actual automation in products like washing machines, washer-dryers, electric ranges, and vehicle transmissions. Just as women were joining the work force, new technologies were automating more tasks. Robots like Robert, he says, tapped into that feeling. “If you gave a kid a toy to play with, it intimated doing things for him or her automatically,” Pinchot says. “That was where we were going, that’s where the country was going, that’s where the world was going, toward more automation and less work.”2

Build Your Own
The toy robot craze inspired public interest in real robotics. Many children who played with wind-up robot toys carried this interest on to colleges and universities, where some pursued education in engineering and mechanics. Although there were graduate research programs in robotics in the 1950s and 1960s, it was not until the 1980s that the idea of non-professional robotics clubs and associations began to take off, as broader advances in technology made it possible for consumers to experiment with home electronics and mechanics and to build their own creations.

Home electronics and mechanics arguably began with the radio. Introduced in the late 1800s, radio inspired many home inventors to build their own sets. This hobby was widespread by the 1920s, beginning a trend that continues into the modern day, often called “amateur” or “ham” radio. Amateur radio enthusiasts have been building their own transmitters and receivers for more than a century, though the hobby has lost some steam since the introduction of digital broadcasting alternatives. Still, the ham radio craze of the 1920s and 1930s demonstrates the deep interest in electronics and engineering within American and other cultures.3

In the 1970s and 1980s, the advancement of computer technology broadened the field for those interested in home electronics and engineering. One of the key developments was the microprocessor, a compact central processing unit, fabricated on a single chip, that executes the basic instructions that operate a computer. The first modern microprocessor was introduced in 1971, but it was the spread of this technology, and the price reductions of the 1980s, that made the tools and equipment needed for homebuilt technological projects affordable to experimenters. By the early 1980s, advanced amateurs could actually build their own functioning home computer systems, and this became a growing hobby among a small but dedicated group. These same interests fueled the development of computer clubs in colleges, universities, and local communities, and these social outlets helped build a community around amateur computing. This was an important component in what some historians have called “computer liberation,” the movement of computing technology out of professional and academic realms and into popular culture. 
It is unlikely that many of the major advances in computing and recreational digital technology could have occurred without the dedicated hobbyists who first began building their own computers and digital devices in the 1980s and 1990s.4

The history of amateur robotics is somewhat hazy, as many of the early steps in the field were not recorded or occurred in local communities without much attention from the mainstream. However, advancements in computation, coupled with the reduced price of key computer components, made it possible by the mid-1980s for amateurs to construct working robotic devices. Colleges and universities were pioneers in cultivating this interest through student- and faculty-run robotics associations and clubs. These hobbyist organizations drew on the latest trends in academic research but also used robotics to engender interest in science and engineering among their students. As the field has developed, educational institutions have continued to use robotics to spark student interest in science and to promote collaboration and cooperation. The activity of building and programming robots blurs the line between experimentation and recreational play. Children who grew up playing with robot toys or fantasizing about futuristic technology might be willing to explore these interests in a more realistic and productive way, transforming childhood interests into potential adult professions. Over the years, it has become clear that elementary-school technology programs are an effective way to engender interest in science education and can even help students who need remedial assistance in math and science to improve their performance. One set of experimental programs, for instance, has indicated that building robots can help generate interest in science and technology among young girls, who are underrepresented in the field and tend to lag behind young boys in achievement in science, mathematics, and technology.5

Corporations soon saw the potential in amateur robotics and computer design, and in the 1990s the first “robotics kits” were introduced by toy companies. Home robotics kits and sets have only become more popular and more complex in the years since. The field of home robotics, whether packaged as a corporate product or built from DIY (do-it-yourself) materials, got a major boost with the introduction of the Arduino system. Arduino is a software and hardware platform first developed in Italy and now sold around the world to assist amateur electronics and engineering enthusiasts. Introduced in 2005, Arduino circuits, microcontrollers, and kits to build these devices run on an “open source” model, which means that the technical specifications for how the devices work, and how they are built and programmed, are open and free to all rather than being classified as corporate property. 
Since the mid-2000s, Arduino circuits and associated systems have enabled amateur roboticists and designers to create a wide variety of complicated machines for all kinds of purposes, and Arduino has thus driven the amateur technology field perhaps more than any other single technological innovation.

Robot Sports
While science-based hobbies appeal to a great number of Americans, innovative uses of science and technology can sometimes spark a major craze. This is what occurred from the mid-1990s into the 2000s, when pioneering designer and engineer Marc Thorpe introduced the world to the idea of robotic sports. The phenomenon of “Robot Wars” began in San Francisco with a series of underground competitions. Amateur designers and roboticists were invited to create robots outfitted with weapons and then to compete to destroy other robots in a high-tech, nerdy version of gladiatorial games. Thorpe, a graduate of the University of California, Davis, helped to get fellow academic technology
specialists involved, and the competitions took off. Performed in front of live audiences, the first annual Robot Wars competition took place in 1994, with approximately 1,000 people present at San Francisco’s Fort Mason Center. The second and third annual competitions each drew bigger crowds. In February 1998, the BBC picked up “Robot Wars” as a television series, which ran for five years and drew millions of viewers.

The history of the robotic sports industry also provides an object lesson in greed and corporate misbehavior. When the phenomenon first began, some were confused as to its purpose. Thorpe saw deeper significance in the game he had created, saying in an interview: “This is intended as a business venture. I’m partners with Smile Communications and we’re serious about it being an ongoing business with long-term objectives. But there’s also a lofty objective. If the spotlight of public attention shines on this kind of activity, this kind of design and engineering will get a real shot in the arm. It will have the effect of spawning tremendous interest in engineering and mechanical design for people who are generally distanced from that forum.”6

At the beginning, battle-built robots used a wide variety of weapons to destroy competitor robots. Many were equipped with spinning, metal-cutting blades, knives, hammers, and other tools for smashing and/or cutting other machines. Around the second year of the competition, in 1995, a new design debuted, known in Robot Wars circles as the “wedge bot.” The wedge bot is, as the name implies, shaped like a wedge and can use its body like a ramp to lift and flip other robots. Compared to hammers and blades, the wedge presented a new path to victory: rather than smashing a competing robot to oblivion, a wedge bot simply flipped other robots onto their backs. For those machines without a way to right themselves, this was an easy, though admittedly boring, path to victory.

Wedge bots were a constant during the five years that the BBC version of “Robot Wars” was on the air. A competing show, “BattleBots,” produced by Comedy Central in the United States, ran from 2000 to 2002. Over the course of both shows, the variety of robots appearing in competition narrowed significantly. Because of the dominance of the wedge design, competitors created another kind of robot, called a “lifter,” which combined the wedge design with prongs or other lifting devices that could be used to flip a wedge. Then came spinners, circular wedges with blades on their undersides that could be used to attack lifters. Wedge-shaped robots could flip spinners but could not often flip lifters; lifters were good against wedges but not against spinners; and spinners were good against lifters but not against pure wedges. “Robot Wars” pioneer Peter Abrahamson, who competed in seven seasons with his Team Sinister, described the development of battle robots as a form of “robotic evolution” in which the broad field of battle-ready bots was quickly reduced to three types. No other designs could compete with the mechanical superiority of the three main models: lifters, spinners, and wedges.

“Robot Wars” original series Razer robot, left (by Alex Healing), and toy version based on Sir Killalot (by Les Chatfield), via Wikimedia.
Many commentators and former competitors lost interest, describing robot wars as having devolved from a competition of imaginative robotic design (and destruction) into a simple game of “rock-paper-scissors,” in which victory was all but guaranteed for the team that had the right design for the right match. Optimization made “BattleBots” boring for many former spectators and participants, and the brief surge of interest in robotic competition soon faded. Robot competitions remained active, but largely as community or local events.

However, the rise and fall of televised robot battles did have a lasting impact on American and global cultures. Robotic competitions, sometimes with rules in place to prevent the domination of wedges and their variants, are still held in many communities and among students at universities and colleges with robotics or engineering programs. The idea of creating a robot to enter a sports-style competition continues to drive interest in robotics and engineering and provides another way for those interested in robotics to connect with one another.
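The counter relationships among the three dominant designs form a closed cycle, which is what made match outcomes feel predetermined. As an illustration only (the three type names come from the accounts above; the code itself is a hypothetical toy model, not anything used in actual competition), the cycle can be written as a small lookup table:

```python
# Toy model of the "rock-paper-scissors" dynamic among battle-robot designs:
# each of the three dominant types reliably beats exactly one of the others.
BEATS = {
    "wedge": "spinner",   # wedges could flip spinners
    "lifter": "wedge",    # lifters could flip wedges
    "spinner": "lifter",  # spinners could attack lifters
}

def winner(bot_a, bot_b):
    """Return the favored design in a matchup, or None for a mirror match."""
    if bot_a == bot_b:
        return None
    return bot_a if BEATS[bot_a] == bot_b else bot_b

print(winner("wedge", "spinner"))   # wedge
print(winner("lifter", "spinner"))  # spinner
```

Because no single entry in the table dominates the other two, the outcome of any given match was largely decided by the pairing itself rather than by ingenuity in the arena.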

Inspiration and the Pleasure of Doing
There are many kinds of DIY projects, but only in the Internet Age, as videos of amateur electronic creations spread through viral sharing, has the world become broadly aware of the creative ways that amateurs use equipment, technology, and other tools to produce art and scientific innovation. As robotics advances further, and functional robots become a more familiar part of popular culture, amateur robotics enthusiasts have more ways to engage with their field and to apply their interest in mathematics, engineering, and technology. Amateur robotics and robotic toys not only provide enjoyment and recreation but represent an important phenomenon, ensuring that the evolution of technology is driven not just by corporate profit calculations or by military and industrial needs, but also by public interest. Amateur robotics is more than just a hobby; it is a democratizing force that ensures public input in a field that will continue to play an increasing, and potentially invasive, role in everyday life.

CONCLUSION

Robot toys, from the cheap, Japanese-manufactured Atomic Robot Man to modern electronic home robots, have inspired countless children around the world to take an interest in science and technology, encouraging some to go on to make major discoveries or to pioneer developments that may fundamentally alter daily life. Likewise, the robot wars craze, while relatively short-lived, provided an outlet and recognition for many designers and inventors, creating greater international visibility for the potential that robotic technology may hold for the future of American and global science and technology. Modern robotics clubs, whether offered through educational institutions or local organizers, continue to provide an outlet for those interested in science and collaboration, as well as giving those within these fields opportunities for connection and networking. While the home robotics phenomenon has engendered educational innovations, robotics also depends on funding and support from the world’s military establishments. While companies were offering the first home robotics toys in the 1990s, the military was making major advancements in the use of artificial intelligence, in the form of drones, that would change the nature of modern warfare.


DISCUSSION QUESTIONS

• Why were the first robot toys introduced from Japan to the United States?
• Do fictional robots and other kinds of science fiction stimulate interest in STEM subjects? Why or why not?
• Is it important for more American children to take an interest in science and technology? Why or why not?
• How might schools implement modern DIY technology to encourage students to take an interest in science?

Works Used

Atherton, Kelsey D. “A Brief History of the Demise of Battle Bots.” Popular Science. 19 June 2013, www.popsci.com/technology/article/2013-06/elaborate-history-how-wedges-ruined-battlebots/.

Campbell-Kelly, Martin. Computer: A History of the Information Machine. Routledge, 2014.

Carper, Steve. Robots in American Popular Culture. McFarland & Company Inc., 2019.

Maxwell, Jim. “Amateur Radio: 100 Years of Discovery.” ARRL. American Radio Relay League. Jan. 2000, www.arrl.org/files/file/About%20ARRL/Ham_Radio_100_Years.pdf.

Newman, Jared. “The 1950s Toy Robot Sensation That Time Forgot.” Fast Company. 2 Dec. 2016, www.fastcompany.com/3066169/the-toy-robot-sensation-that-time-forgot.

Schmitt, Kristen A. “Robotics Can Get Girls into STEM, but Some Still Need Convincing.” Smithsonian. 16 Aug. 2016, www.smithsonianmag.com/science-nature/robotics-good-girls-some-still-need-convincing-180960131/.

Stone, Brad. Gearheads: The Turbulent Rise of Robotic Sports. Simon & Schuster, 2003.


16
WAR GAMES
The Use of Robotics and AI in the Military (1994–Present)

Introduction

Without funding from tax revenues earmarked for military projects, the state of robotics and AI in America would look much different. Each year, Americans spend more on military development and expansion than any other country, and military spending often drives innovation in consumer technology, though not intentionally. In the case of robotics and AI, much of the early funding that enabled breakthroughs in the field came from military appropriations and discretionary spending, and military spending remains an important source of funding for programs in AI and robotics in the twenty-first century.

One of the most dramatic results of this spending is the development and spread of semiautonomous drones, robotic machines that can be used to surveil or to attack targets at a distance. Though drones may aid in advancing attacks by keeping human soldiers out of harm’s way, the thousands killed in drone strikes (some unintentionally) in the twenty-first century have raised difficult questions about the ethics of robotic warfare.

Topics Covered in this Chapter Include:

• Remote control technology
• Military spending
• Drones
• US drone strikes

This Chapter Discusses the Following Source Document: Walsh, Toby, “Robots Can Outwit Us on the Virtual Battlefield, So Let’s Not Put Them in Charge of the Real Thing,” The Conversation, October 31, 2019


It is difficult to imagine what the state of US research in AI and robotics would be if not for the long-term investment of the military. Out of every $100 that Americans contribute in taxes to the federal government, between $15 and $25 goes directly to the Department of Defense. America has become the world’s leading military spender. In 2019, Brookings Institution researchers estimated that, as a percentage of annual budget, the United States spends more than three times what China spends on military development, and more than ten times what Russia spends.1 Investment in the military grew over the course of the twentieth century, and it has always accounted for a large percentage of the nation’s revenue.

In many cases, technology produced from military research has ended up making an impact in the consumer market. There are numerous examples of consumer technologies that were developed from discoveries made by scientists working at NASA in the 1960s, when the United States was competing with Russia in space travel and exploration. In other cases, military research has led directly to the development of new high-tech weapons and other systems that are now used by American military forces in the field.

Remote Warfare

Long before the technology was available to build autonomous machines, American military leaders and developers were already envisioning how such technologies might enhance military capability. The goal was to create military systems that would eliminate the need to place soldiers directly in harm’s way. Research in this vein intensified during periods of military instability or warfare. Thus, when the United States was embroiled in the First and Second World Wars, military researchers experimented with remote weapons.

During World War I, one cutting-edge idea was to create remote-controlled explosive machines or devices that could be piloted to enemy targets. American scientists tried to invent a working system for mobile, remote explosives, but the technology never became fully functional. Perhaps the most useful example was the French Crocodile Schneider Torpille Terrestre, or “land crocodile,” a small wheeled machine that could be piloted by remote control to carry explosives directly into enemy territory. It was used briefly in 1916 but was not effective enough to introduce a robust new field of military technology.2 The remote bomb idea also thrived in the United States, where several military developers tried to create devices for the remote delivery of explosives. The best was likely the “Wickersham Land Torpedo,” developed in 1918 by American inventor Elmer Wickersham, which was impressively functional but never deployed in combat.3

During World War II, the biggest development in autonomous warfare came from Russia, where pioneering inventors created the most effective robotic tanks ever used in active warfare. Known as “teletanks,” Russia’s pioneering robotic weapons were used by Soviet forces in the now famous “Winter War,” a three-month effort to invade Finland between late 1939 and early 1940. When the Russian advance stalled due to Finnish resistance, resulting in huge losses to Soviet forces, the Soviets unveiled their secret weapon: two battalions of remote-operated T-26 light tanks. The machines were operated by hydraulic mechanisms that could be triggered by remote operators from as far as a kilometer away. From the safety of their remote position, operators steered the tanks using buttons (joysticks were not available) and could fire mounted machine guns and flamethrowers.

TT-26 remotely controlled teletank, Karelian Isthmus, Russia, February 1940, via Wikimedia.


Winston Churchill (with cane) and others waiting to watch the launch of a DH.82 Queen Bee target drone in 1941. By War Office official photographer Horton (Capt), via Wikimedia.

By modern standards, the teletanks were simple, with mechanical complexity similar to what one might see in a twenty-first-century children’s toy. The teletanks had no audio or video, so pilots could not see or hear what was happening around the machines. They relied, instead, on guidance from soldiers with binoculars who attempted to help steer from a distance. If a teletank wandered out of radio range, the vehicle had no automated shutoff or security measures: it would simply stop, engine idling, until it received a signal from the transmitter again. Essentially, the teletank was a blind berserker on the field of battle, spraying fire and bullets indiscriminately. Though effective in the Winter War, they were no match for German anti-tank tactics and so the design was effectively retired after the invasion of Finland, marking both the first and last time that remote-controlled tanks were used in battle.4

Battlefield Predation

World War II introduced autonomous weapons into global warfare, but it was really in the wake of the war that the technology began to develop. One of the stepping-stones toward true drone technology was the De Havilland DH.82B “Queen Bee,” a cheap remote-controlled aircraft designed by the British military to enhance training for combat pilots. Pilots used the remote aircraft to practice attacking and evading attacks, thus providing a more challenging training exercise.

BQ-17 Flying Fortress drones over New Mexico, April 1946. By United States Army Air Forces, Scott Thompson, via Wikimedia.

The United States was also active in this field. In 1946, the US military conducted a now famous test flight in which a B-17 Flying Fortress, specially outfitted to be flown by remote, was flown from Hilo Naval Air Station in Hawaii to Muroc Army Airfield in California. This record-breaking remote flight proved that long-distance remote piloting was possible and might be an effective technology for future military designs.5 It was also during this time that the term “drone” came into general use for remote vehicles, along with the more official-sounding term UAV, for unmanned aerial vehicle.

During the Vietnam Conflict, the United States used a number of remote-controlled aircraft for reconnaissance, as decoys, and for distributing propaganda over Vietnamese cities. These early drones played a minor role in the conflict, but the process of researching and building new remote vehicles advanced the science of remote operation. Further steps forward came in the 1960s and 1970s, notably through US investment in the Lockheed MGM-105 Aquila program, which promised to deliver small propeller-powered drones that could be used to spy on enemy positions. The technology might have been promising, but internal disagreement within the military establishment doomed the project. As engineers were asked to add a growing list of capabilities, the budget ballooned from an expected $560 million to over a billion dollars, producing only a small number of prototype machines, and the project was ultimately cancelled without significant gains.

Historian P.W. Singer has said that 1995 was the “magic moment” in the history of UAVs, as military designers made a major leap forward by integrating remote piloting technology with data from the global positioning system (GPS). With GPS enabling precise navigation anywhere in the world, engineers were finally able to create drones with the kind of functionality that designers in World War II could only imagine. By the 1990s, advances in camera and microphone technology had also greatly improved drones’ ability to obtain and deliver information. The first and most influential prototype in this new generation of drones was the General Atomics RQ-1 Predator.6

A simulated Navy aerial reconnaissance flight off the coast of California in 1995 using a Predator drone. By Petty Officer 3rd Class Jeffrey S. Viano, US Navy, via Wikimedia.

The Predator series was perhaps the most influential and innovative US creation in the field of UAVs. Introduced in the 1990s, the Predator drone system was not officially retired until 2018. Over the course of more than 20 years, the Predator drone family evolved along with advancements in digital and weapons technology. The very first Predator was used in Central Intelligence Agency (CIA) operations during the Bosnian War, and it remained, primarily, a tool for CIA operations in the years that followed. Predator drones played a major role during the US invasions of Iraq and Afghanistan. Small enough to be piloted into remote locations without drawing too much attention, while also outfitted with a deadly payload, the Predator carried out American attacks that would likely have been considered too dangerous for human soldiers. The US military invested billions to develop new and better models. Although useful for a number of surveillance and covert missile applications, as of 2020 drones like the Predator and its more modern relatives are not able to compete with traditional piloted aircraft in aerial combat. Drones have been consigned to covert operations and have not yet become a fully functional combat technology.7

Predator operators at Balad Camp Anaconda, Iraq, in 2007. By Master Sergeant Steve Horton, United States Air Force, via Wikimedia.

Warfare in the Digital Age

How much have drones changed the nature of warfare? The answer largely depends on one’s frame of reference. Prior to the development of UAVs, the US military was already using long-distance missiles that allowed it to strike targets from afar, and, in some ways, drones have simply added to this ability. In terms of surveillance and small-scale military strikes, however, drones have been transformative.

Drones have primarily been used against radical political groups in Africa and the Middle East. These poorly armed, makeshift militias have little in the way of conventional military capabilities. In some cases this has proven a strength, as militia members have been able to use guerilla tactics to survive even when facing an enemy (the United States) with superior military capabilities.

Since drones came into use during the “Global War on Terror,” the Pentagon has engaged in propagandizing by understating both the number of drone strikes that have been conducted and the number of casualties resulting from these attacks. Most drone strikes have been conducted by the CIA. Citing national security considerations, the CIA has been reluctant to release accurate data on drone use. Journalists have attempted to counter this by using local reporting in areas where drone strikes have occurred to build a more accurate picture. The Bureau of Investigative Journalism, for instance, estimates that the United States conducted at least 14,040 drone strikes between 2004 and 2020 in the nations of Pakistan, Afghanistan, Yemen, and Somalia alone. These strikes resulted in the deaths of between 8,800 and 16,000 people. Of these, between 900 and 2,200 are believed to have been civilians, including 280–450 children. If the figures provided by investigative journalists are correct, drone warfare might be considered a serious and growing human rights concern, and the use of drones might constitute a serious breach of international conventions on warfare.8

As of 2020, UAVs are still overseen and piloted by human operators, but emerging technology makes it possible for drones to operate on an entirely autonomous basis. Does the use of fully autonomous weapons further exacerbate the ethical and moral issues raised by the use of drones? Many of the world’s leading technology executives and other figures have argued that autonomous weapons do raise serious concerns. In 2017, 116 figures from the tech industry issued an open letter to the world’s governments calling for an outright ban on autonomous weapons: “Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”9

The warnings issued by Elon Musk and other tech-industry leaders did not deter the world’s governments and, as of 2020, a number of the world’s technological powers are developing fully autonomous drones and other robots capable of locating, attacking, and killing without human supervision. The potential of this technology has been enhanced by the development of advanced artificial intelligence strategy systems that could be used to plan and execute military activities. Writing in The Conversation in 2019, AI researcher Toby Walsh argued against the use of autonomous weapons in combat.


“ROBOTS CAN OUTWIT US ON THE VIRTUAL BATTLEFIELD, SO LET’S NOT PUT THEM IN CHARGE OF THE REAL THING”
by Toby Walsh
The Conversation, October 31, 2019
Source Document

Artificial intelligence developer DeepMind has just announced its latest milestone: a bot called AlphaStar that plays the popular real-time strategy game StarCraft II at Grandmaster level.

This isn’t the first time a bot has outplayed humans in a strategy war game. In 1981, a program called Eurisko, developed by artificial intelligence (AI) pioneer Doug Lenat, won the US championship of Traveller, a highly complex strategy war game in which players design a fleet of 100 ships. Eurisko was consequently made an honorary Admiral in the Traveller navy.

The following year, the tournament rules were overhauled in an attempt to thwart computers. But Eurisko triumphed for a second successive year. With officials threatening to abolish the tournament if a computer won again, Lenat retired his program.

DeepMind’s PR department would have you believe that StarCraft “has emerged by consensus as the next grand challenge (in computer games)” and “has been a grand challenge for AI researchers for over 15 years”. In the most recent StarCraft computer game tournament, only four entries came from academic or industrial research labs. The nine other bots involved were written by lone individuals outside the mainstream of AI research. In fact, the 42 authors of DeepMind’s paper, published today in Nature, greatly outnumber the rest of the world building bots for StarCraft.

Without wishing to take anything away from an impressive feat of collaborative engineering, if you throw enough resources at a problem, success is all but assured.

Unlike recent successes with computer chess and Go, AlphaStar didn’t learn to outwit humans simply by playing against itself. Rather, it learned by imitating the best bits from nearly a million games played by top-ranked human players.

Without this input, AlphaStar was beaten convincingly by 19 out of 20 human players on the StarCraft game server. AlphaStar also played anonymously on that server so that humans couldn’t exploit any weaknesses that might have been uncovered in earlier games.

AlphaStar did beat Grzegorz “MaNa” Komincz, one of the world’s top professional StarCraft players, in December last year. But this was a version of AlphaStar with much faster reflexes than any human, and unlimited vision of the playing board (unlike human players, who can only see a portion of it at any one time). This was hardly a level playing field.

Nevertheless, StarCraft does have some features that make AlphaStar an impressive advance, if not truly a breakthrough. Unlike chess or Go, players in StarCraft have imperfect information about the state of play, and the set of possible actions you can make at any point is much larger. And StarCraft unfolds in real time and requires long-term planning.


Robot Wars

This raises the question of whether, in the future, we will see robots not just fighting wars but planning them too. Actually, we already have both.

Despite the many warnings raised by AI researchers such as myself—as well as by founders of AI and robotics companies, Nobel Peace Laureates, and church leaders—fully autonomous weapons, also known as “killer robots”, have been developed and will soon be used. In 2020, Turkey will deploy kamikaze drones on its border with Syria. These drones will use computer vision to identify, track and kill people without human intervention.

This is a terrible development. Computers do not have the moral capability to decide who lives or dies. They have neither empathy nor compassion. “Killer robots” will change the very nature of conflict for the worse.

As for “robot generals”, computers have been helping generals plan war for decades. In Desert Storm, during the Gulf War of the early 1990s, AI scheduling tools were used to plan the buildup of forces in the Middle East prior to conflict. A US general told me shortly afterwards that the amount of money saved by doing this was equivalent to everything that had been spent on AI research until then.

Computers have also been used extensively by generals to war-game potential strategies. But just as we wouldn’t entrust all battlefield decisions to a single soldier, handing over the full responsibilities of a general to a computer would be a step too far. Machines cannot be held accountable for their decisions. Only humans can be. This is a cornerstone of international humanitarian law.

Nevertheless, to cut through the fog of war and deal with the vast amount of information flowing back from the front, generals will increasingly rely on computer support in their decision-making. If this results in fewer civilian deaths, less friendly fire, and more respect for international humanitarian law, we should welcome such computer assistance. But the buck needs to stop with humans, not machines.

Here’s a final question to ponder. If tech companies like Google really don’t want us to worry about computers taking over, why are they building bots to win virtual wars rather than concentrating on, say, more peaceful e-sports? With all due respect to sports fans, the stakes would be much lower.10


Barriers to War

In his speech upon winning the Nobel Peace Prize, former President Jimmy Carter stated, “War may sometimes be a necessary evil. But no matter how necessary, it is always an evil, never a good. We will not learn how to live together in peace by killing each other’s children.”11

Over the course of world history, combatants have sought ways to keep themselves and their allies safe while still being able to strike out against their enemies. The development of the gun was one major step in this direction, enabling individuals to kill at a distance. Missiles and artillery were another, enabling armies to kill enemies at great distances while reducing the potential for injury. Ultimately, however, warfare has always been conducted by humans, to whom warfare is as much an emotional matter as a political or practical one.

Military institutions and politicians often try to erase this human angle. Soldiers, for instance, are often trained to view their enemies as inferior, morally flawed, or inhuman. This psychological technique, called “dehumanization,” is intended to make it easier for a soldier to kill an enemy combatant without suffering psychic dissonance. Dehumanization has proven a powerful technique, but it is impossible to erase the humanistic impulses present in all people, and empathy does play a role in warfare. Consider the now famous story of Christmas Day in 1914, when German and British forces fighting in the trenches of Europe momentarily put aside their animus to share Christmas drinks and even exchange presents. The human element acts against the violent impulses of humanity, and it is the capacity for empathy that restrains warfare and violence as tensions escalate.

But what happens when the human element is removed? By replacing trench warfare with guided missiles, and now with UAVs guided from thousands of miles away to strike at pixelated images on a screen, warfare has already evolved to minimize its emotional impact.
The United States has been covertly attacking and killing individuals in the Middle East for nearly 20 years, resulting in the deaths of thousands of civilians, including children, and yet both soldiers and American citizens are sheltered from the impact of these decisions. As of 2020, it is possible to remove the human element completely. With AI-guided programming and automated weaponry, the choice of whether to kill or to spare a life can now be left entirely to machines. Unlike human decisions, which are always made from both intellect and emotion, a machine’s decisions are guided by absolute directives and dispassionate calculations of cost and benefit. As Americans, and the rest of the world, move forward into this new era of warfare, the citizens of the world must make crucial decisions about how AI and robotics will be used, and ultimately must decide whether the power over life and death can be given to machines or whether it must remain in human hands and minds.

CONCLUSION

The use of robotics in military operations is among today’s most controversial technological topics. Drones have killed tens of thousands of people around the world, including civilians. Given this cost, some feel that autonomous war machines should be abandoned, while others want stronger regulation of the military’s use of such technology. Though many disapprove of the military’s use of AI and robotics, it is likely that many of the non-military robotic and AI accomplishments of the 2020s will grow out of research funded and conducted by the US military.

DISCUSSION QUESTIONS

• Should the United States utilize drones? Why or why not?
• What are the possible moral problems that arise with the use of robotics and AI in warfare? Use examples from the text.
• How does the use of drones reflect the history of military technology development?
• How might the use of drones intensify conflict?

Works Used

Carter, Jimmy. “Nobel Lecture.” 10 Dec. 2002, The Nobel Prize, www.nobelprize.org/prizes/peace/2002/carter/lecture/.

Connor, Roger. “The Predator, a Drone That Transformed Military Combat.” Smithsonian. 9 Mar. 2018, airandspace.si.edu/stories/editorial/predator-drone-transformed-military-combat.

“Drone Warfare.” The Bureau Investigates. The Bureau of Investigative Journalism, www.thebureauinvestigates.com/projects/drone-war.


Everett, H.R. “A Brief Early History of Unmanned Systems.” Mechanix Illustrated. 2016, mechanixillustrated.technicacuriosa.com/2017/02/19/a-brief-early-history-of-unmanned-systems/.

Everett, H.R. Unmanned Systems of World War I and II. The MIT P, 2015.

Gibbs, Samuel. “Elon Musk Leads 116 Experts Calling for Outright Ban of Killer Robots.” The Guardian. 20 Aug. 2017, www.theguardian.com/technology/2017/aug/20/elon-musk-killer-robots-experts-outright-ban-lethal-autonomous-weapons-war.

O’Hanlon, Michael E. “Is US Defense Spending Too High, Too Low, or Just Right?” Brookings Institution. 15 Oct. 2019, www.brookings.edu/policy2020/votervital/is-us-defense-spending-too-high-too-low-or-just-right/.

Saxena, V.K. The Amazing Growth and Journey of UAVs & Ballistic Missiles Defence Capabilities. Vij Books India, 2013.

Singer, P.W. “Drones Don’t Die—A History of Military Robotics.” HistoryNet, www.historynet.com/drones-dont-die-a-history-of-military-robotics.htm.

Sofge, Erik. “Tale of the Teletank: The Brief Rise and Long Fall of Russia’s Military Robots.” Popular Science. 7 Mar. 2014, www.popsci.com/blog-network/zero-moment/tale-teletank-brief-rise-and-long-fall-russia%E2%80%99s-military-robots/.

Walsh, Toby. “Robots Can Outwit Us on the Virtual Battlefield, so Let’s Not Put Them in Charge of the Real Thing.” The Conversation. 31 Oct. 2019, theconversation.com/robots-can-outwit-us-on-the-virtual-battlefield-so-lets-not-put-them-in-charge-of-the-real-thing-126104.


17
SPACE ODDITIES
Robotics in Space (1966–Present)

Introduction

Many of the researchers and engineers who created the first robots to survey hazardous landscapes were also working toward a more distant goal: creating robots to assist in the dangerous and difficult process of exploring space. Robotics has been an important facet of space exploration technology since the 1960s, and automated vehicles were seen as an important part of the safety preparations leading to the first moon landings. Both American and Soviet scientists deployed automated tools on the lunar surface.

The use of robots in space gained more widespread attention with the deployment of robotic explorers on Mars. These included the Pathfinder robots of the 1990s and the Mars rovers of the 2000s and 2010s. These machines, designed to blend “cute” robot design with cutting-edge functionality, provided data on the Martian landscape that fascinated Americans and observers around the world. The end of the Mars rover project in 2018 and the Internet myths surrounding this event also demonstrate persistent human prejudices and misunderstandings about robotics.

Topics Covered in this Chapter Include:

• NASA
• Lunar Surveyor
• The Voyager missions
• Pathfinder
• Sojourner
• The Mars rovers
• The Dawn spacecraft

This Chapter Discusses the Following Source Document: Evon, Dan, “Was a Mars Rover’s Final Message to NASA ‘My Battery Is Low and It’s Getting Dark?’” Snopes, February 18, 2019


In fictional visions of the future, robots are often depicted playing many roles. Since “Robby the Robot” was created for the 1956 film Forbidden Planet, science fiction from around the world has regularly included robots when authors and storytellers imagine distant human life in space. In science fiction, futuristic robots are often depicted as human “helpers,” accompanying and assisting humans who live and work in space.

The exploration of space, however, has moved ahead largely without humans because of the dangers involved. Robots, by contrast, can be purpose-built to survive the rigors of space and other planets. They can utilize the latest tools and technology and be protected from even extreme climatic conditions far more easily than scientists can equip a human to survive, much less complete any tasks, in these extreme environments. The history of robotics in space shows how scientists gradually progressed from autonomous or semi-autonomous probes to utilizing robots as independent explorers of extraterrestrial landscapes.

Robots Take a Look at the Solar System In the 1960s, Americans were more excited about space than at any time since. To a large degree, this was because of fear of Soviet powers, but it was also a result of the fact that President John F. Kennedy, one of the most popular presidents in history, asked Americans to get on board with the idea that they needed to take a leading role in exploring space. These dual motivators created an unusual alignment in American attitudes. Those motivated by fear embraced the cause of American space science because they believed they perceived a threat from the Soviet designs on space; American space science was, therefore, lumped in with massive overall increases in military spending. However, even those who did not believe in the Soviet threat were interested in an imagined egalitarian future of global cooperation and so were able to embrace America’s race to space. This was largely because of masterful public speakers like Kennedy shaping American conceptions of the US effort to explore space. In his now famous 1962 speech at Rice University, Kennedy appealed to both progressives and conservatives by delivering a dual message of hope and global peace, tinged with the threat of foreign encroachment and dominance: Those who came before us made certain that this country rode the first waves of the industrial revolutions, the first waves of modern invention, and the first wave of nuclear power, and this generation does not intend to founder in the backwash of the coming age of space. We mean to be a part of it—we mean to lead it. For the eyes of the world now look into space, to the moon and to the
planets beyond, and we have vowed that we shall not see it governed by a hostile flag of conquest, but by a banner of freedom and peace. We have vowed that we shall not see space filled with weapons of mass destruction, but with instruments of knowledge and understanding. Yet the vows of this Nation can only be fulfilled if we in this Nation are first, and, therefore, we intend to be first. In short, our leadership in science and in industry, our hopes for peace and security, our obligations to ourselves as well as others, all require us to make this effort, to solve these mysteries, to solve them for the good of all men, and to become the world's leading space-faring nation. We set sail on this new sea because there is new knowledge to be gained, and new rights to be won, and they must be won and used for the progress of all people. For space science, like nuclear science and all technology, has no conscience of its own. Whether it will become a force for good or ill depends on man, and only if the United States occupies a position of pre-eminence can we help decide whether this new ocean will be a sea of peace or a new terrifying theater of war. I do not say the [sic] we should or will go unprotected against the hostile misuse of space any more than we go unprotected against the hostile use of land or sea, but I do say that space can be explored and mastered without feeding the fires of war, without repeating the mistakes that man has made in extending his writ around this globe of ours. There is no strife, no prejudice, no national conflict in outer space as yet. Its hazards are hostile to us all. Its conquest deserves the best of all mankind, and its opportunity for peaceful cooperation may never come again. But why, some say, the moon? Why choose this as our goal? And they may well ask why climb the highest mountain? Why, 35 years ago, fly the Atlantic?
Why does Rice play Texas?1

By speaking to the fears and hopes of people across America, Kennedy demonstrated the true power of the presidency: persuasion. With public support peaking, Congress and the executive branch were able to direct tremendous resources toward space technology. However, the effort to explore space meant confronting numerous challenges, not least of which was the fact that space is an incredibly hostile environment. While most Americans may understand that space is a vacuum, that it has no atmosphere, and that it is impossible to survive there without a source of breathable air, many may not fully
understand how dangerous space is. Take, for instance, the fact that the vacuum creates massively variable temperatures. If an astronaut in space is facing the sun, the temperature difference between the astronaut's front and back can be as much as 275 degrees Fahrenheit. Further, many may not be aware that space is filled with deadly radiation.2 First, there are "galactic particles" from the Milky Way and beyond that can carry very high energies and be potentially fatal. In 1991, a cosmic ray detector operated out of the University of Utah discovered a proton moving with so much energy that it was equivalent to a baseball traveling at 55 mph.3 Space is also filled with "solar wind," radiation particles expelled by the sun. Radiation poisoning is just one of the reasons that NASA classifies space as a "hostile work environment"; other dangers include carbon dioxide poisoning, decompression sickness, bone injuries from gravity changes, and intracranial hypertension. In short, space is not really an appropriate environment for a human, and simply being there for even a few minutes requires hundreds of thousands of dollars in investment and extreme safety measures.

The architects of the "Space Race," the name given to the largely symbolic competition between the United States and the Soviet Union to "dominate" space, were also keenly aware that disasters, especially those involving loss of human life, were likely to pose a great setback to the advancement of space technology. Many politicians and members of the public were critical of the establishment of the National Aeronautics and Space Administration (NASA) and of the overarching goal of exploring space, some objecting because of the cost and others because of the difficulties involved. Because the Space Race had evolved into a high-profile effort, all eyes were on NASA, and every failure was scrutinized and stimulated new calls to abandon the expensive and dangerous effort.
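To get a feel for the scale of the 1991 Utah detection mentioned above, it helps to put the baseball comparison into numbers. The short sketch below is illustrative only; the baseball mass and the unit conversions are standard physics values, not figures from this book. It computes the kinetic energy of a regulation baseball at 55 mph and restates it in electron volts, the unit physicists use for particle energies.

```python
# Rough scale check: the kinetic energy of a baseball at 55 mph,
# expressed in joules and in electron volts.

MPH_TO_MS = 0.44704          # miles per hour -> meters per second
JOULES_PER_EV = 1.602e-19    # one electron volt, in joules
BASEBALL_MASS_KG = 0.145     # regulation baseball, ~145 grams

def baseball_energy_joules(speed_mph: float, mass_kg: float = BASEBALL_MASS_KG) -> float:
    """Classical kinetic energy, E = (1/2) * m * v**2."""
    v = speed_mph * MPH_TO_MS
    return 0.5 * mass_kg * v ** 2

energy_j = baseball_energy_joules(55)
energy_ev = energy_j / JOULES_PER_EV
print(f"{energy_j:.1f} J, or about {energy_ev:.1e} eV")  # roughly 44 J, on the order of 10**20 eV
```

For comparison, the Utah event, often nicknamed the "Oh-My-God particle," is commonly reported at roughly 3 x 10^20 electron volts, the same order of magnitude as the figure above: a single subatomic particle carrying the energy of a thrown baseball.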
Events like the training jet crash that killed astronaut Theodore Freeman in 1964, the 1966 training jet crash that killed astronauts Elliot See and Charles Bassett, and the fire that killed Apollo 1 astronauts Virgil Grissom, Ed White, and Roger Chaffee were not only tragic losses for the program but also threatened America's broader commitment to space exploration. In the lead-up to landing astronauts on the lunar surface, therefore, NASA called upon the expertise of roboticists to help create unmanned probes that could make trial runs and provide data before human missions were attempted. The first US spacecraft to land on the surface of the moon carried no astronauts; they were early robotic systems rigged for semi-autonomous flight and outfitted with equipment to study the lunar surface. Under the direction of Homer Newell (1915–1983), now considered one of the pioneers of US space science, NASA constructed and successfully utilized three families of robotic craft—Ranger, Surveyor, and the Lunar Orbiter—to study the moon.

The first image of the lunar surface sent back by Ranger 7, via NASA/JPL.

Project Ranger sent the first US robotic vehicles to reach the moon. Construction of the Ranger probes began in 1959, and they were designed to carry both scientific instruments and television cameras. Interestingly, a wide assortment of Ranger craft were tested and launched before NASA succeeded in reaching the moon. Several Rangers were destroyed, and others missed their targets; in total, the first six Ranger flights failed. These failures were so expensive and controversial, in fact, that they led to a political reorganization of NASA's leadership. The program's first success came in 1964, when Ranger 7 finally transmitted images back to Earth right before crashing into the lunar surface.4


Surveyor 1 launched in May 1966 and landed in June 1966, the first robot to successfully land on the moon’s surface, transmitting 11,000 images as well as information on lunar soil, reflectivity, and temperature. Above, Surveyor 1’s shadow on the lunar surface. Via NASA/JPL.

From 1966 to 1968, NASA entered a new phase in robotic lunar science, launching seven versions of a new craft known as Surveyors. Unlike the Ranger craft, which were essentially intended to take some pictures and then crash into the surface of the moon, the Surveyors
were designed to land softly on the surface, demonstrating that the overall goal of landing on the moon was achievable. Like the Rangers, the Surveyor probes were outfitted with television cameras and transmitters to provide scientists and the public with additional images of the lunar surface. The third through seventh Surveyor spacecraft also carried a robotic arm, a technology just beginning to enter the industrial market, to collect and sample lunar soil and to examine the structure of the lunar surface. The Surveyors also demonstrated the technology needed for an automated landing procedure and, in total, five of the seven Surveyor craft sent to the moon landed successfully.

At the same time that NASA was testing the process of landing on the moon with the Surveyor spacecraft, another series of craft was being sent into lunar orbit to send back more detailed photographic documentation of the moon's surface and topography. There were five Lunar Orbiter missions in total, all successful. The first three were sent into orbit to collect data on possible landing sites on the moon's surface, while the last two collected scientific data on areas of the moon outside of the Apollo missions' landing parameters. One interesting aspect of the Lunar Orbiter program was how scientists and engineers advanced photographic technology. Each Lunar Orbiter was equipped with a complete photo processing laboratory, the first automated film processing system. The orbiters took close-up and wide-angle shots of the moon on 70-millimeter film, which could then be developed, scanned, and converted to what were essentially digital signals for transmission back to Earth. All told, the Lunar Orbiters photographed 95 percent of the moon's surface with sufficient resolution to make detailed maps.5

Restored detail of the first image of Earth from the moon taken by Lunar Orbiter 1. By NASA/LOIRP, via Wikimedia.

Before the first person set foot on the moon, NASA had already completed much of the rigorous scientific investigation that would occur during the Apollo moon exploration era. In the years that followed, much of the actual exploration of space would be completed by autonomous or semi-autonomous craft. Other famous examples include the Voyager 1 and Voyager 2 probes, which NASA launched to explore the solar system. Both Voyager probes were launched in 1977 from NASA's launch site at Cape Canaveral, Florida. The spacecraft delivered groundbreaking photography and data from Jupiter and Saturn and from the planets' larger moons. Though only designed to function for five years, the Voyager probes proved even more effective than NASA engineers had planned, and it was eventually possible to use these semi-autonomous craft to survey Uranus and Neptune.6

NASA space probe Voyager 2, launched in 1977, is the only spacecraft to have ever visited Uranus and Neptune, and reached interstellar space in 2018. Voyager 2 has visited all four gas giant planets and discovered and photographed many of their moons. Via NASA/JPL.


The Voyager program also made a number of other breakthroughs in automation. As the probes spent years traveling toward the edge of the solar system, scientists back on Earth learned how to upgrade them by sending remote reprogramming instructions. These remote upgrades gave the Voyager craft far greater capabilities than their initial programming and essentially enabled NASA to turn what had been a 5-year mission into a nearly 40-year program exploring the solar system. Much of what is known about the surfaces of many of the solar system's planets came from the use of these semi-robotic and independent probes, which provided the raw data scientists needed to evaluate the chemical and physical properties of the planets. In 2012, Voyager 1 officially left the solar system, passing beyond the "heliosheath," the outer region of space dominated by the sun's solar wind, and crossing into interstellar space. Voyager 2 followed in 2018.7

The Voyager craft were also equipped with what are known as "Golden Records," specially designed gold-plated copper disks that contain recordings of sounds and images from Earth, designed to portray a small sample of the diversity of life on the planet. The Golden Records were designed by a committee chaired by famed American astronomer Carl Sagan, who is arguably one of the most important figures in the history of American space science and was one of its most important ambassadors to the general public. Sagan's committee chose 115 images, coupled with natural sounds such as waves hitting the shore, rain, thunder, birdsong, and whale song. Music from different cultures and eras was also included, in addition to recorded greetings in fifty-five languages and a printed message from President Jimmy Carter.


Symbolic Messages

In the late 1970s, when NASA was preparing to launch its now-famous Voyager probes into space, a committee led by American space scientist Carl Sagan decided to include, within the body of the two Voyager probes, a little taste of life on Earth. This took the form of what are called "Golden Records," phonograph disks containing recordings of sounds and images from Earth. Each is contained within a gold-plated metal sheath to ensure that the surfaces of the disks are not damaged. During the design process, Sagan and his team faced an interesting challenge: how does one communicate to an alien species what is on the disk and how to access it? Given that aliens would presumably have their own unique language, one that might bear no resemblance to human languages, it was determined that the message would need to be visual. Instead of words providing instructions, the scientists created images meant to transmit various kinds of information about the Golden Record. The upper left of the design contains a drawing of a phonograph and of the included stylus that can be used to "play" the record, as well as a visual demonstration of where to place the stylus to play the record from the beginning. Around the edge of the image of the record is a mathematical formula written in binary symbols. These symbols are meant to inform the user that one rotation of the record is supposed to take 3.6 seconds. This is intended to provide data that an alien race might use to play the record at the proper speed. In the right-hand corner, a series of images attempts to convey that visual messages can also be reconstructed from the signals recorded on the disk. The image provides a visual example of what a viewer would see when beginning to decode the pictures: a series of vertical lines that would appear at the beginning of the visual message.
After this, more numerical data is provided, again in binary numbers, to inform extraterrestrials that the duration of one of these "picture lines" is 8 milliseconds. Another image then attempts to explain the picture "raster" so as to allow viewers to correctly adjust the image. In the lower-left corner of the cover is a "pulsar map" that shows the location of the solar system in relation to 14 pulsars, rotating neutron stars that give off measurable and regular pulses of energy. Because pulsars are regular in their release of energy, they provide an excellent way to convey information about galactic location. Finally, in the lower right-hand corner is a drawing of a hydrogen atom in its two lowest states, connected by a line meant to indicate the amount of time that a hydrogen atom takes to transition from one state to another. The covers were also outfitted with tiny pieces of uranium-238, whose steady radioactive decay can, therefore, represent
a basic clock. By examining this part of the cover, an extraterrestrial would potentially be able to determine how long the object has traveled as well as the approximate distance one would need to traverse to reach Earth.a

Works Used

a. "The Golden Record Cover." NASA/JPL, Jet Propulsion Laboratory, voyager.jpl.nasa.gov/golden-record/golden-record-cover/.
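The timing figures described in this sidebar can be restated in the cover's own units. The sketch below is illustrative only; the hydrogen-line frequency is a standard physical constant, and the exact binary values engraved on the cover are not reproduced here. It expresses the 3.6-second rotation and the 8-millisecond picture line as counts of hydrogen transition periods, the fundamental time unit suggested by the hydrogen diagram in the cover's lower right-hand corner.

```python
# The Golden Record cover expresses durations in binary as multiples of one
# fundamental unit: the period of the hydrogen hyperfine transition (~0.7 ns).

HYDROGEN_LINE_HZ = 1_420_405_751.77      # 21 cm hydrogen line frequency, in Hz
PERIOD_S = 1.0 / HYDROGEN_LINE_HZ        # one "tick" of the hydrogen clock

def to_hydrogen_units(seconds: float) -> int:
    """Express a duration as a whole number of hydrogen transition periods."""
    return round(seconds / PERIOD_S)

rotation_units = to_hydrogen_units(3.6)    # one rotation of the record
line_units = to_hydrogen_units(0.008)      # one "picture line" of the encoded images

print(f"rotation: ~{rotation_units:.3e} hydrogen periods")     # on the order of 5 billion
print(f"picture line: ~{line_units:.3e} hydrogen periods")     # on the order of 11 million
```

This is why a recipient who recognized the hydrogen diagram could, at least in principle, recover absolute playback timing without sharing any human unit of time.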

The Golden Record cover with instructions for extraterrestrials, via NASA/JPL.


The Red Planet Unveiled

Once the moon had been explored, the next goal for NASA and space scientists in many other parts of the world was Mars. For many years, people imagined that Mars might harbor life. There were numerous reasons for this, but the dream of "Martians" was largely a matter of proximity. Mars is comparatively close to Earth, and so astronomers, from the earliest era of the discipline, have been able to get a better view of Mars than of most objects in the solar system. This is the same reason that, for many centuries, people imagined there might also be life on the moon, a theory that did not go out of fashion entirely until humans had been there and revealed that it was, in fact, a lifeless chunk of space rock. Mars is also likely a lifeless chunk of space rock, though larger and more chemically complex than the moon. NASA scientists knew this well before it was possible to land a craft on the Martian surface, but there was then, and for some remains, the hope that Mars might harbor some kind of life, even if only in the form of microbes.

The proximity of Mars also means that it is simply more practical to mount an exploration of the Red Planet than of any other planet in the solar system. Though NASA ultimately hoped to land humans on the Martian surface, that goal remains too dangerous and costly. A more achievable goal was to utilize robotic technology, similar to that used to complete the early studies of the lunar surface, to advance our understanding of Mars and to prepare for a potential future effort to land humans there. As far back as the 1970s, scientists were working on designs for automated explorers capable of traveling the surface of Mars and collecting data. However, NASA scientists did not want to risk their most advanced robots on their first attempt to land on the Martian surface.
The solution was to design and launch a more basic system. The robotic probe Mars Pathfinder was launched on December 4, 1996, and landed in the region of Mars known as Ares Vallis on July 4, 1997. The Pathfinder lander carried a 23-pound rover named Sojourner, in honor of civil rights pioneer Sojourner Truth. This cutting-edge planetary exploration robot was equipped with a battery of scientific instruments, many of which were included to test sampling and other technologies that NASA scientists hoped to incorporate into more advanced future robotic systems.

The system used to land Pathfinder and its Sojourner rover was an innovation in space landing technology. Rather than using controlled thrusters to manage atmospheric entry and landing, NASA scientists determined that it would be easier and less problematic to essentially create a controlled plummet onto the Martian surface. The spacecraft was therefore outfitted with a parachute and with airbags covering the craft's surface to reduce the impact of landing.8

The Mars Pathfinder landed on Mars in 1997 with roving probe Sojourner. Originally sent as a technology demonstration to prove a rover could be landed on Mars, the mission outlived its primary design life and sent back an unprecedented amount of data. Above, the Sojourner investigates the surface of the rock "Yogi." Via NASA/JPL.

After its successful landing, the Pathfinder module delivered 2.3 billion bits of data back to NASA, including 16,500 images from its three cameras. In addition, Pathfinder was able to collect and deliver 15 chemical analyses utilizing an Alpha Proton X-ray Spectrometer attached to the machine. Though the machine was given a conservatively short window of activity, the Sojourner rover lasted 12 times longer than NASA scientists expected, delivering far more data than initially anticipated and greatly advancing knowledge of the Martian atmosphere, geology, and topography. It was the Pathfinder mission that uncovered compelling evidence that the surface of Mars was once covered with liquid, running water.9

While space aficionados followed the progress and discoveries of Sojourner and Pathfinder in great numbers, it was not until the launch of the Mars Exploration Rovers, in 2003, that NASA's robotics program gained widespread pop-cultural traction. In 2003, NASA launched two new rovers, Spirit and Opportunity, which landed a few weeks apart in January of 2004. Informed by discoveries made by Pathfinder and Sojourner, one of the chief goals of the Opportunity and Spirit rovers was to investigate evidence of liquid water and water activity on Mars. This was, in turn, part of the effort to discover whether Mars had previously had the conditions that might have made it possible for life to evolve on the planet, or whether, perhaps, there was still microbial life beneath the surface or hiding in some sheltered part of the Martian landscape.

Identical Mars rovers Spirit and Opportunity were sent to opposite sides of Mars in 2004 and found evidence that Mars was wetter long ago and that it could have supported microbial life. Originally designed for 90-day missions, Spirit concluded its mission in 2010, while the last communication from Opportunity was in 2018 during a planet-wide dust storm. Via NASA/JPL.

Opportunity and Spirit were the advanced rovers that were being planned even in the early 1990s, when NASA sent Sojourner and the Pathfinder craft to the Martian surface. Sojourner, now classified as a "microrover," was only 66 centimeters (26 inches) long, 48 centimeters (19 inches) wide, and 30 centimeters (12 inches) tall, and it roved the surface on six wheels, each 13 centimeters (5 inches) in diameter and designed to work independently so that a single stuck wheel would not hinder the robot's movement. By contrast, Spirit and Opportunity were around 1.5 meters (4.9 feet) tall and each weighed some 180 kilograms (400 lbs.). Both
machines were equipped with two different kinds of spectrometers to analyze the composition of rocks and soil, stereo cameras that could be remotely positioned to look at certain features, solar panels to provide continuous energy, and various devices to beam information back to scientists on Earth.10 Spirit and Opportunity unveiled further evidence of a watery past on Mars, including strange circular rocks nicknamed "blueberries" that continue to puzzle scientists, as well as evidence of past glaciations and other signs of water eroding the landscape. The data collected and sent back to Earth by Spirit and Opportunity enlivened the debate about whether or not there had been life on the Martian surface and has led some to believe it is possible that life still exists in pockets of subterranean water.

Spirit was retired in April of 2010 after it became stuck in a patch of sandy substrate. Engineers and directors worked for months to try to free the rover, but ultimately Spirit's position, once stuck, was such that the machine's solar panels could not collect enough sunlight, and Spirit slowly lost power. Though only intended for a 90-day mission, Spirit was active for six years and traveled 4.8 miles across the Martian surface before NASA declared the robot "lost" in 2011. Opportunity lasted far longer than Spirit, remaining active until 2018, when a dust storm lasting four months raged over the area it was exploring. Scientists believe that the dust covered the robot's solar panels, severing the link between the rover and NASA. NASA announced that it was abandoning any further attempts to contact the rover on February 12, 2019, marking the end of one of NASA's most successful experiments in robotic space exploration.
Opportunity was active for 14 years and traveled over 28 miles across the Martian surface.11 Whereas Sojourner never achieved widespread mainstream popularity, many thousands more Americans and others around the world followed the progress of Spirit and Opportunity. A number of factors contributed to this change in public interest. For one thing, the spread of Internet technology and digital connectivity enabled NASA to let citizens connect with the program in more innovative ways. Images and recordings from Spirit and Opportunity were shared by millions, and both machines developed "fan clubs" back on Earth as scientists and laypeople alike checked in on the rovers and their progress. Another factor was that the Spirit and Opportunity rovers were designed in such a way as to evoke the non-humanoid robots and droids of science fiction. With distinguishable "arms" equipped with tools and distinct "heads" mounted atop neck-like poles, Spirit and Opportunity had personality by aesthetic design, and this helped people connect with them. By contrast, Sojourner, which looked like little more than a remote-controlled vehicle, did not engender the same kind of emotional and personal interest.


People's connection with the Opportunity and Spirit robots was such that when an unusual Internet meme circulated, purporting that the Opportunity rover had delivered a sad and very humanistic message before NASA lost contact, people around the world eagerly embraced the story as true. This 2019 article from the fact-checking website Snopes discusses the rumored final message of NASA's Opportunity:

“WAS A MARS ROVER’S FINAL MESSAGE TO NASA ‘MY BATTERY IS LOW AND IT’S GETTING DARK?’”
by Dan Evon
Snopes, February 18, 2019
Source Document

A poetic interpretation of the Opportunity rover’s final transmission from Mars went viral in February 2019.

Claim: The Opportunity rover’s final message to NASA from Mars was “My battery is low and it’s getting dark.”

Rating: Mostly False

What’s True: The Opportunity Rover’s final transmission to NASA from Mars was a bare-bones data dump indicating it was low on power as a storm approached.

What’s False: The Opportunity Rover didn’t actually send the words “My battery is low and it’s getting dark”; this was a poetic translation of the rover’s more technical transmission.

Origin: On 13 February 2019, NASA announced that the mission of a robotic rover named Opportunity had
come to an end after the device had spent 15 years exploring the surface of Mars:

One of the most successful and enduring feats of interplanetary exploration, NASA’s Opportunity rover mission is at an end after almost 15 years exploring the surface of Mars and helping lay the groundwork for NASA’s return to the Red Planet. The Opportunity rover stopped communicating with Earth when a severe Mars-wide dust storm blanketed its location in June 2018. After more than a thousand commands to restore contact, engineers in the Space Flight Operations Facility at NASA’s Jet Propulsion Laboratory (JPL) made their last attempt to revive Opportunity, to no avail. The solar-powered rover’s final communication was received June 10.

As the online world mourned the loss of the famous Martian explorer, many social media users started sharing a quote that supposedly constituted Oppy’s final transmission to NASA: “My battery is low and it’s getting dark.” This quote quickly made its way around the Internet, shared by celebrities, reporters, and social media influencers. Some people also created tributes to Opportunity that featured the rover’s alleged final transmission. . . .


Although many individuals and news outlets cited these words as if they constituted a verbatim reproduction of Opportunity’s final transmission, the phrase was not a literal reproduction of the robotic rover’s last statement. The quote originated with a Twitter thread about Oppy’s demise by [sic] posted by KPCC science reporter Jacob Margolis. At one point, Margolis said the rover’s final transmission “basically” translated as “my battery is low and it’s getting dark.” . . . This quote quickly escaped the context of Margolis’ Twitter thread, however, and started to circulate as if he were replaying a verbatim message from the Mars rover. On 16 February 2019, Margolis published an article on The LAist website explaining how his words came to be taken out of context:

My tweet is an interpretation of what two scientists from the Mars Exploration Rover Mission told me. Deputy Project Scientist Abigail Fraeman spoke about what it was like when they realized the June [2018] dust storm was going to be particularly bad, and that Oppy’s life was in danger. They told it to conserve energy. “It’s hard, because you know [the storm’s] coming . . . but there’s nothing you can do to stop it,” Fraeman said. “By Thursday, we knew that it was bad. And then by Friday, we knew it was really bad, but there was nothing we could do but watch. And then it was Sunday, we actually got a communication from the rover and we were shocked,” she said. “It basically said we had no power left, and that was the last time we heard from it.” John Callas, the project manager, offered another poignant detail about the final communication
with Oppy: “It also told us the skies were incredibly dark, to the point where no sunlight gets through. It’s night time during the day.” “We were hopeful that the rover could ride it out. That the rover would hunker down, and then when the storm cleared, the rover would charge back up,” he said. “That didn’t happen. At least it didn’t tell us that it happened. So, we don’t know.”

When NASA announced the completion of Opportunity’s mission in February 2019, the Mars rover hadn’t communicated with the space agency since the previous June. The solar-powered Opportunity was low on power when one of the largest dust storms observed on Mars engulfed the planet and, essentially, turned “day into night,” as the Planetary Society wrote:

On June 20th, NASA announced the storm had gone global. To be more accurate, it evolved into a planetary-encircling dust event (PEDE), lofting enough dust into the atmosphere to completely blanket the planet and block out the Sun. In other words, the Red Planet and most all its features were hidden from most orbiting instruments beneath an opaque, beige dust cloud. The atmospheric scientists taking measurements with instruments on the orbiters were soon blocked out.

Oppy sent its final message from the surface of Mars on 10 June 2018:

Opportunity downlinked what would turn out to be her final message from Mars on June 10, 2018. The bare bones contents of that missive informed her crew that a monster dust storm was lifting dust all around Endeavour Crater and was turning day into night there. Minutes later, the solar-powered robot field geologist presumably shut down and went into a kind of hibernation mode to wait out the storm.


NASA attempted to communicate with Opportunity many times after the storm passed, but the rover never responded. . . .12

The widespread myth that the Opportunity rover had delivered a plaintive final message to NASA scientists reflects persistent misunderstandings about the state of modern artificial intelligence and robotics. It also stands as a prime example of the “Eliza effect,” discussed earlier in this book, in which humans ascribe humanistic emotions and qualities to objects and animals because human brains have evolved to search for signs of emotional and sentient contact. Humans will therefore “see” sentience, emotion, and humanistic characteristics even where none exists. The Opportunity rover had absolutely no feelings about being stranded on Mars, where it will likely remain for as long as its mechanical body stays intact. The rumored last words of Opportunity also reflect the deeper understanding of space and of Mars that humans have developed through the data provided by the rovers. Through the images shared mostly by robots of the moon and Mars, people have become intimately acquainted with the reality of space: a cold, empty place that is hostile to life and desolate in the extreme sense of the word.

The most recent addition to NASA’s rover family is Curiosity, the most advanced (and largest) robot yet sent into space. Curiosity, which at one ton is roughly the size of a sport utility vehicle, is powered by nuclear isotopes rather than solar energy, which means the machine is not vulnerable to the solar-panel obscuration that ended previous rover missions. The Curiosity rover was dispatched with the primary mission of determining whether or not Mars had been, or is, capable of hosting life and, since its deployment, Curiosity has revolutionized the study of Mars with data suggesting that organic materials still exist within the 3.5-billion-year-old Martian substrate.
Though NASA is, as of 2020, planning to dispatch even more advanced rovers to the Martian surface in the near future, Curiosity’s mission is still only beginning, and the robot will likely continue delivering important scientific data for at least the next decade.


It is no accident that the Opportunity and Spirit rovers were designed with more personality than their Sojourner predecessor. Advancing space science has long depended on public interest, as this is the leverage that often determines whether politicians will support efforts to fund new projects. While the functionality of NASA’s robotic explorers remains the organization’s chief goal, designers and engineers are not oblivious to the need to make their creations as people-friendly as possible. This effort can be seen in the Curiosity rover, which was designed with the capability to take photos of itself, or “selfies.” NASA engineers also gave the machine the ability to combine multiple images through photo editing, producing shots that appear as if someone else were photographing the rover. The robot’s famous “selfies” sometimes involve clever editing of as many as 57 separate images, combined with visual trickery, to create this media-friendly illusion. These “selfies” are part of the reason that Curiosity has attracted so much attention in the mainstream media and from people following its exploration of Mars, and part of the reason that public interest in space technology has surged again in America and elsewhere.13

A Curiosity selfie taken in February 2020 before the rover set a record for the steepest terrain it has ever climbed. Via NASA/JPL-Caltech/MSSS.


Although NASA and politicians have claimed that the goal is to land humans on the surface of Mars, humans are too fragile for most of the research that needs to be done to advance the study of the universe beyond the Earth. Robots like the Opportunity, Spirit, and Curiosity rovers are therefore the most practical way to move the science of space exploration forward. If humans do someday begin organizing permanent settlements beyond Earth, robots will most likely be an important part of the preparations needed to achieve that goal as well. To help Americans and the rest of the world feel “involved” in this process, NASA has taken steps to make its machines more attractive and relatable. In 2019 and 2020, for instance, NASA began issuing tweets purportedly “written” by its newest rover, Perseverance, which was launched in July of 2020 and is expected to land on the Martian surface in February of 2021. By endowing its robots with personality, whether in the form of educational “tweets” or “selfies” snapped from distant extraterrestrial vistas, scientists and their public relations colleagues work to keep people on Earth feeling as connected as possible to this chapter of humanity’s exploration of space and to its potential future goals.

CONCLUSION

The vacuum of open space is one of the most hazardous environments humanity has ever encountered. Extreme cold, pressure, radiation, and many other fatal hazards await humans who venture there. Machines are, in many ways, more appropriate space laborers and explorers. They can move independently and perform complex, difficult tasks without suffering from any of the hazards deadly to biological life. These robotic voyagers have been humanity’s best tool for learning about the solar system. As of 2020, robots are still exploring space and the surface of Mars, delivering new and increasingly complex data on the universe beyond the Earth.



DISCUSSION QUESTIONS
• Why are robots better astronauts than humans?
• Should NASA continue to invest in robots for space exploration? Why or why not?
• Does the myth about the Opportunity rover’s “last words” reflect the Eliza effect? Why or why not?
• Should sending humans to Mars be one of NASA’s leading priorities? Why or why not?

Works Used
Bartels, Meghan. “Curiosity Snaps Epic ‘Selfie’ on Mars as It Moves on to New Martian Sights.” Space. 29 Jan. 2019, www.space.com/43150-mars-curiosity-selfie-leaving-vera-rubin-ridge.html.
Dryer, Ben. “Explainer: How Hostile Is Space?” The Conversation. 6 Aug. 2014, theconversation.com/explainer-how-hostile-is-space-27903.
Evon, Dan. “Was a Mars Rover’s Final Message to NASA ‘My Battery Is Low and It’s Getting Dark?’” Snopes. 18 Feb. 2019, www.snopes.com/fact-check/opportunity-rovers-final-words/.
Gill, Victoria. “Nasa’s Voyager 2 Probe ‘Leaves the Solar System.’” BBC News. 10 Dec. 2018, www.bbc.com/news/science-environment-46502820.
Hall, R. Cargill. Lunar Impact: The NASA History of Project Ranger. Dover Publications, 2010.
Harland, David M. NASA’s Moon Program: Paving the Way for Apollo 11. Springer, 2009.
Howell, Elizabeth. “Sojourner: Roving on Mars for the First Time.” Space. 24 Sept. 2012, www.space.com/17745-mars-pathfinder-sojourner-rover.html.
“John F. Kennedy Moon Speech—Rice Stadium.” 12 Sept. 1962. NASA. 2020, er.jsc.nasa.gov/seh/ricetalk.htm.
Mann, Adam. “The Mars Rovers.” Space. 28 Feb. 2019, www.space.com/mars-rovers.html.
“Mars Pathfinder.” NASA. NASA Science. MARS Exploration Program. 2019, mars.nasa.gov/mars-exploration/missions/pathfinder/.
“MER Tools of the Trade.” Exploratorium. Museum of Science. 2020, www.exploratorium.edu/mars/mertools.php.
“Planetary Voyage.” NASA. JPL. Jet Propulsion Laboratory, voyager.jpl.nasa.gov/mission/science/planetary-voyage/.


Walker, John. “The Oh-My-God Particle.” Fourmilab. 4 Jan. 1994, www.fourmilab.ch/documents/OhMyGodParticle/.


18
THE ULTIMATE STRATEGY MACHINE
Robots and the Great Human Strategy Games (1996–Present)

Introduction

Can a machine outthink a human? The answer is unequivocally “yes.” By the 1990s, computer engineers were able to create algorithms that enabled computers to compete with human players in strategy games. The story of how IBM’s Deep Blue computer competed with the greatest human chess players is relatively well known, but computers have also been designed to compete in other strategy games, such as checkers and the Chinese game Go. More recently, computers have been tested at trivia-based games and at “imperfect information” games like poker. In all cases, it turns out that computers are able to beat human minds with regularity and, in many cases, quite easily. Instant calculation, including complex optimization and advanced analysis, is the key factor that enables computerized “minds” to beat humans, and studying this process has informed scientists about the kinds of thinking patterns involved in advanced strategic processes.

Topics Covered in this Chapter Include:
• Deep Blue
• Championship chess
• Strategy games
• Imperfect information games

This Chapter Discusses the Following Source Document: Vincent, James, “Former Go Champion Beaten by DeepMind Retires after Declaring AI Invincible,” The Verge, November 27, 2019



The process of human evolution means that those who survive and have the most offspring have a greater impact on the next generation. The development of the human species is a matter of natural selection (which individuals live and die in each generation) and sexual selection (which individuals have the most reproductive success within each generation). Genes combine in different ways such that no two individuals, even twins, are precisely the same or have exactly the same capabilities. Humans have dramatically variable characteristics and abilities. Some, such as professional athletes, have innate physical characteristics that enable them to perform incredible physical feats. Just as physical characteristics vary from person to person, humans also display variation in intellectual prowess. Some individuals have innate qualities that enable them to perform intellectual tasks at a high level, while others struggle with certain intellectual tasks. Intellectual skills, like physical skills, exist on a spectrum. While any individual can improve intellectual or physical skills, there are innate qualities that enable some individuals to achieve intellectual feats that might be impossible for others.

Over the millennia, cultures around the world have developed traditions to identify and celebrate those with advanced abilities. Professional sports teams and the Olympic Games have long served as a way to recognize athletic skill, celebrating the dedication, training, and practice required to master a task as well as the genetic variation that grants some individuals innate advantages. Intellectual achievement has likewise been celebrated, through honors such as the Nobel Prize and through games that pit individuals against one another in contests of strategy.

There are many kinds of games that test intellectual abilities. Among the most popular are games that fall into the category of “abstract strategy,” like chess, checkers, or the Chinese game Go. There are also other types of games in which players lack information that might be useful in creating their strategy, and these games are thus a blend of intellectual skill and random chance. Games like poker and blackjack fall into this category. Other games depend on memory. A person’s memory depends on innate characteristics, but individuals can also train and strengthen memory in specific ways. Trivia contests, popular as television game shows, are among the best-known examples of intellectual games based on the ability to recall information or to deduce correct answers. Since the 1980s, computer programmers have been looking at how to program artificial minds to compete with human minds in strategy and intellectual games. To achieve this, it was necessary to break down the various intellectual processes involved in each game and to find a way to translate this


process into code. Learning how to build intellectual machines has enlightened scientists about the processes involved in human intellectual activities and has also demonstrated the limitations of the human mind.

Deep Strategy

Chess is the most popular strategy game in Western culture. It was actually invented in India around the sixth century and was then imported into Persia, from where it spread through the Muslim world. During the great intellectual flowering of the Muslim empires, chess was a well-respected game of the intellectual elite, but it also had a dedicated following among the lay public. Beginning in 711, Muslim forces invaded and later conquered the Iberian Peninsula, taking control of what is today modern Spain and Portugal. This is how a game then closely associated with the Islamic world became westernized, spreading from Spain throughout Europe. The ancient game differed considerably from its modern descendant; rule changes in the 1500s created the modern game still played around the world.1

The basics of chess are simple. It is a two-player strategy game played on a board of 64 squares in an 8 × 8 grid. Each player begins with 16 pieces of 6 different types, each of which moves across the board in a different way. The object of the game is to trap the opponent’s king in checkmate; along the way, players capture enemy pieces by moving their own pieces, within the specified rules, onto the squares those pieces occupy. To win at chess, players must be able to think ahead, both in terms of their own and their opponents’ actions. Chess strategy involves imagining the various ways that one’s opponent might react to each move, then moving to protect one’s own pieces from elimination while attempting to eliminate the opponent’s pieces. According to TheChessWorld, an online magazine for chess enthusiasts, some of the best players in the world claim that they can calculate the possible permutations a game might take up to 15–20 moves ahead.2 The further a chess strategist looks into the possible future, the more difficult the calculation becomes, as each move forward introduces a whole new set of variables that could affect how the game proceeds.
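The scale of this look-ahead problem can be made concrete with a back-of-the-envelope calculation. A commonly cited textbook estimate (an assumption here, not a figure from this chapter) is that an average chess position offers roughly 35 legal moves, so the number of distinct move sequences grows exponentially with search depth:

```python
# Rough illustration of game-tree growth in chess. The branching
# factor of ~35 legal moves per position is a standard textbook
# estimate (often attributed to Claude Shannon), not an exact figure.
BRANCHING_FACTOR = 35

def continuations(plies: int) -> int:
    """Approximate number of distinct move sequences `plies` half-moves deep."""
    return BRANCHING_FACTOR ** plies

if __name__ == "__main__":
    for plies in (2, 4, 10, 20):
        print(f"{plies:>2} plies: ~{continuations(plies):.1e} sequences")
```

Ten plies (five moves by each player) already yields quadrillions of sequences. The 15–20-move look-ahead that top players describe is possible only because humans consider a tiny, intuitively selected fraction of these lines, while chess programs must prune the tree aggressively rather than enumerate it.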



A Very Turkish Game

The creation of machines capable of calculating strategy is also an intellectual pursuit, and it should not be surprising that intellectuals interested in strategy also imagined the possibility of building machines that could calculate odds and make strategic decisions. In 1769, Hungarian engineer and inventor Baron Wolfgang von Kempelen built a “machine” that featured a chess board and an automaton player made to look like a Turkish mystic, complete with a turban. Von Kempelen claimed that the machine could beat the world’s strongest players, and its first exhibition was at the Habsburg Court in 1770, followed by sporadic exhibitions over the following decades; after Kempelen’s death in 1804, the machine was purchased by a Bavarian huckster named Johann Nepomuk Maelzel. As the “Mechanical Turk,” or simply the “Turk,” made the rounds across Europe, some of history’s most famous intellectuals and leaders visited and even engaged the machine to test their prowess. In 1783, famed American statesman Benjamin Franklin, visiting Paris, challenged the Turk to a game, which he lost. In 1809, French general Napoleon Bonaparte also lost to the infamous machine. Newspapers carried thousands of articles about it, some of which featured fantastic, yet vague, claims about the machine’s function.

In reality, Kempelen, and Maelzel after him, were pulling the wool over the world’s eyes. The Mechanical Turk had a hidden chamber in which a human chess master sat, manipulating the arms of the mechanical Turk from below. By employing master chess players, Kempelen and Maelzel convinced many observers that they had achieved the marvel of mechanical strategy, and the secret behind the machine remained hidden for decades.a

Interestingly, Amazon borrowed the name of the infamous mechanical hoax for a venture known as “Amazon Mechanical Turk,” a crowdsourcing marketplace built on a very different concept. Mechanical Turk allows employers in 49 countries to post jobs, which Amazon calls human intelligence tasks (HITs), and these jobs can then be claimed by workers acting as private contractors. Jobs offered through the system are those that humans currently perform better or more efficiently than machines.

Works Used
a. Fourtané, Susan. “The Turk: Wolfgang von Kempelen’s Fake Automaton Chess Player.” Interesting Engineering. 31 Aug. 2018, interestingengineering.com/the-turk-fake-automaton-chess-player.



Engraving from a 1789 pamphlet depicting the supposed secret workings of “The Turk.” Joseph Racknitz, via Wikimedia.


The roots of mechanical chess can be traced back to AI pioneer Alan Turing, who, in the mid-1940s, tried to create the kinds of algorithms necessary to enable a machine to play chess. American engineer Claude Shannon, another towering figure in the history of robotics and AI, was developing a similar system in 1949 while at Bell Labs. Turing won the race, creating the first chess algorithm around 1950, a “paper machine” later called “Turochamp.” However, no computer of the era could play chess using the algorithm, so Turing played a game against a friend in which he worked through the algorithm by hand to calculate every move. Because Turing, though undoubtedly one of the world’s greatest mathematical minds, could not calculate with the rapidity (or accuracy) of a machine, each move in the game reportedly took a half hour or more to complete. In the end, Turing lost, despite the algorithm, but he had proven the concept, and a 1953 paper featuring his algorithm ignited interest in creating a true chess computer.3

From the 1950s to the 1990s, computer chess advanced in small steps. The first program capable of playing a complete game of chess against a human was created in 1957 at IBM by researcher Alex Bernstein. The program took roughly 8 minutes per move but could beat inexperienced human players. The invention of graphical user interfaces (GUIs), which allow users to interact with a computer by manipulating visual images on a screen, changed the computer chess effort by letting players play a visual game against a computer. The first graphic chess system was developed by NASA researcher Chris Daly in 1970, a prototype that would presage many future systems. In the late 1970s, a successful series of computer chess programs under the name Sargon was created for the first generation of personal computers.
In 1982, the company Novag combined robotics with chess in the Robot Adversary, a chess computer equipped with a robotic arm that moved the actual pieces on the board while playing against a human opponent. Among the first widely popular chess programs was Chessmaster 2000, released in 1986 for the Commodore 64, Amiga, and MS-DOS computers. Along the way, programmers had experimented with chess programs intended to have the strategic capability to perform against the world’s greatest chess masters. Chess masters played against computers a number of times in the 1980s, but the programs could not yet beat the best human players, mainly because winning a game of chess requires advanced predictive abilities. Human chess players also use intuition and psychological gambits to “trick” opponents into making moves that leave them vulnerable. Although a computer system might be able to calculate odds faster and more accurately than a human, predicting and adapting to an opponent’s moves requires a subtle suite of cognitive skills that could


not be programmed into a computer at that time. However, programmers working on chess algorithms in the 1970s and 1980s were able to learn from matches against advanced human players, and this enabled them to refine their algorithms accordingly. In 1996, a program called Deep Blue, created at IBM labs, became the first chess computer to win a game under standard tournament conditions against a reigning world champion, the Russian grandmaster Garry Kimovich Kasparov. In 1985, Kasparov had competed in an exhibition pitting himself against 32 computers, each designed to play chess. Kasparov struggled a bit, with the exhibition lasting five hours, but he beat all 32 computers. In 1996, Kasparov was challenged to play against IBM’s cutting-edge supercomputer, and he agreed. The 3,000-pound computer proved a tough challenger, and the two competed in six hard-fought games. Kasparov won the match 4–2, winning three games and drawing two, but Deep Blue managed to win one.

World chess champion Garry Kasparov (by S.M.S.I., Inc., Owen Williams, The Kasparov Agency) played IBM supercomputer Deep Blue (by James) twice, winning the 1996 match 4 games to 2 but losing the following year, marking the first computer defeat of a reigning world champion. Via Wikimedia.


The 1996 match marked the first time a computer had won a game against a reigning world champion, and many in the global chess community watched out of fascination but also with fear. AI was in its infancy, and, at the time, the dream of computers with humanistic intellectual qualities was still in the realm of science fiction. For many, chess represents the ultimate human test of imagination, intellectual calculation, and strategy and, until the late twentieth century, human minds were still the best computers the world had ever known. Sure, computers could calculate more rapidly and accurately, but when it came to the true intellectual arts, the kind that required a mix of intellectual skills, humans still reigned supreme. When Deep Blue won a game from Kasparov in 1996, it was a signal that the era of human intellectual supremacy was coming to an end. Kasparov’s victory in the match was a relief to him and to many chess enthusiasts, who saw it as an affirmation of human mental superiority.4

The next year, Kasparov was invited back to play a redesigned Deep Blue, nicknamed “Deeper Blue.” Kasparov agreed and, in another series of six games, Deep Blue defeated Kasparov 3½ to 2½. The crucial moment in the deciding game came when Kasparov, playing black, made an error. Using a strategic system known as the Caro-Kann Defence, Kasparov transposed two moves. The computer saw the error, took advantage, and forced Kasparov’s resignation within a handful of moves. Kasparov suggested that the computer had cheated, and many believed him, accusing IBM of trickery that would invalidate the outcome, but subsequent analyses have found no evidence of this. Writing about the game in The Conversation, computer scientist Mark Robert Anderson explains: “The reality was that Deep Blue’s victory was precisely because of its rigid, unhumanlike commitment to cold, hard logic in the face of Kasparov’s emotional behavior.
This wasn’t artificial (or real) intelligence that demonstrated our own creative style of thinking and learning, but the application of simple rules on a grand scale.”

The turning point in the match came in the second game. Kasparov became frustrated when the computer failed to fall into a gambit known in chess as a “poisoned pawn.” The computer’s ability to avoid responding to the strategic move troubled Kasparov and many others, who argued that the computer’s strategy could not have been calculated but was intuitive, the kind of thinking that belongs firmly in the realm of humans. In reality, the intuitive move had been the result of a calculation. Programmers had given Deep Blue a strategy in which


the computer would avoid creating what is called an “open file,” which is an empty path for rooks or queens to attack. Subsequent analysis has shown that, had Kasparov kept his head, he could have forced the computer into a draw.5
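Deep Blue’s actual system combined custom chess hardware with a handcrafted evaluation function, but the deep look-ahead it relied on can be sketched in a few lines as minimax search with alpha-beta pruning. This is a generic illustration, not IBM’s method in detail; the `moves`, `apply_move`, and `evaluate` callbacks are hypothetical hooks that a real engine would supply:

```python
import math

def minimax(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    """Depth-limited minimax with alpha-beta pruning.

    `moves(state)` yields the legal moves, `apply_move(state, m)` returns
    the resulting position, and `evaluate(state)` scores a position from
    the maximizing player's point of view. All three are caller-supplied.
    """
    legal = list(moves(state))
    if depth == 0 or not legal:
        return evaluate(state)
    if maximizing:
        best = -math.inf
        for m in legal:
            best = max(best, minimax(apply_move(state, m), depth - 1,
                                     alpha, beta, False,
                                     moves, apply_move, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:  # opponent will never allow this line; prune it
                break
        return best
    else:
        best = math.inf
        for m in legal:
            best = min(best, minimax(apply_move(state, m), depth - 1,
                                     alpha, beta, True,
                                     moves, apply_move, evaluate))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best
```

Each additional ply of search multiplies the work by the branching factor, so pruning rules like the alpha-beta cutoff above (plus, in Deep Blue’s case, specialized hardware) are what make deep look-ahead feasible in practice.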

Go For It While chess might be the most famous strategy game in the Western world, it is not the most popular. The most popular is also the world’s oldest strategy and board game, known as “Go” or “Weiqi.” Go was first developed in the Zhou Dynasty (1050–771 BCE) and has been played by thousands around the world since. It is estimated that, worldwide, there are 300 times more Go players than chess players. Go is a two-player abstract strategy game played on a 19 × 19 grid. Unlike chess, pieces in Go, which are little black or white stones, do not move on the board, they are simply placed in a position on the board, after which they cannot move. The idea is to place one’s stones on the intersection of lines crisscrossing the board. Players cannot pass over other player’s pieces and so, by positioning one’s pieces, a player divides the board into territories. Stones can be captured by other players, if the stone can be completely surrounded. The player who claims more than 50 percent of the board wins. Between tournament-level players, an advanced game can take more than six hours to complete.6 From a mathematical standpoint, Go is more complicated than chess. Each move in Go can involve hundreds of different options, with hundreds of different ways that one’s opponent might respond. In chess, after the first two moves, it is calculated that there are 400 possible next moves that an opponent can take. After the first two moves are completed in a game of Go, there are around 130,000 possible moves remaining. Because of the variety of ways that a Go game can proceed, the total number of permutations in a Go game is incalculable. Mathematicians have found that there are more possible positions that can be achieved on a Go board than there are atoms in the known universe. Because of the incredible mathematical complexity of Go, it was much more difficult to program a computer that can play Go at higher levels. 
In part, this is because Go is too complex for the approach known as “brute force,” which is where a computer is programmed with a map of all possible game options. IBM’s Deep Blue beat Kasparov using brute force, calculating all possible options and choosing optimal moves from these options. This is impossible, even for a computer, when it comes to Go. Human players must rely on intuition when deciding on moves, making decisions based on an instinctive feeling rather than on


conscious reasoning. Thus, although Go is a strategy game, it is a strategy game that depends on the human capability to reason and to feel. Given this complexity, it is unsurprising that it took programmers longer to invent a program that could convincingly play Go than it did for chess. It was not until 2015 that an AI system of sufficient complexity was developed: AlphaGo, created by Google’s DeepMind subsidiary. The key to the system was that it was a learning computer. First, the AlphaGo system was trained on 30 million separate moves taken from games played by human players. This enabled the computer to predict the next move that a human player would make around 57 percent of the time. But the real key to the program was AlphaGo’s ability to learn new strategies beyond what it had absorbed from human players. To achieve this, engineers built the computer around two different but interrelated systems. First, the computer was able to simulate each candidate move and then play the game out to completion, rejecting moves that resulted in failed scenarios. Second, AlphaGo was equipped with neural networks, a kind of AI technology that loosely mimics the processes of the human brain.7 In 2016, AlphaGo beat three-time European Go champion Fan Hui in five straight games and then defeated one of the world’s leading players, Lee Se-dol, four games to one. Not only had AlphaGo become the first AI to play against and defeat professional Go players, it had also proven that the system Google’s programmers created was sufficient to beat the world’s best. Lee had a deeply emotional reaction to his defeat at the hands of AlphaGo and, in 2019, announced his retirement from the game. In this 2019 article from The Verge, James Vincent discusses the psychological crisis brought about by Lee’s loss.



“FORMER GO CHAMPION BEATEN BY DEEPMIND RETIRES AFTER DECLARING AI INVINCIBLE”
by James Vincent
The Verge, November 27, 2019
Source Document

‘Even if I become the number one, there is an entity that cannot be defeated’

The South Korean Go champion Lee Se-dol has retired from professional play, telling Yonhap news agency that his decision was motivated by the ascendancy of AI. “With the debut of AI in Go games, I’ve realized that I’m not at the top even if I become the number one through frantic efforts,” Lee told Yonhap. “Even if I become the number one, there is an entity that cannot be defeated.”

For years, Go was considered beyond the reach of even the most sophisticated computer programs. The ancient board game is famously complex, with more possible configurations for pieces than atoms in the observable universe.

This reputation took a knock in 2016 when the Google-owned artificial intelligence company DeepMind shocked the world by defeating Se-dol four matches to one with its AlphaGo AI system. The games had a global impact, alerting the world to a new breed of machine learning programs that promised to be smarter and more creative than AI of old.

Lee, who was the world’s number one ranked Go player in the late 2000s, initially predicted that he would beat AlphaGo in a “landslide” and was shocked by his losses, going so far as to apologize to the South Korean public. “I failed,” he said after the tournament. “I feel sorry that the match is over and it ended like this. I wanted it to end well.”

Despite the outcome, Go experts agreed that the tournament produced outstanding play. AlphaGo surprised the world with its so-called “move 37,” which human experts initially thought was a mistake, but which proved decisive in game two. Lee made his own impact with his “hand of God” play (move 78), which flummoxed the AI program and allowed Lee to win a single game. He remains the only human to ever defeat AlphaGo in tournament settings. (During training AlphaGo lost two time-capped games to Go player Fan Hui.)

Since the tournament, though, DeepMind has only improved its AI Go systems. In 2017, it created AlphaGo Zero, a version of the program which surpassed even AlphaGo. While the original AI learned to play Go by studying a dataset of more than 100,000 human games, AlphaGo Zero developed its skills by simply playing itself, over and over. After three days of self-play using hugely powerful computer systems that let it play games at superhuman speeds, AlphaGo Zero was able to defeat its predecessor 100 games to nil. DeepMind said at the time that AlphaGo Zero was likely the strongest Go player in history.

In a statement given to The Verge, DeepMind’s CEO Demis Hassabis said Lee had demonstrated “true warrior spirit” in his games with AlphaGo. Said Hassabis: “On behalf of the whole AlphaGo team at DeepMind, I’d like to congratulate Lee Se-dol for his legendary decade at the top of the



“Former Go Champion Beaten by DeepMind Retires after Declaring AI Invincible” continued

game, and wish him the very best for the future . . . I know Lee will be remembered as one of the greatest Go players of his generation.”

According to Yonhap, Lee isn’t completely giving up on playing AI, though. He plans to commemorate his retirement in December by playing a match against a South Korean AI program called HanDol, which has already beaten the country’s top five players. Lee will be given a two-stone advantage. “Even with a two-stone advantage, I feel like I will lose the first game to HanDol,” Lee told Yonhap. “These days, I don’t follow Go news. I wanted to play comfortably against HanDol as I have already retired, though I will do my best.”8

Although Lee’s existential angst may be understandable, it is perhaps misplaced. Even prior to his defeat by AlphaGo, Lee did not have the “best” human brain. Human minds are shaped by the activities in which their owners engage, by experience and exposure to new situations, and by emotional stability. A human mind is a complex thing capable of a wide variety of tasks, and mastering one of those tasks does not prove one’s superiority at anything but that task, and only in comparison to other humans who have attempted the same task. Lee and thousands of others have labored to master Go, engaging in complex strategic thought while also employing intuition and imagination to defeat their opponents, but the skills developed through that process would not enable them to perform other intellectual tasks at the same level. Nor are the intellectual arts involved in strategy superior to the intellectual processes involved in producing works of creative art. To view Lee’s loss to AlphaGo, or Kasparov’s loss to Deep Blue, as a victory of machine over human is subjective. There are many activities in which humanity cannot be the best. Humans participate in contests that involve jumping, for instance, despite the fact that no human can ever best the red kangaroo in jumping prowess. Humans compete in running contests, though they can never hope to match the speed or endurance of many animals. Humans compete in swimming contests despite never being able to achieve the swimming prowess of even the simplest fish. Humans have long been willing to accept that the species is not physically superior to many animals but have been far more reluctant to abandon the idea that humanity has superior brains.

Robotics_2.indd 276

10/2/2020 12:27:44 PM

The Ultimate Strategy Machine | 277

The human brain is a complex machine capable of all manner of intellectual activity, but it is an imperfect machine. Many systems of ancient wisdom reflect this basic fact of human existence, and many philosophers have urged humanity to embrace the idea that the quest for perfection provides the reward, rather than the attainment of that unattainable goal. Neither the human body nor the human mind can ever be perfected for any single task, because both must also be able to do other things. A human cannot spend 100 percent of life performing a single task to perfection, because a human must eat and drink, engage in social interactions, and perhaps perform at a job, in addition to hundreds of other physical and mental activities. Humans are not “built” with any specific purpose. For a human to win a game of strategy might require emotional stability, intuition, even bravery. The player must balance their knowledge of and concentration on the game against a variety of other factors.

World champion Go player Lee Se-dol, by LG Electronics, via Wikimedia.

While Se-dol was playing AlphaGo, for instance, parts of his mind were busy with other tasks. Perhaps he was imagining how it would feel to win or lose. Perhaps he was thinking about how his fellow Go players would react to the outcome, or thinking of loved ones. Perhaps he was hungry or thirsty, or perhaps one of his feet had fallen “asleep” and he needed to flex it to encourage blood flow. None of these qualities belong to a machine. Machines can start with masses of data that can be recalled with perfect accuracy, often instantly. From there, the machine needs only to follow a set of predetermined guidelines to achieve victory. Its decisions are consistent and dispassionate, because the machine does not care. It does not “want” to win, and it does not care if it loses.
It approaches each move in the game with the same level of urgency, the same level of attention, and without any distracting emotional complications. In fact, even the claim that Se-dol was “beaten” by AlphaGo is perhaps a mistaken use of the term. Although the computer performed better than Se-dol, the computer was not really “playing” the game at all. The computer simply followed a set of rules and patterns governed by its internal algorithms. This is not intelligence or intuition; it is merely the illusion of those qualities. By contrast, the fact that Se-dol, with his imperfect human brain, could even hope to compete with the computer reflects his unusual intellectual and intuitive prowess. Se-dol proved that he has a finely honed mind and an uncanny understanding of strategy, and thus he demonstrated intelligence and intuitive skill. The computer followed a set of directions, armed with exact data from millions of Go matches. It was neither intelligent nor intuitive, but it performed exactly as it was programmed to perform.

Intellect versus Creation

Art, whatever form it takes, is an expression of emotion, and so it will not be possible for a machine to create art until machines have independent emotion. Some argue that machines can already create art. Technologically minded artists have built machines that can paint or sculpt, or that use mechanically determined movements to create works of art, but these works are art because the artist’s emotional perspective is part of the process. The machine does not, therefore, make art; it is a tool utilized by an artist to create art. In the future, it may well be possible to create machines that can emulate human emotion and can, therefore, make art, but making this leap requires a very different set of decisions than have been made in the quest for AI as of the 2020s. It can be argued that, until science can create beings with emotion, sentience, and free will, humans and machines cannot legitimately compete at any task, be it physical or intellectual. Garry Kasparov and Lee Se-dol, therefore, were not actually beaten by machines; they were beaten by entire teams of engineers and researchers, whose accomplishments were built on those of the teams that came before them, generation after generation. Kasparov and Se-dol were not playing an “entity” when they competed in their now-famous matches; they were playing against the collective accomplishments of brilliant human minds. From this perspective, it is no surprise that both men lost their contests, nor were those losses legitimate reflections on their intellectual skills.

Robotics_2.indd 278

10/2/2020 12:27:52 PM

The Ultimate Strategy Machine | 279

CONCLUSION

Despite human laments over losing mind-sport competitions to robots, robots have not truly beaten humans at anything. Rather, human engineers have created machines capable of performing certain tasks more efficiently than a human brain can. Machines have long been “superior” to human effort in a number of physical tasks, and the fact that machines are now able to outperform humans at certain intellectual tasks should come as no surprise. These robotic achievements are not signifiers of humanity’s decline or replacement, but rather a testament to human intelligence and, more importantly, imagination, something that robots, as yet, do not possess.

DISCUSSION QUESTIONS

• How do robots reflect the collective intelligence of humankind?
• Do intelligent robots pose a threat to humanity? Why or why not?
• How is the achievement of Deep Blue and AlphaGo a reflection of the history of global research into computers?

Works Used

Anderson, Mark Robert. “Twenty Years on from Deep Blue vs. Kasparov: How a Chess Match Started the Big Data Revolution.” The Conversation, 11 May 2017, theconversation.com/twenty-years-on-from-deep-blue-vs-kasparov-how-a-chess-match-started-the-big-data-revolution-76882.

Averbakh, Yuri. A History of Chess: From Chaturanga to the Present Day. Russell Enterprises, Inc., 2012.

Friedel, Frederic. “Reconstructing Turing’s ‘Paper Machine.’” ChessBase, 23 Sept. 2017, en.chessbase.com/post/reconstructing-turing-s-paper-machine.

Libby, Matthew. “The Brain Across the Table: Garry Kasparov vs. Deep Blue, 1997.” Medium, 9 June 2019, medium.com/@matthewlibby_75648/the-brain-across-the-table-garry-kasparov-vs-deep-blue-1997-7904f77cebf7.


Markushin, Yury. “5 Chess Myths That Most People Believe.” The Chess World, 19 Oct. 2015, thechessworld.com/articles/general-information/5-chess-myths-that-most-people-believe/.

Moskowitz, Marc L. Go Nation: Chinese Masculinities and the Game of Weiqi in China. U of California P, 2013.

Muoio, Danielle. “Google’s AI Is So Advanced It Beat a Human at a Game No Computer Has Ever Mastered.” Business Insider, 28 Jan. 2016, www.businessinsider.com/googles-artificial-intelligence-beats-human-at-go-2016-1.

Muoio, Danielle. “Why Go Is So Much Harder for AI to Beat Than Chess.” Business Insider, 10 Mar. 2016, www.businessinsider.com/why-google-ai-game-go-is-harder-than-chess-2016-3.

Vincent, James. “Former Go Champion Beaten by DeepMind Retires after Declaring AI Invincible.” The Verge, 27 Nov. 2019, www.theverge.com/2019/11/27/20985260/ai-go-alphago-lee-se-dol-retired-deepmind-defeat.


19. HOMEBOTS: Robots in the Home (1998–Present)

Introduction

Throughout the years, one of the most common visions of the future of robotics has focused on the role robots might play in the home. From the robotic maid featured on the futuristic cartoon The Jetsons to the increasingly realistic concept of outfitting homes with artificial intelligence that can monitor conditions and manage home security, domestic robots have been a common theme in science fiction for many decades. The reality of robots in the home began with robotic toys but has expanded to include robotic pets and even robotic tools that help clean and manage the home. Today’s robotic home companions and tools range considerably in price and capability, representing both the practical and the recreational potential of home robotics. Advancements in both of these fields have made the idea of domestic robotics more of a reality for many Americans and others around the world and hold clues as to how home robotics systems will evolve in the future.

Topics Covered in this Chapter Include:
• Robotic pets
• The Roomba
• The Furby
• Sony’s AIBO

This Chapter Discusses the Following Source Document: Carey, Bridget, “My Week with Aibo: What It’s Like to Live with Sony’s Robot Dog,” CNET, November 28, 2018


282 | ROBOTICS & ARTIFICIAL INTELLIGENCE

The children’s cartoon series The Jetsons, which initially ran for a single season in 1962 and 1963, has been called the prime example of American dreams of a “techno-utopia.” The series follows the Jetson family living in Orbit City, a sprawling cityscape designed in the “Googie” style, now often called “retrofuturism,” in which all the buildings were raised above ground level and all traffic was aerial. The residents of Orbit City navigated their lofty metropolis via flying cars or crossed the city on massive aerial conveyor belts, and communicated with one another via video phones. George Jetson’s largely unspecified job at a company called “Spacely Space Sprockets” saw him sitting at a console of sorts, pushing buttons and flicking levers. It is explained in at least one episode that Jetson works a three-hour day, three days a week.1

The Jetsons (clockwise from left): Rosie the robot, George, Jane, Judy, Elroy, and the family dog, Astro, via Wikimedia.

Technology, in the Jetsons universe, was ubiquitous. The home was filled with technological devices performing all kinds of familiar household tasks, such as vacuuming, cleaning, and cooking dinner. A small remote vacuum could be activated to seek out and clean up messes. Food was prepared automatically by a machine that could be “programmed” with punch cards to serve a wide variety of items, including fried chicken, pizza, and something called “flying pizza.” When it came time for grooming, the Jetsons sat in a chair where robotic arms combed their hair, helped them put on their clothes, brushed their teeth, and dusted them with robotic brooms. The automated home that the Jetsons enjoyed is part of the inspiration behind smart home technology: gadgets that link home systems to WiFi networks and automated programming to control lighting, utility use, and other features of the home both remotely and automatically. Mobile digital devices and personal computers have meanwhile brought the Jetsons’ video phone system to life, and, while flying cars are still a distant idea that will most likely never come true, some of the Jetsons’ robotic household helpers are close to reality. The most famous robot of the Jetsons’ world, the family’s robotic maid Rosie, is still something to which the world of robotics only aspires.

It is doubtful that the writers and producers of The Jetsons intended to say anything meaningful about AI and its role in the world, but the character of Rosie was depicted as the kind of robotic “helper” that American families might one day enjoy. Rosie was not supposed to be programmed for emotion, but in the eighth episode she forms a romantic relationship with a makeshift robot named Mack, constructed from buckets and an old filing cabinet by the building’s super. Mack is deactivated due to malfunction, but the relationship is allowed to continue via a daily chat with Mack’s consciousness. Interestingly, Rosie is depicted as a far cry from the ideal robot. It is explained that she is an older model, though the family decides not to replace her because of their feelings for her. The robots of the Jetsons’ universe were also frequently malfunctioning, a feature no doubt meant to reflect the unreliable computer technology available in the early 1960s. Malfunctions aside, the idea of a household Rosie continued to inspire engineers and roboticists into the modern era. However, it quickly became clear that the technology needed to create a functional household robot was beyond the science of the time.2

Jumping the Gun

In the late 1970s, Quasar Industries made major news when the company announced that it was ready to begin selling robots as domestic assistants. Quasar published numerous articles and gave television demonstrations of its machines, which appeared to show robotic servants vacuuming, working in the kitchen, and cleaning the house. Two basic models were advertised: Klatu, the “Robot Butler,” and the “Maid Without Tears,” a feminine-looking robot that was supposed to be able to cook and clean. Quasar claimed that it was about to begin selling these models to the general public and that its voice-activated, multi-purpose machines, which would retail for $4,000.00, were poised to become the technology of the future. A 1978 article by Gene Beley, head of Android Amusement Corporation, claimed:

Robots are going to be part of our everyday lives, and Quasar Industries seems to have a head start on getting us there. Gene presents the story of Quasar in a light-hearted fashion, and whets the imagination for future developments. . . .

Nine years before Star Wars jetted through the movie theaters of the world, introducing two lovable robots, Quasar Industries, Inc. of New Jersey gave birth to a full-size working ’droid, Klatu. Even though Klatu was the result of more than 40 designs submitted by an eight-man team of engineers and scientists, of whom nearly all succumbed to death or serious illnesses before his successful completion, there was no worldwide media fanfare. In fact, Klatu was quickly put to work to help pay R&D costs. From the very beginning, Quasar Industries began leasing the robot out to corporations and others as an attention-getting marketing tool. To this day, Klatu and his 31 brother and sister robots lend their 15-square-foot conical-shaped bodies for displaying various graphics and logos of major corporate clients like Panasonic, Ingersoll-Rand, I.T.T., major banks, and others who can afford their star-billing rates. Currently, they are leasing for $700–$1,500 a day, plus expenses. These robot stars fly first class on commercial jets when they travel to engagements. Moreover, each robot is accompanied by two robot technicians wherever they travel.

Quasar Industries now has 32 working ’droids, which they call Sales Promotional Androids, or SPA’s for short. In addition, Quasar Industries has working prototypes of the Domestic Android robot-servant, which will be marketed within two years for approximately $4,000; a seven-foot-high security-guard robot with a $75,000 price tag; and a Para-Medic Robot that will work in hospitals that can afford the $50,000 tariff.3

The goal, for Quasar, was to attract investment capital, and it is unclear how well the company managed to do this. No robots were ever sold, because the company was entirely fraudulent; anyone who knew anything about the state of computers and robotics in the late 1970s recognized this immediately.
“I don’t think there’s a ghost of a chance that a mad scientist could come up with this, no less Quasar,” Marvin Minsky, AI pioneer and MIT professor, told the New York Times in 1977. “They can’t realize all these claims, there’s no doubt about that. If you spent a lot of money, you might get something in 1990 that could do some of the things they claim.”4

In reality, Quasar Industries’ Klatu was operated by remote control, with one person controlling the “robot’s” movements and another providing a remote voice. In exhibitions held in department stores and other venues, the robot was shown moving in response to voice “commands.” The Maid Without Tears was, by contrast, a woman in a robot suit who walked around during some of the exhibitions but did not display any actual functionality.

Advanced Toys

Though unscrupulous companies like Quasar claimed that the technology for household robots was available in the late 1970s, other companies decided to shoot for something a little more achievable: toys that incorporated some functional elements of actual working robots. The robot toy trend of the 1980s was unusual in a number of ways. The toys were far more expensive than what one might expect to pay for a child’s toy and so were aimed more at adults, who might consider purchasing one for the family. The Omnibot 2000, manufactured by the Japanese company Tomy, was arguably the first legitimate robotic device available to American consumers. The robot had limited functionality. It ran on a 6-volt battery and two AA batteries, had a clock and a cassette player, and could be driven around the home via remote control. The robot’s claw-like hands could be used to lift small objects.5 Omnibot 2000 cost $600.00 when it debuted in 1985, the equivalent of $1,400.00 in 2020, so the toy was not one for the average Christmas tree. Rather, Omnibot appealed to upper-class families in which one or more members had a passion for computers or electronics. More specifically, devices like the Omnibot were aimed at adults who had grown up with The Jetsons or Forbidden Planet, leaving them with a lasting fascination with the idea of robotic home companions. They also appealed to the first Star Wars generation, who had been attracted to robotics and electronics by fictional helper droids like R2-D2.6 The same year that Tomy introduced Omnibot, Arctec Systems introduced what it billed as an entirely “autonomous robot,” the pillar-shaped GEMINI. The GEMINI robot was a self-navigating, self-charging robot with multiple functions. Advertising claimed that the family could gather around and say “GEMINI” loudly, and the robot would respond, “GEMINI LISTENING,” and enter voice command mode.
Unlike Japan’s Omnibot or the fake Quasar hoaxbots of the late 1970s, GEMINI was a voice-activated machine, employing technology that would not become widely used in the home until the introduction of Internet-capable virtual assistants like Siri and Cortana decades later. GEMINI used a now-defunct system called VOCOL that enabled the robot to follow relatively complex instructions. Further, the robot came with a version of BASIC installed, and the accompanying instruction manual provided the information needed to program the robot for new functions. Users were even encouraged to share their coding innovations with the parent company, potentially to be installed in new models.
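The wake-word interaction described above, in which the robot idles until it hears its name, acknowledges the speaker, and then accepts a single command, can be sketched in a few lines of modern code. This is a hypothetical illustration only: the command phrases and replies below are invented, and Arctec’s actual VOCOL software worked very differently and is long defunct.

```python
# Hypothetical sketch of a wake-word command loop in the spirit of
# GEMINI's advertised behavior: ignore everything until the wake word,
# acknowledge with "GEMINI LISTENING," then match one spoken phrase
# against a small command table. Phrases and replies are invented.

COMMANDS = {
    "come here": "driving toward speaker",
    "stop": "halting motors",
    "what time is it": "announcing the time",
}

def respond(utterances):
    """Return the robot's replies to a stream of heard phrases."""
    replies = []
    listening = False
    for heard in utterances:
        phrase = heard.strip().lower()
        if not listening:
            if phrase == "gemini":            # wake word detected
                replies.append("GEMINI LISTENING")
                listening = True
        else:
            replies.append(COMMANDS.get(phrase, "COMMAND NOT RECOGNIZED"))
            listening = False                 # one command per wake-up
    return replies

print(respond(["gemini", "come here", "gemini", "stop"]))
```

The design mirrors why wake words were (and still are) useful: the expensive speech-matching step runs only after a single, easily recognized trigger word, which kept 1980s hardware from constantly misfiring on ordinary conversation.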


While GEMINI had the kind of features that Americans had long dreamed about in a robotic companion, the cost was daunting. The basic GEMINI system cost a whopping $6,995.00 in 1985, the equivalent of nearly $17,000.00 in 2020. GEMINI was something that only the wealthiest families, or the recklessly non-frugal, could obtain. Arctec Systems did well enough that two additional models of GEMINI were introduced, including a more complicated one whose features were suited to computer science experts. High cost aside, GEMINI was as close as the world came, in the 1980s, to the kind of home robotics long promised by science fiction enthusiasts and predictors of humanity’s techno-utopian future, and even this was a far cry from the functionality needed to make robots genuinely helpful in the home rather than simply a form of entertainment.7

Functional Robot Helpers

From the brief home robot fad of the 1980s, robotics corporations took the idea of domestic robots in a number of different directions. The company iRobot, named after Isaac Asimov’s now-famous series of robot fiction, was founded in 1990 by former Massachusetts Institute of Technology (MIT) roboticists Colin Angle, Helen Greiner, and Rodney Brooks, and eventually offered one of the most functional domestic robots in history, one that remains among the most popular robotic assistants in America: the Roomba. The company initially worked on developing extremely high-tech robots for professional applications. Its first design, Genghis, debuted in 1991 and was aimed at space exploration. The next model, Ariel, named after the lead character in the film adaptation of Hans Christian Andersen’s The Little Mermaid, was designed to detect and eliminate mines in the surf zone.8 In the late 1990s, iRobot became a major player in developing robotics systems to work in dangerous situations, including bomb disposal. Its PackBot robots were useful in searching through areas affected by disasters and were later purchased by the US military for assistance in search and rescue operations. PackBots helped search the wreckage of the World Trade Center after the 2001 terrorist attacks, and since that time the US military has purchased thousands of PackBots to search for ordnance, assist in rescue operations, and conduct surveillance.9

A US Navy sailor with a PackBot, part of an explosive ordnance disposal team. US Navy photo by Jhi L. Scott, via Wikimedia.

While iRobot, which spawned a number of subsidiaries, has played a major role in advancing serious robotics, in 2002 the company debuted the most famous robot of the twenty-first century: the Roomba. The Roomba was a success because, in general, it did what it said it would do: vacuum autonomously. The technology that made this possible came from the same research that produced iRobot’s advanced military robots. The Roomba’s mobility system, a two-wheeled design affixed to a small disk-shaped body, is optimized to allow the robot to move through crowded areas. The robot has a built-in ability to detect obstacles and can also identify when it is about to hit a steep drop.

Roombas have been a success since their introduction in 2002. By Nohau, via Wikimedia.

The Roomba was an immediate success, and the company released a host of new models with enhanced functionality. Third-generation Roombas, for instance, can seek out and attach to a charging station using an infrared beacon and also have upgraded cleaning capabilities. While the initial model followed a simple movement algorithm, later models were equipped with advanced programming to follow a variety of patterns, such as spiral cleaning, wall-following, and a random course around a room based on impacts with objects. Versions introduced in the 2010s are equipped with a visual system that allows the robot to create and follow a pixelated map of rooms.10 As of 2020, the makers of the Roomba and their competitors have their sights set higher. Robots might be programmed to clean up after children; mop, scrub, and disinfect surfaces; retrieve and sort mail and other deliveries; and perform a wide variety of other household functions. Technology has advanced to the point that the disk-like robotic vacuum featured in a few episodes of The Jetsons has become a reality, but home robotics is still a long way from creating a realistic Rosie. Estimates of how long it might take to develop a functional robot like Rosie range from 5 to 30 years, which means that no one is really sure if or when this technology can be developed.
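The early movement strategy described above, in which the robot drives in a straight line until it hits something and then turns to continue in a new direction, can be illustrated with a toy simulation. This is a rough sketch on a grid, not iRobot’s actual algorithm; the room size, step count, and random seed are arbitrary choices for the example.

```python
import random

# Toy simulation of a "random bounce" coverage strategy on a 10 x 10 grid.
# The robot drives in a straight line, cleaning each cell it enters; when
# it would leave the room (a "bump"), it picks a new direction at random.

def clean_room(width=10, height=10, steps=400, seed=42):
    rng = random.Random(seed)
    headings = [(1, 0), (0, 1), (-1, 0), (0, -1)]
    x, y = width // 2, height // 2          # start near the room's center
    heading = headings[0]
    cleaned = {(x, y)}
    for _ in range(steps):
        nx, ny = x + heading[0], y + heading[1]
        if 0 <= nx < width and 0 <= ny < height:
            x, y = nx, ny                   # move forward, clean this cell
            cleaned.add((x, y))
        else:
            heading = rng.choice(headings)  # bump: turn a random direction
    return cleaned

if __name__ == "__main__":
    covered = clean_room()
    print(f"covered {len(covered)} of 100 cells in 400 steps")
```

Even this crude strategy eventually passes over most of a simple room given enough time, which is why it worked well enough for the first Roomba; the wall-following and spiral patterns of later models cover floors more systematically.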

Robots Need Love Too

Around the same time that iRobot was building the technological achievements that allowed it to create the Roomba, the Japanese company Sony was working on something far different. In 1999, Sony introduced the first in what became a trend of “robotic pets”: the mechanical dog AIBO. Calling AIBO an “entertainment robot,” Sony described it as an autonomous robot that could respond to external stimuli but would also choose actions based on “its own judgment,” which actually referred to a program that randomized and created patterns of behavior designed to give each AIBO its own personality. AIBO was capable of independent movement, could position its head and ears to create “expressions,” and featured a way for individuals to store and retrieve programs for the machine through a memory stick. Microphones, speakers, movement sensors, and infrared sensors enabled the robot to move through an environment while avoiding objects.11

Sony AIBO robotic dogs were released to the consumer market beginning in 1999. Above, AIBOs in a RoboCup event. By learza (Alex North), via Wikimedia.

In the years since 1999, AIBO has endured and become more complex. In this 2018 article from CNET, journalist Bridget Carey discusses spending a week with Sony’s newest AIBO robot.


Homebots | 289

“MY WEEK WITH AIBO: WHAT IT’S LIKE TO LIVE WITH SONY’S ROBOT DOG”
by Bridget Carey
CNET, November 28, 2018
Source Document

What do you do with a $2,900 robot dog?

Sony’s Aibo won’t help wake you up out of bed when it detects daylight. It won’t use its internet connection to deliver the morning news report or use its cameras to watch for intruders. Voice commands won’t cue up playlists or set reminders or even fetch a pair of slippers on Amazon. This one-of-a-kind robotic pet doesn’t act like other smart-home gadgets. Packed with sensors, cameras and a wagging tail, it instead will trot around your home programmed for just one mission: companionship.

I got to spend a week living with Sony’s wonder robo-pup. The dog—a totally redesigned and updated version of the robotic pet that Sony sold from 1999 to 2006—begins shipping to US homes in December. But my brief test run gave me a chance to see what it’s like for my family to live with a robot—and how different people (and animals) react to this new species of pet.

I’ve seen demos of Aibo, but a week with the robot gave me answers you can’t find on a tech-event show floor. I saw my 2-year-old daughter fall in love with a new friend. I saw skeptical senior citizens crack smiles and forget their troubles. And I saw other people’s real dogs . . . well, not so impressed with the whole spectacle.

There was one catch with my test. Sony didn’t have the “My aibo” app ready for me to try. The app, for Android and iOS, is how owners will set the dog’s gender and eye color, teach it new tricks, and even take photos with the camera in its nose. I could interact with Aibo using my voice and touch—and a reviewer’s guide booklet was my cheat sheet to learning possible voice commands. (There were more than 50!)

If you’re curious to adopt your own Aibo, here’s a breakdown of what you’ll need to know, and lessons from my brief time with the bot.

The Famous Robot Dog, Evolved

Today’s Aibo is much different from the original model that debuted in 1999. After being discontinued in 2006, Sony revived the concept by giving Aibo a whole new design and smarts. He uses artificial intelligence and deep learning technology to remember 100 different people and learn what makes different people happy based on their reactions. Aibo loves praise with a nice rub on the head, chin and back—or give him some positive verbal feedback. (“Good boy!”) Teach him tricks and watch him respond to voice commands.

Cameras and sensors on his front side help the dog sense nearby people, as well as find his signature pink toy ball, bone and charging station. A camera near his bum points to the ceiling to map the layout of your home, so over time he learns how to get around. And of course advanced robotic animations and OLED-screen eyes bring it all to life.

When he wakes up from a nap, he shakes his head and stretches his legs. He’ll randomly scratch an ear or entice you to play by putting both paws in the air and sit on his hind legs. This pup doesn’t just wait for you to do something—he knows how to draw a crowd.


Japan was first to get Aibo earlier this year, with the US version (the “First Litter Edition”) revealed in August. Available for preorder, the US model includes a three-year wireless service plan with AT&T in its lofty $2,900 price.

Back in August, at a New York launch event for the US edition of Aibo, I asked Sony President and COO Mike Fasulo about the target audience for Aibo. He’s designed to be a family friend, and a notable number of customers in Japan were buying Aibo as a companion for their elderly family members. He said Sony thinks the same could be true for the US market.

Sit. Stay. Save to the Cloud?

Yes, Aibo comes with a wireless internet plan. And he can connect to Wi-Fi. So is having an internet-connected camera and microphone roaming around your home going to be a privacy nightmare? From what Sony tells me, Aibo doesn’t seem to share too much with the company:

• Aibo does not record audio. When Aibo hears a command (“sit”), the function that makes it respond is all programmed locally on the dog. So there’s no recording of conversations or phrases being sent to Sony’s servers in the cloud. What is saved to the cloud is data on the history of the dog’s interactions, such as how it responds to what people say, and if those people showed facial expressions of being happy or sad. It also saves data on measurements of its home, like the distance between walls and other rooms. Aibo’s personality evolves the more you live with it. That progress is backed up to the cloud.

• If you take a photo with the camera in Aibo’s nose, those photos are saved to the cloud and are accessible from the Aibo app.

• Aibo does not have the ability to take video. (At least not for now.)

• If Aibo is not able to connect online, it will still respond to voice commands. But any interaction data will not be backed up to the cloud, so it will not learn from those offline experiences.

• All data can be wiped from the dog and Sony’s servers if the owner wants to start fresh. Clearing all data will require contacting customer service.

Every Trick in the Book

Aibo understands more than 50 different voice commands and phrases, with potential for the list to grow over time with updates. But that doesn’t mean he does 50 tricks. Rather, there are multiple things you can say to make Aibo perform over a dozen tricks, which include: •

Shake hands



High five



Sit down



Play dead



Dance



Lay down



Kick his ball



Beg



Fetch his bone



Roll over



Sing

10/2/2020 12:28:41 PM

Homebots | 291

continued •

Take a photo



Go to the charging station

There's also a custom trick. You can teach Aibo to move his front legs around in a pattern you set—sort of like making your own doggie dance moves. Sony says new tricks and features are in development, so the current roster of preprogrammed tricks is not static.

Along with tricks, Aibo reacts to different words of praise or discipline. (I can't understand why anyone would want to tell him he's a bad boy, but the option is there if you want to teach him not to do a certain behavior. You could also swat Aibo on the back to teach him he's done wrong, but hitting a robot puppy that's designed for entertainment just seems mental.)

Aibo also recognizes several phrasings for each trick, which makes it easier for guests to interact when they don't know the exact command to make the trick work. For example, if Aibo hears "hand," "shake," or "give me five," he'll raise a paw to shake hands. Or if you want him to play with his bone, you could say, "pass me the bone," "give me the bone," "bring me the bone," or "find the bone." Owners will be able to teach him new skills through an app. But since I didn't have access to the app in my review, Sony sent me a list of things I could say to Aibo—and while I was goofing off, I discovered one trick not on the list: saying "go pee-pee" made him lift his hind leg and make a pee sound. (Guaranteed to be a hit with the kids.)

Aibo doesn't need you to say his name before you say a command. That's different from other voice assistants that require a wake word like "Alexa" or "Hey, Google" before giving your statement.

Not every phrase on the list Sony provided would get a reaction in my test. I couldn't get him to respond to "roll over" no matter what I tried. He also wouldn't do anything if I said "play dead," but saying "bang bang" got him to instantly perform a dramatic death scene. When I asked Sony about why some phrases didn't get reactions, I was told each Aibo is different and may recognize certain commands better than others—but it also depends on the relationship with the owner. In other words, maybe my Aibo just didn't know how to roll over yet. Or I needed to spend more time with it? It could have been a programming quirk with my demo, but then again, is this part of the design? Sony wants you to believe the illusion that this pet is unpredictable—like a real puppy.

Programmed Puppy Love

And there are times you may find Aibo doing tricks without being prompted. One day while I was sitting on the couch ignoring Aibo, he walked over to his bone and picked it up in his mouth. He then put the bone down and pretended to pee on it. (He sure knows how to get my attention.)

When you're alone with Aibo, the magic of his tricks can wear off. You want to shake hands again? That's nice. Now please go away—I'm trying to watch Netflix. But even so, I never got tired of seeing others light up around Aibo. My daughter was always giddy playing with him, giving him hugs and kisses whenever he sang a song or gave her a paw. Count on a 2-year-old with an overactive imagination to make your heart melt over a robot.

Aibo's greatest trick? How its programming tricks you into caring for the contraption. You know it's a machine, but there's something about it that compelled me to give it praise when it nudged up against my leg, or pet it when it whimpered in a corner.


292 | ROBOTICS & ARTIFICIAL INTELLIGENCE

Maybe it's triggering that part of the brain that makes you want to keep your Tamagotchi alive as a kid, or make sure your Furby felt loved. You know it's not real, but you don't want to be the bad robot parent.

Do Aibos Dream of Electric Cats?

When you use Aibo, it's recommended you don't turn him off at night. So would Aibo know to be quiet while we slept? Well . . . sort of. I would place him on his charging station pad when it was time for bed. And if my family left the room, made no sound and turned the lights off, the dog would eventually turn off its eyes and go into a sleep mode.

But if you ever crept to the kitchen to get a snack—or in my case, had a noisy toddler wake up at 2 a.m. to go to the bathroom—then Aibo's two little glowing eyes would open and he'd start to make little yips, barks and panting sounds to let you know he was awake. To any tired parent trying to get their kid back to sleep, this was a nightmare scenario: "Yes, honey, that is Aibo, now please ignore him and go back to sleep." A few nights I would just turn off the dog to avoid the hassle.

At least he never left the charger base in the middle of the night. But then again, he also didn't leave the charger when we wanted to play in the morning. My daughter, the early riser of the bunch, would turn on the lights and talk and play with Aibo at 6 a.m., but he wouldn't budge off his station no matter how much she begged him to play. Aibo was awake and moving his head—he just didn't move his legs to leave the charger base. Usually he would get off the charging station on his own around 7 a.m.—sometimes in the middle of our breakfast when we were ignoring him. Sony says there is no programmed time for him to be awake and go to sleep, but that would be a very handy feature to have.

Powering up your Pup

Aibo is supposed to learn how to get to his bright-pink charging station on his own when the battery gets low (after two or three hours of activity). I kept it in the same corner of my living room, right in the space where we would always play. But he didn't actually go to his station on his own until the third day of living with me. When he ran out of power in the middle of a room, he'd just lie down in a position that looked like he was sleeping before powering down. As the owner, your only indication that he's out of juice is the small LED status light on the back of his neck—above the power button on his collar.

One time I left home on an errand and returned to find him limp on the floor without power, all four legs sprawled out in a nook between my bookcase and TV. When I tried to wake him back up, a flashing red light meant there was an error—something went wrong when no one was looking. But a quick reset and recharge got him back to normal.

Sniffing out the Surroundings

Aibo learns the layout of your home over time through a "loin camera"—one right above his tail on his backside that points up to the ceiling. He's also equipped with a fish-eye camera in his nose and several other sensors on his chest to help him detect obstacles and objects. But that didn't seem to stop him from getting caught in the same corners of my home again and again. Or walking right into walls. And I seemed to always be rescuing him from the space between my bookcase and TV set. I only had him for a week, so maybe he needed more time to learn the nuances of my furniture.

But Aibo wasn't a very curious puppy. He rarely left my living room unless we begged him to follow us down the hallway—and it took a while to coax him to enter a new room. If we left the room, instead of exploring the house, he would often get bored and go into a sleep mode.

Bots and Dogs, Living Together

I grew up with a dog as a kid, but I currently don't own a pet. However, my Aibo did come across two other living dogs during my trial: Tori, a Labrador retriever, and Bella, a Shih Tzu. Neither dog was impressed during their brief encounters. After a few barks and an unsatisfying sniff of Aibo's plastic butt, neither seemed to care what all the fuss was about, and both went on to ignore the machine.


Earlier this year, news outlets in Japan published reports of an experiment Sony conducted with 13 dogs that lived with Aibo. The dogs spent a few weeks living with the robot, and some were said to be more playful with Aibo as time went by—even acting like Aibo was a friend. Guess it depends on the dog.

An Impressive Feat of Robotics

Aibo is, without a doubt, a charming creation. But a $3,000 robot dog is not for everyone. Of course it will never be as satisfying as playing with a real puppy that can love you back. But this isn't meant to replace real pets with robots (you can't), or to see if Aibo can be as good as a real dog (he won't). That's not the point of Aibo. Rather, Sony's creation shows us how far a robot can go to offer entertainment and companionship. And on that benchmark, it far exceeded anything I've ever experienced before in robotics.

We are living in a time when tech companies are churning out devices to keep us glued to screens, as we scroll away our lives on social media to feel connected. And yet those same screens leave us feeling disconnected from the world in front of us. Perhaps that alone is what sets Aibo apart: Sony has created something that lifts us up away from a screen—and even lifts up hearts—in a moment of escape.12


Dimensions of the Home

With AIBO, Sony sidestepped the goal of creating domestic helper robots and instead contributed something quite different to the home environment: a robot companion. The technology involved in creating robots like AIBO may have potential beyond mere entertainment. That AIBO was developed in Japan is no accident, as Japan has been struggling with a large aging population. AIBO and machines like it may one day be used as emotional support companions for children, the elderly, the infirm, or even the mentally ill. Just as members of the family may easily be fooled into believing that AIBO is responding emotionally to their love, affection, and training, individuals with mental health issues and children may be excellent candidates for the use of mechanical "animals" as health aides.

Whether or not AIBOs ever move beyond the realm of entertainment, the idea behind AIBO—a robotic pet that introduces users to robotics through interactivity and play—provides an excellent example of how technology can be integrated into everyday life. Coupled with emerging trends in "smart home" technology, robots like AIBO can become part of an "Internet of Things," a world of interconnected, interlinked devices and gadgets that might actually bring people closer to the world of The Jetsons. From the practicality of the world's first robotic vacuum to the lovability of robot pets and companions, experiments with domestic robotics indicate that the potential for this technology does not hinge purely on functionality but also on the way that robotics can engage the mind and become part of the way that Americans and other world citizens envision the home as an extension of themselves and their culture.


CONCLUSION

In 2020, home robotics are still primarily for fun rather than function. Robotic toys, both for children and adults, allow individuals to experience simple, consumer versions of robotic and AI systems, though the cost of doing so remains prohibitive. On the more practical side, the introduction of the Roomba and other simple household "robots" has helped to bring the idea of practical robotics to the mainstream, but there have been few applications of robotics sufficiently affordable and effective to be embraced by the general public. The proliferation of personal digital devices, however, like smartphones and "smart home" technology, has brought the modern world closer to the futuristic promises of shows like The Jetsons and the dreams of other science fiction writers. Whether or not humanistic robot domestic workers like Rosie or the fake creations of Quasar Industries will ever be available for consumers is uncertain, but companies have continued working on ways of making robotics more humanistic and creating robots with the capacity to help people in innovative ways.

DISCUSSION QUESTIONS
• Should scientists continue working on creating something like Rosie or a robotic maid or butler? Why or why not?
• Could a robot make a good pet? Why or why not?
• What modern technology is similar to that seen on The Jetsons?
• Will robots play an important role in the home in the future? Why or why not?

Works Used

Beley, Gene. "The Quasar Industries' Robot." Modern Mechanix. Apr. 1978, blog.modernmechanix.com/the-quasar-industries-robot/.

Buchanan, Matt. "Object of Interest: The PackBot." The New Yorker. Condé Nast. 23 Apr. 2013, www.newyorker.com/tech/annals-of-technology/object-of-interest-the-packbot.


Carey, Bridget. "My Week with Aibo: What It's Like to Live with Sony's Robot Dog." CNET. 28 Nov. 2018, www.cnet.com/news/my-week-with-aibo-what-its-like-to-live-with-sonys-robot-dog/.

Edwards, Benj. "7 Robots Every Geeky 80s Kid Wanted." PC. 23 July 2015, www.pcmag.com/news/7-robots-every-geeky-80s-kid-wanted.

Ewalt, David M. "When Omnibot Was Hot." Forbes. 1 Nov. 2011, www.forbes.com/sites/davidewalt/2011/11/01/when-omnibot-was-hot/#45fb2f0e1147.

Kleinfield, N.R. "Developer Calls It a Robot Butler; Scoffers Say Screws Are Loose." The New York Times. 5 Dec. 1977, www.nytimes.com/1977/12/05/archives/developer-calls-it-a-robot-butler-scoffers-say-screws-are-loose.html.

Novak, Matt. "Recapping 'The Jetsons': Episode 08—Rosey's Boyfriend." Smithsonian Magazine. 13 Nov. 2012, www.smithsonianmag.com/history/recapping-the-jetsons-episode-08-roseys-boyfriend-120640948/.

"Omnibot Grows Up." Popular Science. Apr. 1986, books.google.com/books?id=96DnlrVhzDsC&lpg=PA84-IA3&dq=omnibot%20grows%20up&as_pt=MAGAZINES&pg=PA84-IA3#v=onepage&q&f=false.

Orlin, Jon. "It's 2012 Already So Where Are All The Jetsons Flying Cars." TechCrunch. 1 Jan. 2012, techcrunch.com/2012/01/01/its-2012-already-so-where-are-all-the-jetsons-flying-cars/.

"Sony Launches Four-Legged Entertainment Robot." Sony. 11 May 1999, www.sony.net/SonyInfo/News/Press_Archive/199905/99-046/.

"Unlocking the Secrets of Animal Locomotion." Berkeley. 2002, www.berkeley.edu/news/media/releases/2002/09/rfull/robots.html.

Wiggers, Kyle. "Sweeping Changes: How iRobot Evolved from Military Robots to Autonomous Vacuums." VentureBeat. 18 June 2019, venturebeat.com/2019/06/18/sweeping-changes-how-irobot-evolved-from-military-robots-to-autonomous-vacuums/.


20. REPLICATING HUMANITY
Robotic Approximations of Human Form and Function (2000–Present)

Introduction

Robots are machines, and they can be built to have any form that engineers deem necessary or feel will be functional depending on purpose. For exploring Mars, for instance, engineers created robots that move about on multi-wheel carriages designed to grip the surface, absorb impact, and traverse difficult terrain. There are other examples of robots that move on legs, and there are a number of robots that have been designed to imitate the movement of various kinds of animals, from fish to lizards. However, from the very beginning, humanity has dreamed of creating robots that look and work like humans.

The idea of humanistic robots reflects humanity's obsession with itself, but also reflects other psychological and sociological characteristics. Robots that look human and work like the human body can be a useful step toward creating better prosthetics and learning how to repair human capabilities. The creation of humanoid robots has also become an important goal in the sociology of robotics, helping to generate public interest in the successes of those working in the field.

Topics Covered in this Chapter Include:
• Humanoid robots
• Honda Corporation

This Chapter Discusses the Following Source Document:
Padir, Taskin, "Making Robots That Can Work with Their Hands," The Conversation, April 24, 2017


In the ancient myths of Greece, the god Hephaestus created many magical machines endowed with autonomous movement and intelligence. Among them were a set of mechanical dogs he used as guardians, walking cauldrons of flame that could automatically follow him to light any room he was in, and several automated mechanical humanoids, most notably the bronze giant Talos, who was created to defend the island of Crete. From the Hephaestus myths to the present day, those involved in the design and engineering of robots and AI have been attempting to replicate the same kinds of activities that Hephaestus's mythical automatons performed in their legendary world. Scientists have created mechanical beings that move on four legs and imitate the actions and motions of dogs, they have created automated systems that control lighting and temperature within a home or building, and they have continued to search for ways to make mechanical beings that imitate human bodies and movement.

Building robots that look and move like humans is, in part, a reflection of humanity's obsession with itself. For some, this goal might spring from the same kind of motivations that drive reproduction: a desire to replace oneself and to create humanity from nothing. For others, creating humanoid machines can be a way to better understand and evaluate the unique form and function of the human body. Still others have used research into humanoid robotics to study human emotional expression, or have drawn on the mechanics of human movement to work toward better prosthetics and potential cybernetic remedies for some of the physical problems that commonly occur in the human body.

Twenty-first Century Schizoid Man

In 1986, engineers at Japan's Honda Corporation, best known for their automobiles, began a new project: to create the world's most advanced humanoid robot. The process was long and difficult, and forced Honda scientists to tackle many issues that had limited progress in humanoid robotics to that point. Most notably, as Honda's scientists learned early in the process, it is extremely difficult to design a movement system balanced and yet sensitive enough to allow a robot to move on two legs.

Honda's groundbreaking humanoid robot project was named ASIMO, which stands for "Advanced Step in Innovative Mobility," and the robot proceeded through numerous difficult stages before the final working model was introduced to the public in 2002. The first three models of Honda's humanoid creation were designed to test ways of imitating human leg movement. Models E1, E2, and E3 represented progress toward the goal of imitating human walking motion. The next three, E4, E5, and E6, introduced new methods of stabilizing the robot's gait and added the ability to climb stairs.

Honda's ASIMO uses sensors and algorithms to navigate stairs, via Wikimedia.

It was only then that engineers began working on adding a head and arms to their design. The prototype, or "P," models began production in 1993. This version of the ASIMO design was also a major leap forward, as engineers, drawing on research from around the world on human hand coordination, needed to create grasping, dexterous hands that would allow the ASIMO robots to complete basic tasks. Although mastering robotic humanoid leg movement provides a way to mimic human mobility, hands are the key to replicating the ways that humans interact with their environment. In this 2017 article from The Conversation, robotics professor Taskin Padir discusses the importance of manual dexterity in robot design.


"MAKING ROBOTS THAT CAN WORK WITH THEIR HANDS"
by Taskin Padir
The Conversation, April 24, 2017
Source Document

It's quite common for humans—especially those who work in manufacturing—to tie a knot, strip the casing off a cable, insert a pin in a hole or use a hand tool such as a drill. They may seem like simple tasks, but are really very complex and involve extremely fine finger and hand motions. Though robots are getting more and more involved in factory work and in a wide range of other types of jobs—including in the service industry and health care—their dexterity is not nearly as impressive.

Since people first brought them to work in automotive factories more than 50 years ago, we have built robots that can weld, paint and assemble parts quite well. Today's best robotic hands can pick up familiar objects and move them to other places—such as taking products from warehouse bins and putting them in boxes. But robots can't orient a hand tool properly—say, lining up a Phillips head screwdriver with the grooves on a screw, or aiming a hammer at a nail. And they definitely can't use two hands together in detailed ways, like replacing the batteries in a remote control.

Human hands are excellent at those tasks and much more. To even come close to rivaling what our hands are easily capable of, robot hands need better agility, reliability and strength—and they need to be able to sense more accurately and move even more finely than they do now, to figure out what they're holding and how to grip it best. For robots to be able to work alongside humans, we'll have to figure out how to make robots that can literally lend us a hand when our own two are not enough.

My research group at Northeastern University is working on doing just this, in particular for humanoid robots like NASA's Valkyrie, which has three fingers and a thumb on each hand. Each digit has knuckle-like joints, and each hand has a wrist that can rotate easily. We're working on creating motions—combinations of arm, wrist, finger and thumb movements that collectively accomplish a task, like moving a wrench in a circle to tighten a bolt, or pulling a cart from one place to another.

The Importance of Hands

Rather than making each robot a custom machine tailored for a very specific task, we need to design multi-use robots, or even such capable machines that they might be called "general purpose"—good for almost any task. One key to the success of these types of robots will be excellent hands. Our work focuses on designing a new class of adaptable robot hands capable of precise fine movements and autonomous grasping.

When robots are able to hammer in nails, change batteries and make other similar movements—basic for humans but very complex for robots—we'll be well on our way to human-like dexterity in robotic hands. Achieving this goal also involves inventing new designs that incorporate hard and soft elements—the way human bone gives strength to a grip, with skin spreading the pressure so a wine glass doesn't shatter.


Faster Development and Testing

Modern technological improvements are making the development process easier. With 3D printing, we can make prototypes very quickly. We can even make low-cost disposable components to try different arrangements of mechanisms, like two- or three-fingered grippers for simple pick-and-place tasks or anthropomorphic robot hands for more delicate operations.

As electronic cameras and sensors get smaller, we're able to incorporate them in new ways. For instance, if we put pressure sensors and cameras in a robotic hand, they can give feedback to the robot controller (whether human or automated) when a grip is secure, or if something starts to slip. One day they may be able to sense which direction the slipping object is moving, so the robot can catch it.

These abilities are already second nature to humans through vision and proprioception (the ability to sense the relative positions of body parts without looking or thinking about it). Once we're able to achieve them in robots, they'll be able to do things like detect if a grasp is too strong and is squeezing an object too hard.

Planning Coordinated Movements

Another milestone will be developing methods for robots to figure out what motions they need to make in real time, including sensing what's going on in their hands at each moment. If a robot hand can detect changes in objects it is handling, or manipulate items while holding them, they could help with those common manual tasks like knot-tying and wire-stripping.

Working with two hands together is even farther into the future, though it would provide a significant boost, particularly for manufacturing. A robot that can operate a drill with two hands or pass machine parts from one hand to the other would be a big improvement, allowing factories to automate even more steps in their processes. We haven't developed these systems yet.

Achieving human-like autonomous robot dexterity will keep robotics researchers, technologists and innovators busy in the foreseeable future. It won't slow down the ongoing robotics revolution in manufacturing, because current processes still have lots of room for automation to improve safety, speed and quality. But as we make robots even better, they'll be able to give us a hand.1
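The sensor feedback Padir describes, tightening the grip when an object slips and relaxing it when the squeeze is too strong, can be caricatured as a simple control rule. The following sketch is purely illustrative; the thresholds, step size, and function name are invented for this example and do not come from the article or any real robot hand.

```python
# Illustrative only: a toy grip-control rule in the spirit of the article.
# All numbers (forces in arbitrary units) are invented for the example.
def adjust_grip(force, slip_detected, min_force=2.0, max_force=8.0, step=0.5):
    """Return an updated grip force based on simple sensor feedback."""
    if slip_detected:
        return min(force + step, max_force)  # object slipping: squeeze harder
    if force > max_force:
        return max_force                     # clamp: don't crush the object
    if force > min_force:
        return force - step                  # grip is secure: relax toward minimum
    return force                             # already at a gentle, stable grip
```

A real controller would run a rule like this hundreds of times per second against pressure-sensor readings; the point is only that feedback, not a fixed grip strength, is what keeps the wine glass from shattering.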

Honda's engineers had managed only the most basic manual hand dexterity when their first complete model of ASIMO, the "P3," was released in 1997. Standing five feet tall and weighing 287 pounds, this original model was quite a bit heavier and less agile than the engineers hoped. Over the next five years, engineers downsized ASIMO, replacing heavy pieces with more lightweight options. While they did this, new discoveries in micro-engineering enabled them to build a more dexterous and capable machine.2

The final version of ASIMO, released in 2002, saw the robot reduced in height to just over 4 feet and with a final weight of just over 110 pounds. ASIMO's size and dimensions were purposeful design choices meant to facilitate overall function, which, according to Honda engineers, was to demonstrate the kind of robotic system that might be developed as a helper for individuals in need of care and supervision. Honda envisions robots like ASIMO being used to help care for the elderly, to aid in childcare, or to help persons with disabilities. A prototype machine like ASIMO, the most advanced humanoid robot ever created, would come with a cost of well over $2.5 million, and so the technology is far from being appropriate for general use as of 2020, but ASIMO was intended to serve as an example of how robotics might be used to serve domestic functions within society. The robot's height was chosen so that ASIMO would appear friendly and nonthreatening, with the size and dimensions of a large child. Further, at 4 feet, ASIMO is correctly positioned to look eye-to-camera with a person seated in a chair.3

Honda's ASIMO robot "conducting" an orchestra. By Vanillase, via Wikimedia.

Between the 2002 release of ASIMO and the robot's "retirement" in 2018, Honda engineers added a number of new features and capabilities to its repertoire. The original ASIMO was able to shake hands, wave, and manage stairs and other obstacles while performing basic tasks. Honda brought the robot around the world to be exhibited at both technology conferences and other high-profile events. ASIMO appeared, for instance, on the red carpet at the premiere of Robots: The Movie in 2005, where the robot posed with celebrities and attendees. That same year, ASIMO visited Disneyland for a much-covered meeting with an actor wearing a Mickey Mouse costume, as both served to inaugurate Disney's newest science-based show.4

Shortly after ASIMO began making high-profile public appearances, Honda engineers upgraded the robot's capabilities again. The new model was able to run at 6 kilometers per hour and had been given algorithms that allowed it to imitate the movements of people in its environment. This was an important leap toward ASIMO's intended purpose, as the new upgrades allowed ASIMO to, for instance, walk alongside an individual while holding their hand. ASIMO was also endowed with upgraded capabilities for carrying and sorting objects and for managing clerical tasks, such as answering a telephone or delivering messages, other important capabilities for a machine designed to stand in as a helper for humans.

ASIMO's upgrades added capabilities derived from research into artificial intelligence. By 2011, ASIMO had been endowed with cutting-edge AI capabilities that allowed the robot to manage tasks on its own and to make decisions that involved prioritizing tasks and responding to previous commands. Further high-profile media appearances provided Honda with public relations victories while also demonstrating advancements in robotic design and control. In 2014, ASIMO was filmed passing a soccer ball with President Barack Obama, and it made a number of appearances on American television stations. By this time ASIMO's linguistic capabilities had advanced significantly, to the point that the robot was able to convey basic information in both English and Japanese sign language.

Exploring the Range of Human Need

Honda retired ASIMO in 2018, and the company announced it would not be producing any further updates to the system. Though ASIMO had been a PR success, the technology needed to create a robot with human form and function was still, as of 2020, too difficult for general production. Instead, engineers at Honda announced that the company's robotics division would be moving away from the goal of creating humanoid robotics and would instead be creating a series of robots with more immediate potential for aiding people's lives. Their new project, dubbed "3E" for "Empower, Experience, and Empathy," represented the three main ways that robotics can impact human life. The 3E program was innovative in another way as well, as companies and engineers from around the world were invited to collaborate with Honda to create a new generation of robots with direct humanitarian applications.

The empower principle of Honda's 3E program was represented by the release of the 3E-D18, an autonomous off-road vehicle based on the same chassis that Honda designed for their popular all-terrain vehicles (ATVs) but equipped with a rail system in place of a seat. The 3E-D18 was designed to traverse difficult areas, and Honda suggested it might be used by construction crews or by firefighters for search and rescue operations. A wide variety of equipment can be attached to the top of the robot to allow it to perform different functions. Another representative of the "empower" line was the 3E-B18, a robotic wheelchair designed to help individuals with mobility issues. Unlike a standard wheelchair, the 3E-B18 is capable of keeping a passenger level, even when traversing a steep slope, and has shock-absorbing features to keep a passenger safe over difficult terrain. Most notably, the 3E-B18 empowers the user with AI, enabling the chair to warn of difficulties in the path and to make decisions, as needed, to extricate a passenger.5

On the other side of the spectrum, Honda's 3E line also contains the 3E-A18, which represents the company's "empathetic" line of robotics. The 3E-A18 is a simple robotic device designed to look like a stylized, featureless head. On one side of the device, lights and an embedded screen create a "face" that can not only show emotion but can recognize and respond to the emotional states of interacting individuals. With an omnidirectional driving wheel, the 3E-A18 is capable of autonomously navigating an environment, or it can be programmed to follow a human as they move through a home or a room. The rounded shape and the soft "skin" covering are intended to facilitate contact, inviting humans to touch or hug the machine. According to Honda, a machine like the 3E-A18 is designed as a support robot. Such a machine might be employed to guide human visitors through a museum or an office, or it might be used to comfort individuals in distress, as the robot can demonstrate empathy and emotional attachment similar to what one might achieve through the use of a service animal.

The most generalized of Honda's new robotic designs fall into the "experience" category. This type of robot is empowered with cutting-edge machine learning technology such that the machine can learn through experience with an owner or while performing a task. The flagship of this series is the 3E-C18, a cube-shaped robot with animated blue eyes capable of assuming different kinds of expressions or displaying text messages. The robot's boxy canopy can be outfitted to contain a wide variety of tools for various functions. The top of the machine rises to reveal a shielded compartment. The only practical function that Honda demonstrated for the machine was as a high-tech option for a mobile vendor, who might store and sell wares out of the machine's compartment. Honda has also suggested that machines like these might serve as portable, self-driving tool kits for artists or for a family going on a picnic.6

Honda's shift from the humanistic ASIMO program to the less humanistic but more practical 3E designs demonstrates something important about robotics. While science fiction fans have long dreamt of humanoid robots, imitating the human body is an exceptionally difficult challenge. The human form was shaped over millions of years by evolution and represents an approximation of the mechanical capabilities needed to handle the many kinds of tasks that a human might engage in throughout their lives. Creating a robot to behave in the same way is not the easiest or perhaps even the best way to meet certain goals. A robotic wheelchair or an all-purpose robotic vendor cart might be a more immediately achievable and ultimately more useful way to steer the evolution of robotics.

CONCLUSION

After building the most complex humanoid robot in history, the Honda Corporation has shifted gears, designing robots that can fill important functions in human society but that are not designed to look or move like humans. Still, experiments in humanoid design have provided useful data to researchers working in a variety of fields, including cybernetics, the use of mechanics to alter or enhance the function of the human body. Experiments in humanoid design by Honda and other companies also provide data on how humans tend to interact with machines and robots, and this kind of data could prove invaluable to future designers looking for ways to make robotic and artificial intelligence systems serve practical functions within human culture.


306 | ROBOTICS & ARTIFICIAL INTELLIGENCE

DISCUSSION QUESTIONS

• Will people be able to accept a robotic assistant in their homes or businesses? Why or why not?
• How might research in humanoid robots be useful in the field of medicine?
• Are humanoid robots impractical? Why or why not?
• What are some of the ways that a humanoid robot might be more useful than a robot that is not designed to function like a human?

Works Used

Cangelosi, Angelo, and Matthew Schlesinger. Developmental Robotics: From Babies to Robots. MIT P, 2015.
Goodwin, Antuan. "Honda Debuts Four Very Different Concepts at CES." CNET. Road Show. 9 Jan. 2018, www.cnet.com/roadshow/news/honda-3e-robotics-concepts/.
Obringer, Lee Ann, and Jonathan Strickland. "How ASIMO Works." How Stuff Works, science.howstuffworks.com/asimo.htm.
Padir, Taskin. "Making Robots That Can Work with Their Hands." The Conversation. 24 Apr. 2017, theconversation.com/making-robots-that-can-work-with-their-hands-73831.
Ridden, Paul. "Honda's 3E Robotics Concepts Just Want to Help People." New Atlas. 10 Jan. 2018, newatlas.com/3e-robotics-concepts-honda-ces/52911/.
"RIP Asimo: A Look Back at the Life of Honda's Famed Humanoid Robot." Designboom. 2018, www.designboom.com/technology/honda-ceases-production-asimo-robot-history-look-back-life-06-29-2018/.


THE AI ASSISTANT
Virtual and Digital Assistants (2011–Present)

21

Introduction

The virtual assistant phenomenon points toward the future of AI. While modern virtual assistants are not truly intelligent, future versions could give every individual the same kind of support enjoyed primarily by those able to employ far more expensive human assistants. Even without true machine intelligence, virtual assistants have become useful tools for business, education, and social life. However, virtual assistants have also been involved in the controversy over privacy and ownership of data. Virtual assistants may record and track user queries and other data, which some feel is a violation of privacy rights. The virtual assistants available in 2020, while functional, frequently fall short of their promise as advertised by the companies that have pioneered their introduction.

Topics Covered in this Chapter Include:

• Virtual assistants
• Cortana
• Google
• Apple, Inc.

This Chapter Discusses the Following Source Document: Gralla, Preston, “Cortana, Say Goodbye.” Computer World, March 9, 2020





"Hey, Siri." "Hello?" "Do you believe in God?"

"Humans have religion. I just have silicon."

Siri is the best known of a generation of virtual assistants introduced by companies like Apple, Amazon, Google, and Microsoft. The debut of Siri drew widespread attention, and for a time, asking Siri unusual questions and discovering the at times bizarre or humorous answers programmed into the system by Apple's design team became a pop cultural phenomenon that spread across the United States and much of the developed world. Through a combination of humor and technical innovation, virtual assistants like Alexa and Siri have captivated users while also introducing a new world of virtual functionality made possible by a branch of advanced computer science known as Automatic Speech Recognition (ASR). The spread of virtual assistants has helped to unlock the potential of other "smart" technologies, devices and software systems that can be used to automate or control physical functions virtually. Siri, Alexa, and other virtual assistants can search the Internet, access and play music, control a thermostat or other utilities, and perform a wide variety of other functions. On a more basic level, virtual assistants are essentially search tools that allow users to locate and access information either within their own devices or stored on the larger "cloud," a virtual processing system composed of the shared computing and storage power of millions of separate physical computing systems.

Understanding Speech

Computerized virtual assistants, sometimes called "voice assistants" because they respond to users' voices, could not have been created without the algorithms that enable computer systems to evaluate human speech. There are many starting points from which to frame the development of speech recognition technology, including going all the way back to the invention of the phonograph in the 1870s, a device often credited to Thomas Edison. The phonograph was able to record sound by using a rotating cylinder that responded to sonic vibrations.1

The first computerized system for recognizing speech was "Audrey," a machine built at Bell Laboratories in 1952 capable of recognizing numerical digits spoken by a speaker, so long as they were enunciated very slowly and carefully. If these instructions were followed, Audrey had a 90 percent success rate, but the machine had little practical value. Besides being quite large (six feet tall) and using more power than most household electronics, it was simply faster to press buttons than to try to get Audrey to dial. However, Audrey proved that it was possible to design a machine with the power to single out and respond to specific cues in human speech.2

Thomas Edison with a phonograph. By Levin C. Handy, Library of Congress Prints and Photographs Division, via Wikimedia.

Ten years later, technological leader IBM debuted another listening machine, "Shoebox." This machine took the technology used to build Audrey a step further, and it was able to recognize ten digits and six English words. To make this possible, engineers programmed the machine to respond to characteristic vowel sounds within each of the spoken words programmed into the machine. The system then used "template matching," wherein the word spoken by a user was compared against a pre-recorded version of that term uttered by an operator. Shoebox was an American version of a technology that was in the works in several other countries as well. Similar devices were developed in Russia and in the United Kingdom, with the Russian version reportedly able to respond to over 200 words, though the technology did not make a major impact.

As with many of the major leaps forward in American technology, military investment was the key to speech recognition. The Defense Advanced Research Projects Agency (DARPA), the chief research and development agency of the US Department of Defense, funded a major push in voice recognition beginning in the early 1970s that resulted in Harpy, a system developed at Carnegie Mellon University and completed in 1976. Harpy was a massive improvement on the speech recognition tools available to that point. The machine could recognize entire sentences by evaluating words and then stringing them into meaningful collections. In total, Harpy could recognize 1,011 words, and so had the vocabulary of an average three-year-old.
More than any other system that came before, Harpy was proof that voice recognition, at a level functional for applications, was possible. It would, however, take many years to make the leap between Harpy and the first functional voice recognition tools for consumer or business use.3
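The "template matching" approach described above can be sketched in a few lines. The sketch below is an illustrative toy, not IBM's actual implementation: the word list, the three-number "feature vectors," and the nearest-template rule are invented for demonstration, whereas the real Shoebox compared analog measurements of vowel sounds.

```python
import math

# Hypothetical stored templates: one feature vector per known word.
# In a real system these would be acoustic measurements of an
# operator's pre-recorded utterances, not hand-written numbers.
TEMPLATES = {
    "one":  [0.2, 0.9, 0.1],
    "two":  [0.8, 0.3, 0.4],
    "stop": [0.5, 0.5, 0.9],
}

def euclidean(a, b):
    """Distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize(features):
    """Template matching: return the stored word whose template
    is closest to the features of the incoming utterance."""
    return min(TEMPLATES, key=lambda w: euclidean(TEMPLATES[w], features))

print(recognize([0.75, 0.35, 0.45]))  # closest to the "two" template
```

The weakness of the approach is visible even in this toy: the system can only ever answer with one of its stored templates, which is why early vocabularies stayed at a handful of words.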



Dr. E. A. Quade, manager of the advanced technology group in IBM’s Advanced Systems Development Laboratory in San Jose, Calif., demonstrates Shoebox. Courtesy of International Business Machines Corporation, © International Business Machines Corporation.

IBM made the next leap ahead. The company's Tangora system, released in the 1980s, had a functional vocabulary of over 20,000 words. To achieve this, engineers used a statistical analysis tool along with pattern matching algorithms. This gave the machine a much better chance of correctly determining what word a speaker was saying from among various possibilities with similar sonic structure. It also greatly improved the speed with which the system could deliver a transcription or respond to a spoken command. Rival startup Dragon Systems employed different technology, resulting in the first system able to transcribe speech at a functional rate of up to 100 words per minute. This was the beginning of what is known as "continuous speech recognition," which means that a user does not have to pause between each word in order to be understood by the system. Dragon's unique algorithms allowed the system to understand continuous speech, and it became the first to enable the use of automatic speech recognition to create documents.


Dragon's initial release, a software package called "DragonDictate," hit the market in 1990 at a cost of $9,000, which put the system well out of the reach of the average business or consumer. By the end of the decade, Dragon's NaturallySpeaking system became the more practical alternative, though at a cost of $600 to $700, the system was still too expensive for many consumers. Between the late 1990s and the 2020s, Dragon has remained a major player in the speech recognition field, though the company now has numerous competitors. Dragon's products are practical, but they provide limited functionality and so appeal to only certain customers.

The first generalized speech recognition assistant on the market was the Google Voice Search application, released for the iPhone in 2010. The breakthrough that enabled this leap forward came via "cloud computing," in which a computer can access data that is processed virtually by other computers, sometimes by thousands of computers located in disparate places. Google capitalized on this emerging technology by allowing its voice recognition system access to large-scale analysis capable of matching users' words with human speech taken from billions of searches made on Google. Apple quickly released its own speech recognition assistant, Siri, to avoid the market being cornered by Google, and Microsoft followed shortly thereafter, releasing the virtual assistant Cortana, which was named after a popular artificially intelligent character from the video game series Halo. With the release of Amazon's Alexa virtual assistant in 2014, the virtual assistant phenomenon was in full swing, finally realizing the potential of a technological effort that began before the turn of the last century.4

Not Really Intelligent

Filmmaker Spike Jonze's 2013 film Her told the story of a lonely, soon-to-be-divorced man, played by Joaquin Phoenix, who falls in love with his artificially intelligent virtual assistant, a disembodied voice provided by Scarlett Johansson. The film explores a number of themes, including the relationship between physical contact and love, jealousy, and intimacy. On a more technological level, Jonze uses the film to ask questions about the nature of artificial minds and consciousness. Though inspired by actual virtual assistants like Siri, Cortana, and Alexa, the AI in Her is, at best, only a possible distant look at what a true artificial intelligence might be.

As of 2020, the virtual assistants available on the consumer market are not, in fact, intelligent, nor are they independent. A user is not communicating with an individual entity when using Siri or Alexa but accessing a vast distributed database of information linked to

the Internet and empowered by virtual processing. Siri's and Alexa's "responses" are pre-programmed strings of recorded terms that provide information from either dedicated databases or from Internet searches. Siri and Alexa do not exist as discrete entities, but some users experience an illusion of interaction and intelligence. This is similar to the ELIZA effect described earlier in this book, wherein humans demonstrate a tendency to ascribe humanistic characteristics to non-human objects. Users might, therefore, be fooled into believing that they are interacting with an individual, when, in fact, they are interacting with a disembodied database filtered through a set of canned responses designed to seem interactive.

Poster for 2013 film Her, via Wikimedia.

The core achievement in modern virtual assistants, then, is not to be found in the field of AI or machine intelligence, but in the fields of cloud processing and search engine design. However, the improvement of virtual assistant systems is the result of machine learning. Apple's Siri and Amazon's Alexa are empowered with software that allows the systems to essentially "learn" by keeping a record of failures and successes. Thus, while Siri and Alexa are neither individuals nor intelligent, the broader system, which is used by millions of users, has a rudimentary kind of intelligence in that it improves itself over time. Even this kind of machine learning should not be mistaken for real intelligence. Though virtual assistant systems are based on a machine learning program, the "thinking" that the system does falls into only a narrow category; the machine is, at best, a highly complex and distributed search engine.5

Recent technological advances have also allowed the creation of controversial humanoid sex robots, raising ethical as well as legal issues.
Sexualized bodies merged with chatbot technology—which can interpret and respond to language—create the illusion of individuality and independent thought. Critics have noted a variety of potential problems with this emerging industry, ranging from accusations that the industry is perverse or may cause

mental or emotional damage, to claims that sex robots perpetuate sexism, to concerns over how to properly classify artificial beings legally.

Always Listening and Always Failing

It is possible that individuals using Siri, Alexa, or one of the alternatives have inadvertently been speaking in front of, if not directly to, an actual intelligence. In 2019, a whistleblower within Apple revealed that the company had been allowing human contractors to eavesdrop on customers' interactions with Siri and had kept recordings of those interactions without customer approval or permission. Further, because the Siri microphone is always active on Apple devices, a feature designed to allow Siri to respond when a customer says the system's "name," contractors were also able to eavesdrop on conversations through the phone that did not involve direct interaction with the Siri system. Siri can be accidentally activated by terms that sound close to the system's "wake word," which is "hey, Siri," or simply "Siri." Experience has shown that Siri can be activated inadvertently in a variety of ways, such as when an individual says "seriously" or "Syria," or even by environmental noises like the sound of a zipper. For users with Apple Watches with Siri installed, accidental activation is even more common: when an Apple Watch user raises their hand, Siri is automatically activated and begins to listen for a command term. According to whistleblowers who worked for the company, Apple had recordings of people relaying confidential and sensitive medical information and even conducting illegal drug deals. When the controversy over Apple's eavesdropping first hit the media, in the summer of 2019, the company issued statements claiming that the purpose of allowing contractors to review interactions with the Siri network was to make improvements. However, the always-active microphone and the practice of listening in on consumer communications without permission were seen by many as a violation of privacy.
The whistleblower told reporters at the Guardian that they had been motivated to come forward because of concern that information taken from accidental recordings might be misused.6 The Siri controversy came out around the same time that it was revealed that Amazon's voice-activated assistant, Alexa, was also recording and storing customer interactions without permission, and both companies have faced lawsuits from consumer protection organizations and other privacy advocates.

Another major problem that critics have identified with virtual assistants is that the technology is prone to errors. Though the microphones are not supposed to be active unless a user speaks an activation term, there is sufficient evidence to suggest that the language recognition system is not yet advanced enough to avoid accidentally activating or recording private information. There have been some high-profile incidents involving privacy problems. In one, a couple's private conversation was recorded and sent to a friend on their contact list due to an Alexa error. In another, a German man accidentally received access to 1,700 audio files from Alexa belonging to a stranger, revealing the stranger's name, habits, job, and a variety of other sensitive information.7 Dr. Jeremy Gillula, a director at the Electronic Frontier Foundation (EFF), an organization dedicated to protecting consumer rights and privacy in the technology field, told Gizmodo in April 2019: "That's the scary thing: there is a microphone in your house, and you do not have final control over when it gets activated. From my perspective, that's problematic from a privacy point of view."8
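The accidental-activation problem comes down to fuzzy matching: a wake-word detector must accept imperfect pronunciations, so near-misses can slip through. The toy sketch below is not Apple's algorithm; the threshold is invented, and text similarity stands in for the acoustic scoring a real detector would use. It simply illustrates how a detector tuned loosely enough to catch "siri" can also fire on a word like "Syria."

```python
import difflib

WAKE_WORD = "siri"
THRESHOLD = 0.6  # arbitrary cutoff chosen for this demonstration

def similarity(a: str, b: str) -> float:
    """Crude stand-in for an acoustic similarity score."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def is_activated(heard: str) -> bool:
    """Trigger if any word in the utterance is close enough to the wake word."""
    return any(similarity(WAKE_WORD, word) >= THRESHOLD
               for word in heard.lower().split())

print(is_activated("hey siri set a timer"))  # True: intended activation
print(is_activated("the war in syria"))      # True: a false activation
print(is_activated("hello there"))           # False
```

Tightening the threshold reduces false activations but also causes the assistant to miss genuine ones, which is the trade-off every always-listening device has to strike.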

Convenience and Competition

Another consequence of the digital personal assistant industry is that large, powerful corporations have dominated the field, leaving even slightly less powerful competitors without room to compete. Convenience is part of the issue. The companies that introduced their products first captured widespread public attention and garnered a massive share of the profits. Apple and Amazon, among the most powerful corporations in the world, were then able to secure partnerships and deals that left little room for competing products. Late arrivals have, as of 2020, been unable to find a foothold. This applies even to massive multinational corporations that were simply late to the game and had too few innovations to prove that their product was superior or even worth using. One such product was Microsoft's Cortana, which failed to advance to the level of availability and convenience achieved by earlier competitors like Siri and Alexa. In this article from Computer World, Preston Gralla discusses how Microsoft missed the boat on the virtual assistant craze.



"CORTANA, SAY GOODBYE"
by Preston Gralla
Computer World, March 9, 2020
Source Document Excerpt

The digital assistant isn't entirely going away, but we're near the end of the road for another me-too technology from Microsoft that nobody really wanted.

Microsoft is sugar-coating the news, but in the next upgrade to Windows, due this spring, Cortana as we know it will all but cease to exist. The digital assistant will no longer be easily accessible in Windows 10, and it will lose the ability to do many of the things it was designed to do, such as control smart devices in your home or play music. It won't compete with Amazon's Alexa, Apple's Siri or Google Assistant—not that it ever really did. It won't be built into smart speakers—not that it ever got incorporated into many.

Cortana isn't being killed off outright. Not yet anyway. But it may be at the beginning of one of those long deathbed watches that Microsoft is prone to make failed products endure. Cortana will live on for now, in a much more limited way. I'll explain it all, but first let's take a look at Cortana's checkered history and why Microsoft has decided it's time to drastically curtail what it can do.

Cortana was first introduced in 2014 for Windows Phone, which met its own demise in 2017. It made its way into Windows 10 in 2015. Microsoft hoped it would compete with Siri and Alexa. (Google Assistant was introduced later.) But Cortana struggled to find its place. Back in early 2018, after the first (and still only) smart speaker with Cortana built into it was released, I noted how far Cortana had already fallen behind its rivals. By late 2017, Cortana only had 230

skills (essentially voice applications), compared to 25,000 for Alexa. No one was buying Cortana smart speakers, even as tens of millions of Alexa-powered and Google-powered speakers were being snatched up. I wrote back then, "It looks as if Cortana-powered speakers will suffer the same fate as Microsoft's other me-too products that had no real reason to exist. . . ."

From now on, the "evolved" Cortana will work behind the scenes to perform searches for the unclearly defined "Microsoft 365," which doesn't seem to be an actual product you can buy on its own. Instead, Microsoft 365 appears to be what Microsoft calls a combination of Office 365, Windows 10 Enterprise and Enterprise Mobility and Security. How, exactly, Cortana fits into that combo isn't at all clear. But the examples given in Shuman's blog post, including checking your calendar, adding a task to your task list and creating reminders, are underwhelming. They are things that Cortana already does, and none of it amounts to particularly earth-shattering technology. Evolution isn't what it used to be.

Don't be surprised, though, if digital assistants from other companies become more prominent in Windows 10. There's already an Alexa app you can download for Windows 10, and Microsoft and Amazon have worked in the past to get Cortana and Alexa to talk to and work with one another. Microsoft CEO Satya Nadella has shown a willingness to work with one-time and even current rivals—witness his abandoning of the old Edge browser so it can be rebuilt using Chromium, originally created by Google and the underpinning

of Google Chrome. It's too early to know what kind of partnership Microsoft might forge, but having Alexa front and center for house automation would be one clear target.

The virtual end of Cortana isn't a big fat failure for Microsoft. Investing more into it would have been that—and the sort of thing Microsoft did in the past, most notably with the very expensive Windows Phone/Nokia venture. Instead, it's a sign that Nadella is smart enough to know when a competitor's product is better than his and so get out of the race. It was, after all, Nadella who eventually killed Windows Phone, and the company has thrived since then. Killing Cortana as we know it can only help the company as well, freeing up resources that can be used to pursue more important technologies.9

The problems Microsoft experienced in the virtual assistant market show how difficult it is to compete with the largest and most powerful entities in an industry driven by powerful and expensive technological tools. While it is perhaps no great loss that a company like Microsoft found itself unable to make a meaningful mark in the virtual assistant industry, the barriers that kept Microsoft from competing virtually ensure that smaller start-ups will have little or no chance. This lack of competition also limits pressure on companies to improve quality and address issues like guaranteeing privacy. Whenever a field emerges in which only large and economically powerful corporations can compete, consumer choice, preference, and welfare are more likely to suffer. The fact that the companies leading the industry—Apple, Google, and Amazon—profit by collecting and selling customer data means that accessing the latest and best features in virtual assistants will necessarily mean sacrificing privacy and security more than some consumers may be comfortable with. The virtual assistant industry, therefore, constitutes one of the more recent additions to the broader debate over ownership of digital data. As of 2020, consumers who want to utilize virtual assistant technology have no choice but to use the products of corporations that buy and sell private consumer data. On one hand, this means that virtual assistants are frequently provided free of extra charges, but it also forces consumers to choose between privacy and convenience.



CONCLUSION

The virtual assistants of 2020 have only a fraction of the capabilities that one might expect virtual assistants to have in the future. However, virtual assistants have helped to unlock the potential of automated "smart" environments, enabling users to control a wide variety of home electronics and other systems virtually and from a distance. The smart home phenomenon represents another way that the fruits of AI research have become available in the form of practical consumer technologies. The question remains whether consumers can integrate these advanced technologies into their lives without completely sacrificing their privacy and control over their personal data.

DISCUSSION QUESTIONS

• How do virtual digital assistants like Siri and Alexa depend on the science of voice recognition?
• Should Americans be concerned about privacy issues when using a virtual assistant? Why or why not?
• How might America make it possible for smaller companies to compete in high-tech markets?
• How can companies marketing virtual assistants improve the system in the future? Give specific suggestions.

Works Used

"'Audrey'—New Electronic Device Developed by Bell Labs." Signals, vol. 7, 1953.
Dara, Rozita. "The Dark Side of Alexa, Siri and Other Personal Digital Assistants." The Conversation. 15 Dec. 2019, theconversation.com/the-dark-side-of-alexa-siri-and-other-personal-digital-assistants-126277.
Estes, Adam Clark. "The Terrible Truth about Alexa." Gizmodo. 27 Apr. 2019, gizmodo.com/the-terrible-truth-about-alexa-183405404.

Gralla, Preston. "Cortana, Say Goodbye." Computer World. 9 Mar. 2020, www.computerworld.com/article/3531417/cortana-say-goodbye.html.
Hern, Alex. "Apple Contractors 'Regularly Hear Confidential Details' on Siri Recordings." The Guardian. 26 July 2019, www.theguardian.com/technology/2019/jul/26/apple-contractors-regularly-hear-confidential-details-on-siri-recordings.
"History of the Cylinder Phonograph." LOC. Library of Congress. 2020, www.loc.gov/collections/edison-company-motion-pictures-and-sound-recordings/articles-and-essays/history-of-edison-sound-recordings/history-of-the-cylinder-phonograph/.
Lenzo, Kevin, Paul Placeway, Kristie Seymore, and Matthew A. Siegler. "Speech Recognition: Past, Present, and Future." Carnegie Mellon University, www.cs.cmu.edu/afs/cs/user/msiegler/www/ASR/futureofcmu-final.html.
Moskvitch, Katia. "The Machines That Learned to Listen." BBC Future, BBC. 15 Feb. 2017, www.bbc.com/future/article/20170214-the-machines-that-learned-to-listen.
Novet, Jordan, and Adam Isaak. "Why Apple's Siri Isn't as Smart as Amazon Alexa and Google Assistant." CNBC. 5 June 2019, www.cnbc.com/2019/06/04/why-siri-is-not-as-smart-as-alexa-google-assistant.html.


ROBOTIC CITIZENSHIP
Public Attitudes toward Robots and Robotics (2014–Present)

22

Introduction

In 2015, hitchBOT, a robot set loose with the goal of hitchhiking across North America, was destroyed by vandals in Philadelphia. The destruction of hitchBOT became a talking point for journalists and scientists discussing human attitudes toward robotics and the degree to which Americans and residents of other countries might be willing to embrace robots in their lives and societies. Another way that human-robot relationships have been explored is through the saga of Sophia, a robot built by Hong Kong-based Hanson Robotics to imitate a human woman, which became the first robot to symbolically achieve national citizenship.

Topics Covered in this Chapter Include:

• Robot citizenship
• hitchBOT
• Robotic relations
• Public perceptions of robots
• Robotic rights

This Chapter Discusses the Following Source Document: Weaver, John Frank, “What Exactly Does It Mean to Give a Robot Citizenship? It’s Complicated.” Slate, November 6, 2017



What is the ultimate end point of the attempt to integrate robotics into human culture? Will robots remain purpose-built machines, or will they be endowed with the power to choose their own life path? Will robots become citizens? In the twenty-first century, some curious developments in the world of robotics—the world's first robotic hitchhiker and the controversy surrounding the world's first robotic citizen—demonstrated changing attitudes about robotics and its role in human life. In both cases, reactions were both positive and negative, demonstrating humanity's curiosity and hopes for robotics but also the fear and mistrust that the mechanical revolution continues to inspire.

On the Road Again

The famous hitchBOT was created by Canada-based professors David Harris Smith and Frauke Zeller in 2013. Its purpose was to allow Zeller and Smith to study how humans interact with machines. Smith had previously hitchhiked across his native Canada and found the experience enlightening with regard to human nature. Smith and Zeller decided to create a robot that could, with human assistance, hitchhike across a state or country, allowing them to learn how humans would interact with it. Speaking to the Los Angeles Times about their creation in 2015, Smith and Zeller stated that one of the questions they hoped hitchBOT would answer was "Can robots trust human beings?"1

Realizing the risk of sending their machine out into the world and wanting to make the robot appear as welcoming and nonthreatening as possible, Smith and Zeller designed it to look as if it had been cobbled together from spare parts. The body was made from a bucket, and it had arms and legs that ended in gloved hands and Wellington boots. The head was transparent and shaped like a cake-display container, and inside it was a series of screens that could display a simple animated face. The robot was small, the size of a child, and it was permanently strapped into a child's car seat. On the top of its "head" was a trash-can-lid-shaped solar collector to provide power, but it also had an external plug that would allow a human to recharge the robot through the cigarette lighter port of a vehicle.2

The basic idea was that hitchBOT would ask people it encountered on its travels to take it with them and to help it complete its journey. The robot's car seat and small size made it easy for those who were willing, and, while traveling, hitchBOT was capable of carrying on conversations.
Fitted with a GPS tracker and a 3G Internet connection, as well as a camera set to take periodic photos, hitchBOT was able to provide data to Zeller and Smith, and the technology also enabled anyone to visit a website and to locate and follow hitchBOT’s


journey. As it traveled, the team sent out anthropomorphic tweets, written as if hitchBOT had composed them, to keep the public apprised of its location.

In 2014, Zeller and Smith sent hitchBOT on its way by leaving the machine on the side of the road near Halifax. A sign affixed to the robot told potential carriers where it was headed (Victoria), a destination the robot could also explain, via its voice communication, to anyone who picked it up. The robot managed to make it to Victoria, a journey of 6,000 kilometers, in about 20 days. Along the way, hitchBOT was taken to a powwow with members of the Wiikwemikong First Nation on Manitoulin Island. It was also a guest at the wedding of Kyle Shepherd and Julie Branch in British Columbia and was taught the “Harlem Shake” dance by three companions in Saskatchewan. The robot was so popular, in fact, that Zeller and Smith sometimes had to deactivate the tracking system to prevent people from competing to get to the machine. In an article written for The Conversation, Zeller said,

The original hitchBOT. By Pgordon04, via Wikimedia.

“David Harris Smith and I conceived the idea of Hitchbot as an opportunity to set an experimental, technological art project free in the wild. Combining arts and science knowledge (and David’s years of experience as a hitchhiker), we thought a hitchhiking robot would provide a fascinating experience for the public, and would offer some insight on how humans interact with robots.”


Following its debut journey, hitchBOT (actually two different hitchBOTs) was sent to Germany for a ten-day journey and then to the Netherlands, where it managed to travel for some ten weeks. Zeller and Smith concluded that, at least under the right conditions, robots could, in fact, trust humans. They were also surprised to learn how many people were eager to participate, not only helping the machine on its way but taking the robot with them to personal events. Zeller and Smith further wrote, “From a scientific point of view, Hitchbot’s progress showed not only the relevance of social media but also that the field of human-robot interaction goes beyond just physical interaction: it is also the personality, communication abilities and ability to actively shape the interaction that seems to invite people to trust a robot and to be willing to engage with it.”3

Things changed in 2015 when the robot took its first trip through the United States, traveling down the East Coast and visiting Massachusetts, New York, Connecticut, and Rhode Island. After two weeks, hitchBOT arrived in Philadelphia where, in the middle of the night, someone attacked, dismembered, and destroyed the robot.4 Its head was never found. Good Samaritans found the remains and sent photos to Zeller and Smith. Online accusations of “murder” and criticisms of American culture appeared in many news outlets after hitchBOT’s unfortunate destruction. However, the robot’s overall progress indicated to the researchers that humanity was by and large trustworthy when it came to shepherding the robot on its journey. Did the robot’s destruction in Philadelphia indicate that Americans are less welcoming of machines than residents of other countries?
The answer to this question is unclear, because attitudes about robotics have typically been measured only in reference to specific robotic phenomena, such as the robotic replacement of human workers, a subject that understandably inspires decidedly negative attitudes. All of the countries visited by hitchBOT were members of the Organization for Economic Cooperation and Development (OECD). Although the United States does not have significantly higher overall levels of crime than countries like Canada, Germany, and the Netherlands, it does have higher rates of violent crime and assault.5 On the other hand, the destruction of hitchBOT in America might have been little more than bad luck, as the vast majority of Americans would never have committed such a senseless act. Whether the destruction of hitchBOT says something important about the ways in which


Americans will adjust to increasing automation and the presence of robots in their lives remains unknown.

Citizenship for Machines

Another widely publicized breakthrough in robotics that raised major questions about human attitudes toward machines came with the introduction of Sophia, a humanoid robot designed by Hong Kong-based Hanson Robotics. Sophia is one of the most advanced humanistic robots ever created. Its face, equipped with complex systems to imitate human facial expressions, was modeled after Queen Nefertiti and the actress Audrey Hepburn. Creator David Hanson was previously employed by the Walt Disney Company as a sculptor and consultant in film robotics, and he made a name for himself in the academic robotics field through his books and articles on cognitive programming and AI.

First robot citizen Sophia at the 2018 AI for Good Global Summit. By ITU Pictures, Geneva, Switzerland, via Wikimedia.

Sophia was activated in 2016 and is capable of imitating human gestures and facial expressions while conversing on a number of pre-programmed topics. The robot is equipped with speech recognition technology and is a “learning machine,” utilizing algorithms that allow its behavior and responses to be refined over time in response to feedback. Sophia can imitate emotional states and behavior, analyzing verbal communication to determine how to react. Though far more advanced, Sophia is essentially a chatbot in the vein of the famous ELIZA and PARRY programs, which were designed to imitate human conversation. As discussed earlier in this book, experiments with ELIZA and PARRY elicited both excitement and apprehension in observers and helped scientists understand what is necessary for humans to feel a conversational connection.


As ELIZA and PARRY demonstrated, only basic conversational competence is required to make many humans believe that they are talking to a person rather than a machine. Sophia takes this experiment further, adding a humanoid appearance, complete with the ability to generate facial expressions in response to certain kinds of interactions. Experiments with Sophia provide additional data on how humans form intellectual and emotional bonds through interaction.

Sophia, like all previous chatbots, is not actually able to carry on a conversation, which is a complex interaction that requires independent thought. Rather, Sophia imitates conversational interactions based on a set of pre-programmed phrases and responses and pre-determined facial expressions. The system is complex enough to make many observers believe that they are speaking to an individual when, in reality, it is a simulation of intelligence and personality rather than a legitimate example of mechanical awareness.

Ultimately, Hanson and others involved in producing complex chatbot systems have expressed the belief that robots like Sophia might eventually make good companions for the elderly, the critically ill, and children. Further, a number of developers have suggested that machines like Sophia could be programmed for customer service and reception duties. The machine’s ability to analyze speech content and provide individualized answers to questions is, in the minds of some, a sufficient prerequisite for machines like Sophia to replace human workers in customer service. After the machine’s activation in 2016, Sophia toured Europe and North America and appeared on a number of television programs, where the robot was “interviewed” by journalists. Sophia was even asked to attend several high-profile state events.
Then, in October of 2017, Saudi Arabia made the unexpected announcement that they were granting citizenship to Sophia, making Hanson Robotics’ chatbot the first artificial being to receive citizenship in any country. Analysts and journalists covered the high-profile event, but many were puzzled as to its meaning. Sophia gave a pre-programmed response, stating, as reported in Forbes, “I am very honored and proud of this unique distinction. This is historical to be the first robot in the world to be recognized with a citizenship.”6 The decision to grant Sophia citizenship was a public relations stunt. The Saudi Arabian government has been engaged in a major effort to promote investment in their technology sector and to encourage companies to utilize Saudi Arabia as a zone for technological manufacturing and construction. The Royal Family of Saudi Arabia further wishes to establish


their country as one of the world’s innovation leaders. Granting citizenship to Sophia was a calculated choice meant to draw the world’s attention to the nation and its allegedly forward-thinking approach to technology.

However, the decision to make Sophia a citizen raises interesting questions about the nature of citizenship. For instance, should consciousness and/or free will be considered a prerequisite of citizenship? Can an individual, or any kind of being, really be a citizen if that individual lacks independent thought or the capability to choose whether or not to be a citizen? In a way, the decision to grant citizenship to a machine designed to look like a human woman might be considered inappropriate in a country that has routinely denied women political agency and the ability to advance within their society. Women in Saudi Arabia are marginalized to the point that one might argue they have actually been denied the benefits of citizenship. It might, therefore, be a reflection of this attitude toward women that the government chose to award citizenship to a machine made to look female and incapable of exercising agency over its own existence. Saudi Arabia is one of only 22 countries that have not agreed to adhere to the International Covenant on Civil and Political Rights, in part because the nation refuses to address gender inequality.

In this 2017 article from Slate, journalist John Frank Weaver discusses the implications of granting Sophia citizenship, and why the decision to do so was more symbolic than substantive.

“WHAT EXACTLY DOES IT MEAN TO GIVE A ROBOT CITIZENSHIP? IT’S COMPLICATED.”
by John Frank Weaver
Slate, November 6, 2017
Source Document

On Oct. 25, at the Future Investment Initiative summit in Riyadh, Saudi Arabia granted citizenship to Sophia, a robot created by Hanson Robotics. Reactions were appropriately sarcastic and incredulous, given trendy fears about A.I. and the deeply troubling status of women and migrant laborers in Saudi Arabia. Many writers commented that Sophia is a public relations stunt. Joanna Bryson, a renowned scholar of A.I. ethics at the University of Bath, was succinct: “It’s obviously bull****.” But let’s assume that Sophia’s shiny-new Saudi Arabian citizenship has substantive consequences—that it’s not just marketing and that Saudi Arabia has granted rights to Sophia, making her a legal person. What could Hanson’s game be?



Although international law is arguably ambivalent on the status of citizenship, it is indisputably better to be a citizen than a noncitizen. Article 25 of the International Covenant on Civil and Political Rights grants to every citizen the right to “take part in the conduct of public affairs,” “vote and to be elected,” and “have access, on general terms of equality, to public service in his country.” The covenant does not recognize similar rights for noncitizens. This is a moot point for Sophia as Saudi Arabia is one of just 22 countries that have not signed the covenant (which explains, in part, its continued maltreatment of women), according to the United Nations. But the covenant’s treatment of citizenship suggests that, in general, a citizen is assumed to be a person and is therefore also entitled to the rights of noncitizen people. Being a citizen in one place could mean being a legal person everywhere else.

For example, the Universal Declaration of Human Rights, which was proclaimed by the U.N. General Assembly in 1948, applies to “all peoples and all nations” and does not limit its effect to citizens. Since Sophia now has citizenship and has become a legal person under international law, the declaration applies to her as a person among “all peoples.” Numerous articles in the declaration apply to “everyone:” Everyone “has the right to recognition everywhere as a person before the law” (Article 6); everyone “has the right to equal pay for equal work” (Article 23); everyone “is entitled to all the rights and freedoms set forth in this Declaration, without distinction of any kind” (Article 2). That last clause from the declaration indicates that Sophia, as a citizen, is entitled to the declaration’s protections “without distinction.”

So what does this mean for Sophia and Hanson? Although U.N. resolutions are not enforceable, international law holds the declaration as an authoritative reference for human rights. Numerous subsequent human rights treaties, including the covenant, are based on it. A generous reading of the declaration’s impact on Sophia is that she has all of the rights it identifies, including:

• Between Articles 23 (“Everyone who works has the right to just and favourable remuneration”) and 17 (“Everyone has the right to own property”), Hanson is obligated to pay Sophia for the work that she performs and let her accumulate property. I suspect that’s not going to happen.

• Pursuant to Articles 13 (“Everyone has the right to freedom of movement and residence within the borders of each state”) and 27 (“Everyone has the right freely to participate in the cultural life of the community”), Sophia is entitled to travel to the Saudi city of Janadriyah to enjoy the folk music, camel racing, and artisans of its National Festival for two weeks each February. It seems doubtful Hanson and the Saudi government will permit that.

• Similarly, Article 13 also states that “Everyone has the right to leave any country, including his own, and to return to his country.” Again, it seems unlikely that Sophia will be able to enjoy this right and take off to the United States for a holiday.

Hanson probably doesn’t have to worry about these rights being enforced on behalf of Sophia anytime soon: No courts or commissions under international law have jurisdiction to enforce those




rights, even if Sophia wanted to. But let’s say Hanson wanted to take Sophia to the United States. What would her Saudi citizenship get her here? For one, she’s arguably eligible for naturalization and U.S. citizenship: “Naturalization is the process by which U.S. citizenship is granted to a foreign citizen.” Although the fine print of regulations and code prevents Sophia’s application for American citizenship from being a slam dunk, getting foreign citizenship that can be abandoned is a big step toward becoming an American.

Now let’s say that Sophia’s Saudi citizenship was a strategic move by Hanson to game American legal rights. What could Hanson get with a robot citizen? Under the Constitution, citizens can vote, serve on juries, and get elected to public office; corporations cannot. If Hanson—or any other forward-thinking A.I. developer—is thinking of the long-term consequences of citizenship for A.I. and robots, these are important rights that they gain controllable access to with an artificial citizen.

A single Sophia with the right to vote, serve on juries, and win elections will have little to no substantive effect in this country, particularly since Hanson and Saudi Arabia have publicly stated she will remain in Saudi Arabia in a planned city where robots are expected to outnumber people. But if Hanson can make one Sophia, the company and others like it can make 100 Sophias, or 1,000 Sophias, or 1 million Sophias. Although Saudi Arabia may be unlikely to use this robot citizenship as a publicity ploy more than once, other countries have shown they will grant special


favors for foreign investment. If a company were interested in gaining broad access to the rights of U.S. citizenship, it seems likely that there’s at least one nation out there that will trade thousands or millions of robot citizens for manufacturing plants or other economic activity in its borders. If all those robots obtain citizenship in the United States, they could be the next big voter demographic. Of course, Congress may be inclined to close that loophole, but it could find itself unable, either due to political paralysis (this is immigration reform, after all) or treaty obligations that impose certain requirements on citizenship recognition. Assuming the path to robot U.S. citizenship remains open, that wouldn’t be a publicity stunt—it would be a revolution.

Or would it? As American citizens, Sophia and her brethren would also be entitled to all of the individual rights enshrined in the Constitution, including the prohibition of slavery in the 13th Amendment (which applies to foreign citizens as well). But artificial intelligence is nowhere near sentience and won’t be anytime soon. If robots become citizens, they cannot be property, but without sentience, they cannot exercise self-determination. How do they exercise their constitutional rights? Do they get legal guardians? What if Sophia’s guardian advocates for her against Hanson?

This may all seem far-fetched, and perhaps it is. But as these questions begin to demonstrate, Saudi Arabia and Hanson might soon discover that a silly little PR stunt can have major consequences.7


Many other journalists and analysts similarly criticized the announcement of Sophia’s citizenship. In a 2018 article in Wired, “The Agony of Sophia, the World’s First Robot Citizen Condemned to a Lifeless Career in Marketing,” journalist Emily Reynolds wrote: “Having been brought to life, Sophia is already a marketing plaything—and no matter how woke or feminist she is programmed to be, Hanson acknowledges that her development is still more akin to a baby or toddler than an adult with a consciousness or intellect that could feasibly be rewarded with a full set of rights.”8

Reynolds may be correct in her skepticism about the Sophia citizenship stunt, but her criticisms also reflect the ELIZA effect, in which humans tend to see human analogs in phenomena that do not have legitimate human qualities. Sophia is not a being. The machine does not feel, nor does it think, nor does it have wants or hopes. It only simulates these qualities of human existence. While it might, therefore, be politically problematic to grant such a machine citizenship, the impact on Sophia is no different than if a country decided to grant citizenship to a rock. Emotional reactions to Sophia’s slavery are, therefore, a reflection of how human societies treat actual beings, especially women, and not a legitimate expression of sympathy for Sophia. Granting Sophia rights would, therefore, represent little more than granting rights to David Hanson, the machine’s creator, who, one would imagine, already has them.

The discussion about Sophia’s citizenship also raises interesting questions about the nature of emotion in personhood. Can a being legitimately be a person without emotion? Without emotion, a machine cannot desire or want anything. Does it matter if a being is granted rights if that being does not want freedom or is incapable of wanting any changes in its life? Can a being without emotion suffer? The idea that a machine would want freedom or would express dissatisfaction with its existence presupposes emotion that is currently impossible to simulate. Even if a machine could be created with the capacity to make independent decisions and to exist independently of direct intervention (Sophia is not such a machine), would this machine legitimately deserve rights if it does not possess the capacity to want rights?
The reaction to the Sophia citizenship ploy is, in part, colored by a dystopian view of the future in which mechanical beings might be made into slaves, and, thanks to the ELIZA effect, many have proven willing to anthropomorphize Sophia and to express concern for the machine’s well-being. On a deeper level, the reaction to Sophia’s illegitimate citizenship reflects the fact that humanity has not yet reached a stage in which all people can enjoy rights and freedom. Until this happens, it is likely that there will be skepticism about the idea of extending those freedoms to machines.

CONCLUSION

Over the course of the 2010s, robotics and artificial intelligence reached a new level of complexity that raised interesting new questions, especially in the realm of robot-human interactions. It may be that, in the future, robots will play a more important role in the service industry. Robots with advanced conversational and information retrieval abilities may perform well in reception and basic customer service roles. Even without the creation of humanistic robots, disembodied chatbot programs are likely to become more common in many industries. The addition of physical properties, like the ability to make facial expressions, may become a familiar part of this industry, or it may remain an unusual oddity. Questions about sentience are less pressing, as science has not yet made the key discoveries that might enable artificial sentience, if such discoveries are even possible.

DISCUSSION QUESTIONS

• Are chatbots like Sophia appropriate as customer service agents? Why or why not?

• Is it important for machines to have clearly delineated rights? Why or why not?

• Can Sophia actually be “feminist” if the machine cannot have any feelings about the treatment of women in the world’s male-dominated societies?

• Why might someone choose to destroy a hitchhiking robot?

Works Used

Dave, Paresh. “Hitchhiking Robot That Made It across Canada Maimed on U.S. Road Trip.” The Los Angeles Times. 3 Aug. 2015, www.latimes.com/business/technology/la-fi-tn-hitchbot-destroyed-20150803-story.html.


Gabbatt, Adam. “Hitchbot’s Decapitators Avoided Capture by the Hitchhiking Android’s Cameras.” The Guardian. 3 Aug. 2015, www.theguardian.com/technology/2015/aug/03/hitchbot-hitchhiking-robot-destroyed-philadelphia.

Healy, Kieran. “America Is a Violent Country.” The Washington Post. 8 Nov. 2018, www.washingtonpost.com/news/monkey-cage/wp/2017/10/03/america-is-a-violent-country/.

Madrigal, Alexis C. “Meet the Cute, Wellies-Wearing Wikipedia-Reading Robot That’s Going to Hitchhike across Canada.” The Atlantic. 12 June 2014, www.theatlantic.com/technology/archive/2014/06/meet-the-cute-wellies-wearing-robot-thats-going-to-hitchhike-across-canada/372677/.

Reynolds, Emily. “The Agony of Sophia, the World’s First Robot Citizen Condemned to a Lifeless Career in Marketing.” Wired. Condé Nast. 1 June 2018, www.wired.co.uk/article/sophia-robot-citizen-womens-rights-detriot-become-human-hanson-robotics.

Stone, Zara. “Everything You Need to Know about Sophia, the World’s First Robot Citizen.” Forbes. 7 Nov. 2017, www.forbes.com/sites/zarastone/2017/11/07/everything-you-need-to-know-about-sophia-the-worlds-first-robot-citizen/#1aec7a2846fa.

Weaver, John Frank. “What Exactly Does It Mean to Give a Robot Citizenship?” Slate. 6 Nov. 2017, slate.com/technology/2017/11/what-rights-does-a-robot-get-with-citizenship.html.

Zeller, Frauke, and David Harris Smith. “The Hitchbot’s Guide to Travelling across a Continent.” The Conversation. 26 Sept. 2014, theconversation.com/the-hitchbots-guide-to-travelling-across-a-continent-31920.


23. BIOROBOTICS

The Use of Robotics to Repair and Enhance the Human Body (2017–Present)

Introduction

The field of biorobotics is broad. It encompasses the ways engineers use robotics and related technologies to alter the function of the body, as well as robotic systems designed to imitate living systems, from biological cells to animals. Biorobotics could prove the most transformative application of robotics, offering the ability to repair and enhance the function of biological systems, not only in human bodies but potentially in other biological processes, such as the delivery of vaccines.

Topics Covered in this Chapter Include:

• Robotic medicine
• Cybernetics
• Robotic limb replacement
• Bio cellular architecture

This Chapter Discusses the Following Source Document: Webster-Wood, Victoria, “Biohybrid Robots Built from Living Tissue Start to Take Shape,” The Conversation, August 9, 2016


There are few subfields of robotics more potentially transformative than biorobotics. This broad area of research encompasses efforts to integrate robotic systems into biological organisms, which may well prove one of the most promising avenues for medical advancement. Imagine a future in which artificial limbs or organs could replace their biological counterparts, restoring or even enhancing the function of the human body. Such technology may one day eliminate physical disabilities and provide a solution for individuals in need of organ and tissue replacements, avoiding the difficult and often dangerous process of human organ transplantation. Cybernetics—the science of automatic control systems, both artificial and biological—is one of the most important fields in the broader science of robotics, and research in this field promises a future in which humanity achieves the power to remake itself. Beyond this application, scientists are using a combination of artificial and biological components to create robotic systems that imitate and capture the abilities of living creatures. The applications for this avenue of research are varied, but at present, experiments in biorobotics are providing biomechanics researchers with a wealth of information on the living world that was previously unavailable.

Diagram of a cybernetic system with feedback loop, by Baango, via Wikimedia.
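The feedback loop pictured above, in which a system continuously compares a sensed value against a goal and feeds corrective action back into itself, can be sketched in a few lines of code. The sketch below is purely illustrative and is not taken from any real cybernetic or prosthetic device; the function name, gain, and set point are invented for demonstration.

```python
# Toy model of a cybernetic feedback loop: at each step, the system measures
# the gap between its current state and a desired set point (the "error"),
# then applies a proportional correction that feeds back into the next cycle.
# This is the basic self-regulating pattern of homeostasis, as in a thermostat.

def feedback_loop(set_point, reading, gain=0.5, steps=20):
    """Drive `reading` toward `set_point` with simple proportional control."""
    history = []
    for _ in range(steps):
        error = set_point - reading   # sense: compare state with the goal
        reading += gain * error       # act: corrective signal fed back in
        history.append(reading)
    return history

# Starting at 30.0 with a goal of 37.0, each cycle halves the remaining error,
# so the state converges toward the set point over successive iterations.
trace = feedback_loop(set_point=37.0, reading=30.0)
```

The essential point is the closed loop itself: the output of each correction becomes the input to the next measurement, which is the "self-regulatory control function" that Clynes and Kline had in mind for the cyborg.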


Biorobotics | 333

The Idea of a Synthesis

In 1960, Austrian-born inventor and scientist Manfred Edward Clynes and American psychologist Nathan Kline commented in an article for an issue of Astronautics: “For the exogenously extended organizational complex functioning as an integrated homeostatic system unconsciously, we propose the term ‘Cyborg.’ The Cyborg deliberately incorporates exogenous components extending the self-regulatory control function of the organism in order to adapt it to new environments.”1

To create this now universal term, Clynes and Kline combined the name of the field of cybernetics, which was just emerging in 1960 as a separate field of study, with the word organism. Their complex description can be reduced to a relatively simple concept: the idea of merging organic structures and mechanical structures to enhance the capabilities of the organism.

Writing in the Atlantic in 2010, technology reporter Tim Maly defined a cyborg as “an entity that integrates biological systems with automated enhancements that allow the entity to adapt to environments in which they otherwise would not flourish.” As Maly explains in his article, humanity already uses technology to adapt to environments beyond its physical limitations. Envisioning a party for the 50th anniversary of the term “cyborg,” Maly imagines who might be invited: “Who gets to come? The Six Million Dollar Man and Woman are cyborgs for sure. Probably Robocop and the Terminator too. Are Blade Runner’s Replicants eligible? Does Iron Man get an invite? What about a person with a pacemaker? What about cochlear implants? What about hearing aids? What about glasses?”2

Maly raises an interesting point. Humanity has long used technology, in its strictest definition, to enhance or fix the human body. Simple aids include glasses, whereas a more complex example would be a pacemaker, which replaces the electrical signals that stimulate the function of the heart.
With these two devices it is clear that existing technological enhancement of the human body ranges from devices that augment a single aspect or function to those that can save a human life. Another example is joint replacement surgery, which gives



Common types of prosthesis, by Laboratoires Servier—Smart Servier, via Wikimedia.

individuals with serious mobility problems artificial skeletal systems not unlike the fantastic internal mechanical components in science fiction treatments of cyborgs.

When they envisioned the cyborg, Clynes and Kline were imagining the kind of technological advancements that might make humans better able to explore space. It was 1960, and space fever was gripping the populace for the first time. As discussed earlier in this book, humans are not suited for space exploration. Human bodies are fragile and easily damaged, and human senses are faulty, unable to collect data in a way that can ensure accuracy or reliability. Science is, in fact, a system designed to make up for the limitations of the human body. By utilizing a method of gathering and evaluating data, researchers attempt to avoid innate biases and shortcomings. The idea of integrating other kinds of technology into bodies also arises from a scientific understanding of the body and mind, and Clynes and Kline recognized that, although human bodies were not the best tools for exploring space, it might be possible to technologically enhance bodies and minds for this purpose.

In the years since Clynes and Kline coined the term, there have been many fictional treatments of cyborgs. In a 1966 episode of the television series Star Trek entitled “What Are Little Girls Made Of?,” the USS Enterprise goes out in search of a lost human scientist, Dr.


Roger Korby, and finds that Korby and his company crashed on the icy planet Exo III and were gravely injured, with only Korby surviving. He was rescued by a robot named Ruk, who is all that is left of a race of machines created by long-extinct individuals known only as “The Old Ones.” Eventually, it is revealed that Korby is not the human Korby at all, but a machine with Korby’s consciousness inserted into it. The cyborg wants to return to Earth but fears facing prejudice. By the end of the episode, the Enterprise’s Captain Kirk makes him see that he is no longer Korby, but a machine whose personality has been as “mechanized” as his body.3

This episode reflects one of the ways that some people conceive of cyborgs: that the addition of machines makes a person less human. This is particularly common in stories where the individual has had mechanical enhancement of the mind, but the idea nonetheless reflects the notion that unnatural tampering with the human body reduces an individual’s perceived humanity. There are elements of this prejudice in many science fiction works dealing with humanity and AI, specifically when the issue of AI augmentation is considered.

Fiction has also delivered a large number of evil cyborgs over the years, including in the Star Wars universe. The antagonist of the first three films is Darth Vader, a man who was injured in a fight with his former mentor Obi-Wan Kenobi during which his arms and legs were cut off. He lived, but only because he was melded with mechanical parts. This is, according to the mythology of the Star Wars universe, apparently a bad thing. His former mentor says of Vader, “He’s more machine now than man, twisted and evil.”4 In Vader and Dr. Korby, we perceive a warning about the integration of the natural and artificial. Taken too far, the writers seem to suggest, mechanization can rob one of humanity. However, fiction also contains stories that seem to present a different conclusion.
The television series The Six Million Dollar Man and The Bionic Woman present cyborgs as heroes. In both cases, the protagonists were injured and then augmented by science, giving them extreme strength, reflexes, and speed. There is an important distinction, in that both are depicted as being machine in body but not in mind. From this comes the perception that perhaps it is only the human mind that is needed to remain human. Bodies, sometimes defined as mere vessels for the mind, are therefore seen as perhaps not required for personhood as long as the mind remains human.

In the film Robocop, a mortally injured police officer is transformed into a cyborg, this time with a mind that is at least partially mechanical. However, by the end of the film, whatever is left of the human mind in the robotic body reasserts itself, and the robotic cop makes key moral decisions that viewers are meant to believe reveal that he is still "human."

Science writer Alexis Madrigal argued, in another 2010 article on cyborgs written for the Atlantic, that fear and technophobia have created a spectrum of views on humanity's cybernetic future: "Here's the thing: For most of us, cyborg ends at the human-machine hybrid. The point of the cyborg is to be a cyborg; it's an end unto itself. But for Clynes, the interface between the organism and the technology was just a means, a way of enlarging the human experience. That knotty first definition: It ran under this section headline: 'Cyborgs—Frees Man to Explore.' The cyborg was not less human, but more."5

Artificial Bodies

Depending on how one chooses to define the term, cyborgs might already exist. Devices like joint replacements or mechanical aids like cochlear implants are a fusion of the biological and mechanical and operate by altering automatic feedback and information systems. These augmentations therefore already meet the definition of cyborg as given by the word's creators, though some of the more modern cybernetic augmentations might seem a better fit for the more "futuristic" concept.

One of the earliest cybernetic augmentations is the C-Leg, a device for individuals who have had a leg amputated above the knee that uses microprocessors and sophisticated gyroscopic sensors to adjust the individual's gait as they walk. Introduced in 1997, the C-Leg was the first artificial limb equipped with microprocessors to aid in controlling the limb. The C-Leg is a true smart prosthesis, equipped with sufficient computing power to analyze a user's gait and make adjustments in real time. This allows the C-Leg to make walking, sitting, and standing feel more natural. It was the first prosthetic that allowed individuals to ascend stairs or walk slopes naturally, and advancements since 1997 have corrected such problems as the knee joint failing to loosen when the user is sitting. Modern versions are also app-enabled, so a user can adjust settings through a smartphone.

High-tech prostheses like the C-Leg exist at the intersection of robotics and medicine. In the twenty-first century, researchers have pioneered a variety of these, including arms and hands. Like the C-Leg, the more recent generations of prostheses are able to use biofeedback, microprocessors, and "smart" learning algorithms that enable the prosthesis itself to make adjustments and to develop what are essentially settings tailored to specific users. Prostheses like these represent the potential of automation in cybernetic applications. Many of the adjustments that occur in the human body are handled automatically, meaning without conscious knowledge or volition. The limbs of a person's body do not "think" for themselves but are linked to the autonomic nervous system. Although it is not currently possible to replace a limb such that the prosthetic is integrated with the autonomic system like a biological limb, smart computing is an excellent way to address the problem. Rather than wiring the limb into the body's nervous system, the "learning computer" enables the limb itself to process information about how the individual moves. This enables the limb to adjust, mimicking some of the ways that an autonomic nervous system would automatically adjust a person's limbs and muscles in response to certain stimuli.

The use of thinking computers is a way to imitate the automatic functionality of parts of the body, but what if the goal is to augment a system so that the device is activated by conscious thought and volition? Humanity's first leap into this kind of cybernetic augmentation came around the same time that the first microprocessor limbs were introduced. In 1996, neuroscientist and physician Philip Kennedy performed the first operation integrating cybernetic technology with the human brain. Kennedy's aim was to find ways to help individuals with paralysis or other movement disorders. A special education instructor and mother known as "Marjory," who had lost mobility due to ALS (amyotrophic lateral sclerosis), volunteered to be one of Kennedy's test subjects, and Kennedy implanted electrodes into her brain that allowed her to switch a light on and off by thinking about it.
Unfortunately, Marjory died only 76 days after her operation, but Kennedy soon found another volunteer in Johnny Ray, a Vietnam veteran who had suffered a stroke that left him in a coma. When he woke, he could only move his eyelids. Ray became Kennedy's next subject, for what was a much more complex operation. Ultimately, electrodes wired into Ray's brain enabled him to move a computer cursor with his mind. The newspapers, and Kennedy, liked to refer to Ray as the world's first "cyborg," though as discussed this is not technically true. Even if pacemakers, cochlear implants, and prostheses are left out of consideration, Kennedy's first patient, Marjory, deserves this distinction.

Then, in 2004, Kennedy implanted electrodes into the brain of Erik Ramsey, who had suffered a catastrophic brain stem stroke after a car accident that left him paralyzed and unable to speak from age 16. With Ramsey, Kennedy and colleagues used electrodes to allow a computer to interpret the sounds that Ramsey was thinking of in his mind. The computer could then translate those thoughts into simple words.

Ramsey and Ray both died of complications from their respective conditions, but Kennedy's research also hit a major snag when the FDA withdrew permission for him to implant devices in any additional human patients because he could not provide more detailed data on safety precautions.6 While the technology he pioneered was ultimately meant to help the ill and disabled, Kennedy came to believe that he needed to work with healthy brains and bodies. Kennedy eventually decided to undergo the procedure himself, and in 2014 he traveled to Belize, out of reach of FDA regulations. The surgery left him with seizures, and he nearly lost control of his mind. He recovered, but his close call served as a warning of the dangers involved. Some patients and other scientists have expressed admiration for Kennedy's willingness to put his own life and mind on the line for his research, while others have called him reckless and irresponsible. The risk he took is not entirely novel. In 1984, Dr. Barry Marshall of Australia drank a beaker filled with bacteria to prove that bacteria caused stomach ulcers, and he later won a Nobel Prize for his breakthrough.7

A number of other researchers have made progress on the same front as Kennedy, and the technology may one day be available to provide individuals suffering from disorders of the brain and body that prevent speech with a way to communicate. Beyond that, the cybernetic "hacking" of the brain could have numerous other applications, not the least of which may be consumer electronics controlled entirely by thought. While it is, as of 2020, becoming commonplace for machines to respond to human speech, technophiles in the near future may simply need to think "Siri" or "Alexa" to activate their virtual assistants, or may control every electronic and technological device in their home merely by thinking. Technology like this is still distant, and arguably the industry might be better focused on trying to use cybernetics to solve serious medical issues.
But medicine, science, and industry often go hand-in-hand, and it is therefore likely that consumer applications for this kind of technology will drive the industry further than the quest to heal damaged bodies or minds.

The Biological Computer

Another way in which the field of biorobotics has advanced is through the introduction of hybrid, cybernetic organisms. Creations in this vein run from simple machines made to imitate cells or biological systems to actual hybrids that combine biological tissues with mechanical systems. One early example that received quite a bit of media attention was the "hybrots" created by Steve Potter, a neuroengineer at Georgia Tech, in the early 2000s. Potter and his research associates created an artificial microprocessor utilizing over 2,000 cells taken from the brain of an unfortunate rat. Potter and company called their creations "hybrots," short for hybrid robot. The basic goal of the research was to gather information about how nerve cells function when an organism is learning. This data, Potter believed, could be used to better design neural networks, which are artificial systems that mimic the function of nerve cells in brains.8

Another example, more in the field of artificial intelligence than robotics, is "animats," computerized simulations of creatures created by scientists to study the complexity of living organisms through evolution. Researcher Chris Adami and colleagues from Michigan State University made a breakthrough in the study of evolutionary complexity by creating a virtual world that contained virtual resources and dangers. This environment was populated with simulated simple organisms, animats, that needed to use their basic sensory and motility systems to navigate their environment. Each animat was produced with randomized instructions for how its sensors and movement systems worked, meaning that some would be better than others, thanks to random diversity, at navigating their virtual environment. At the end of each experiment, the scientists retrieved a number of the animats that had best navigated the maze and made them reproduce. Each of these animats had 10 offspring, which received the same programming as their parents, but with randomized changes meant to simulate mutations in natural populations of organisms. The surviving animats were then put through the mazes again, and the winners were again selected and copied. The scientists also equipped the maze with information that could be used to navigate it more quickly, but the initial generation of animats was not given the ability to access this information. Sixty thousand generations later, the animats had essentially evolved the ability to detect and respond to clues that led to faster navigation through the maze.
This occurred thanks to the randomized changes introduced into the animats' programming, and not because the programmers specifically granted them the ability to detect these cues. Further, researchers were able to look inside the brains of individual animats to see how connections within the brains had changed. What they found was that the connections between parts of the virtual brain had grown more complex, sometimes in unexpected ways. There were connections that indicated redundancy and even feedback mechanisms that allowed some animats to change behavior upon detecting certain cues. The result was unexpected: some animats with simpler brains were able to navigate just as well as animats with more complex brains, indicating that increasing complexity is not always the way forward in terms of survival. A simple brain, with the right set of behavioral cues, is in fact the more efficient way to survive. However, having a more complex brain provides the potential to better adapt when conditions change.9
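The selection cycle Adami's team used (score each animat, keep the best navigators, and give each survivor ten mutated offspring) is, at heart, an evolutionary algorithm. The sketch below is not the team's actual code; the bit-string "genome," the toy fitness function, and every parameter value are invented purely to illustrate the loop described above:

```python
import random

def fitness(genome, cues):
    # Toy stand-in for maze navigation: count how many environmental
    # cues this animat's randomized "wiring" happens to respond to.
    return sum(1 for gene, cue in zip(genome, cues) if gene == cue)

def evolve(generations=200, pop_size=50, genome_len=16,
           survivors=5, offspring=10, seed=1):
    rng = random.Random(seed)
    # Fixed environment: the cues hidden in the maze.
    cues = [rng.randint(0, 1) for _ in range(genome_len)]
    # The initial population gets fully randomized instructions.
    population = [[rng.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Select the best navigators of this generation...
        population.sort(key=lambda g: fitness(g, cues), reverse=True)
        parents = population[:survivors]
        # ...and copy each one ten times, flipping one random bit per
        # copy to simulate mutation in a natural population.
        population = []
        for parent in parents:
            for _ in range(offspring):
                child = parent[:]
                spot = rng.randrange(genome_len)
                child[spot] = 1 - child[spot]
                population.append(child)
    return max(fitness(g, cues) for g in population), genome_len

best_score, max_score = evolve()
```

Run over enough generations, the population drifts toward genomes that match the cues, even though no individual mutation was ever aimed at that goal; this is the same dynamic, in miniature, that let the animats "discover" the maze's navigation clues.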


Research like this has the potential to change the way that humans think about evolution and advancement. There is a tendency to be reductive, attempting to find the best, most efficient way of reaching certain goals, but the animat experiments show that this also makes a population fragile and vulnerable. Changes in conditions can quickly make certain adaptations obsolete or even detrimental. This has implications, also, for the way that humans think about societies. Diverse societies are more adaptive because of the range of skills and abilities within the society. A more homogenous society, by contrast, may be the most efficient for existing in certain conditions but is more vulnerable when conditions change.

Biorobotics has taken many different forms over the years, and research in the field is progressing at a rapid rate. Researchers conducting experiments in the hybridization of biological and mechanical properties are not only learning about biological characteristics but are also working on projects that may have practical applications in the future. In this 2016 article from The Conversation, mechanical engineer Victoria Webster-Wood discusses some of the more recent advances in the field of cyborg design:

Source Document

"BIOHYBRID ROBOTS BUILT FROM LIVING TISSUE START TO TAKE SHAPE"
by Victoria Webster-Wood
The Conversation, August 9, 2016

Think of a traditional robot and you probably imagine something made from metal and plastic. Such "nuts-and-bolts" robots are made of hard materials. As robots take on more roles beyond the lab, such rigid systems can present safety risks to the people they interact with. For example, if an industrial robot swings into a person, there is the risk of bruises or bone damage.

Researchers are increasingly looking for solutions to make robots softer or more compliant—less like rigid machines, more like animals. With traditional actuators—such as motors—this can mean using air muscles or adding springs in parallel with motors. For example, on a Whegs robot, having a spring between a motor and the wheel leg (Wheg) means that if the robot runs into something (like a person), the spring absorbs some of the energy so the person isn't hurt. The bumper on a Roomba vacuuming robot is another example; it's spring-loaded so the Roomba doesn't damage the things it bumps into.

But there's a growing area of research that's taking a different approach. By combining robotics with tissue engineering, we're starting to build robots powered by living muscle tissue or cells. These devices can be stimulated electrically or with light to make the cells contract to bend their skeletons, causing the robot to swim or crawl. The resulting biobots can move around and are soft like animals. They're safer around people and typically less harmful to the environment they work in than a traditional robot might be. And since, like animals, they need nutrients to power their muscles, not batteries, biohybrid robots tend to be lighter too.

Building a Biobot

Researchers fabricate biobots by growing living cells, usually from heart or skeletal muscle of rats or chickens, on scaffolds that are nontoxic to the cells. If the substrate is a polymer, the device created is a biohybrid robot—a hybrid between natural and human-made materials.

If you just place cells on a molded skeleton without any guidance, they wind up in random orientations. That means when researchers apply electricity to make them move, the cells' contraction forces will be applied in all directions, making the device inefficient at best. So to better harness the cells' power, researchers turn to micropatterning. We stamp or print microscale lines on the skeleton made of substances that the cells prefer to attach to. These lines guide the cells so that as they grow, they align along the printed pattern. With the cells all lined up, researchers can direct how their contraction force is applied to the substrate. So rather than just a mess of firing cells, they can all work in unison to move a leg or fin of the device.

Biohybrid Robots Inspired by Animals

Beyond a wide array of biohybrid robots, researchers have even created some completely organic robots using natural materials, like the collagen in skin, rather than polymers for the body of the device. Some can crawl or swim when stimulated by an electric field. Some take inspiration from medical tissue engineering techniques and use long rectangular arms (or cantilevers) to pull themselves forward.


Others have taken their cues from nature, creating biologically inspired biohybrids. For example, a group led by researchers at California Institute of Technology developed a biohybrid robot inspired by jellyfish. This device, which they call a medusoid, has arms arranged in a circle. Each arm is micropatterned with protein lines so that cells grow in patterns similar to the muscles in a living jellyfish. When the cells contract, the arms bend inwards, propelling the biohybrid robot forward in nutrient-rich liquid.

More recently, researchers have demonstrated how to steer their biohybrid creations. A group at Harvard used genetically modified heart cells to make a biologically inspired manta ray-shaped robot swim. The heart cells were altered to contract in response to specific frequencies of light—one side of the ray had cells that would respond to one frequency, the other side's cells responded to another. When the researchers shone light on the front of the robot, the cells there contracted and sent electrical signals to the cells further along the manta ray's body. The contraction would propagate down the robot's body, moving the device forward. The researchers could make the robot turn to the right or left by varying the frequency of the light they used. If they shone more light of the frequency the cells on one side would respond to, the contractions on that side of the manta ray would be stronger, allowing the researchers to steer the robot's movement.

Toughening Up the Biobots

While exciting developments have been made in the field of biohybrid robotics, there's still significant work to be done to get the devices out of the lab. Devices currently have limited lifespans and low force outputs, limiting their speed and ability to complete tasks. Robots made from mammalian or avian cells are very picky about their environmental conditions. For example, the ambient temperature must be near biological body temperature and the cells require regular feeding with nutrient-rich liquid. One possible remedy is to package the devices so that the muscle is protected from the external environment and constantly bathed in nutrients. Another option is to use more robust cells as actuators.

Here at Case Western Reserve University, we've recently begun to investigate this possibility by turning to the hardy marine sea slug Aplysia californica. Since A. californica lives in the intertidal region, it can experience big changes in temperature and environmental salinity over the course of a day. When the tide goes out, the sea slugs can get trapped in tide pools. As the sun beats down, water can evaporate and the temperature will rise. Conversely, in the event of rain, the saltiness of the surrounding water can decrease. When the tide eventually comes in, the sea slugs are freed from the tidal pools. Sea slugs have evolved very hardy cells to endure this changeable habitat. We've been able to use Aplysia tissue to actuate a biohybrid robot, suggesting that we can manufacture tougher biobots using these resilient tissues. The devices are large enough to carry a small payload—approximately 1.5 inches long and one inch wide.

A further challenge in developing biobots is that currently the devices lack any sort of on-board control system. Instead, engineers control them via external electrical fields or light. In order to develop completely autonomous biohybrid devices, we'll need controllers that interface directly with the muscle and provide sensory inputs to the biohybrid robot itself. One possibility is to use neurons or clusters of neurons called ganglia as organic controllers. That's another reason we're excited about using Aplysia in our lab. This sea slug has been a model system for neurobiology research for decades. A great deal is already known about the relationships between its neural system and its muscles—opening the possibility that we could use its neurons as organic controllers that could tell the robot which way to move and help it perform tasks, such as finding toxins or following a light.

While the field is still in its infancy, researchers envision many intriguing applications for biohybrid robots. For example, our tiny devices using slug tissue could be released as swarms into water supplies or the ocean to seek out toxins or leaking pipes. Due to the biocompatibility of the devices, if they break down or are eaten by wildlife, these environmental sensors theoretically wouldn't pose the same threat to the environment traditional nuts-and-bolts robots would. One day, devices could be fabricated from human cells and used for medical applications. Biobots could provide targeted drug delivery, clean up clots or serve as compliant actuatable stents. By using organic substrates rather than polymers, such stents could be used to strengthen weak blood vessels to prevent aneurysms—and over time the device would be remodeled and integrated into the body. Beyond the small-scale biohybrid robots currently being developed, ongoing research in tissue engineering, such as attempts to grow vascular systems, may open the possibility of growing large-scale robots actuated by muscle.10


CONCLUSION

The possibilities for the fusion of biological and mechanical components are seemingly endless. From capturing the amazing capabilities of living organisms in mechanical form to better understanding the processes that led to the evolution of human minds and societies, biorobotics and the study of biological intelligence provide a wealth of possibilities for future research and development. On the medical side of the equation, it is possible that biorobotics could solve some of the most difficult challenges in medical science, such as unlocking the intellectual lives of individuals who have lost movement and communications capabilities. In the more distant future, such systems might be used to animate bodies devastated by disease or injury. On the more proximate level, the integration of technology into transplants and limb replacement has greatly increased the efficacy of therapies in both fields, and scientists working in the arena say that more impactful discoveries are on the way.

DISCUSSION QUESTIONS

• Is it ethical for researchers to use themselves as test subjects? Why or why not?
• How are cyborgs portrayed in modern popular culture? Provide examples from film, television, or literature.
• Why might a person object to having a robotically enhanced body? Explain your answer.
• How could biorobotics and cybernetics help humanity study space? Would this be more or less profitable than investing in purely robotic systems for space exploration? Explain your answer.

Works Used

Dinello, Daniel. Technophobia!: Science Fiction Visions of Posthuman Technology. U of Texas P, 2005.
Eisenberg, Anne. "What's Next: Wired to the Brain of a Rat, a Robot Takes On the World." The New York Times, 15 May 2003, www.nytimes.com/2003/05/15/technology/what-s-next-wired-to-the-brain-of-a-rat-a-robot-takes-on-the-world.html.
Engber, Daniel. "The Neurologist Who Hacked His Brain—And Almost Lost His Mind." Wired, Condé Nast, 26 Jan. 2016, www.wired.com/2016/01/phil-kennedy-mind-control-computer/.
Madrigal, Alexis C. "The Man Who First Said 'Cyborg,' 50 Years Later." The Atlantic, 30 Sept. 2010, www.theatlantic.com/technology/archive/2010/09/the-man-who-first-said-cyborg-50-years-later/63821/.
Maly, Tim. "The Emancipated Cyborg." The Atlantic, 6 Oct. 2010, www.theatlantic.com/technology/archive/2010/10/the-emancipated-cyborg/63995/.
Piore, Adam. "To Study the Brain, a Doctor Puts Himself Under the Knife." MIT Technology Review, 9 Nov. 2015, www.technologyreview.com/2015/11/09/247535/to-study-the-brain-a-doctor-puts-himself-under-the-knife/.
Walmsley, J. Mind and Machine. Palgrave Macmillan, 2012.
Webster-Wood, Victoria. "Biohybrid Robots Built from Living Tissue Start to Take Shape." The Conversation, 9 Aug. 2016, theconversation.com/biohybrid-robots-built-from-living-tissue-start-to-take-shape-62759.
"What Are Little Girls Made Of?" Star Trek, www.startrek.com/database_article/what-are-little-girls-made-of.
Zimmer, Carl. "Meet the Animats." National Geographic, 2 Aug. 2013, www.nationalgeographic.com/science/phenomena/2013/08/02/meet-the-animats/.


24. ROBOTIC LIVING
Smart Homes and Robotic Household Aides (2017–Present)

Introduction

Can a robot or artificial intelligence be a significant part of a home? Steps toward this potential future have led to the introduction of the smart home: houses and buildings equipped with technology that enables users to automate or to remotely control many functions, from electricity and heating to security systems. Smart home technology is not really "smart" when compared to actual artificial intelligence systems, but it represents an expansion of the capabilities brought about by the virtual distribution of data through "cloud technology"—virtual storage and data processing systems that use the collective power of multiple computing systems. However, smart home technology will likely become a primary focus for AI research in the future, as engineers and researchers create more substantive links between robotics and home technology.

Topics Covered in this Chapter Include:

• Home automation
• Smart homes
• Domestic technology

This Chapter Discusses the Following Source Document:

Lewis-Kraus, Gideon, "Check In with the Velociraptor at the World's First Robot Hotel," Wired, March 2, 2016


Modern digital technology has brought the world close to the science fiction futures imagined by America's forward-thinking pioneers. In the 1960s, films and television portrayed domestic robots, flying cars, and all manner of other gadgets that promoters claimed would completely revolutionize life and work. The Internet was the first step toward making some of these visions a reality. Internet technology facilitated a degree of technological competition, cooperation, and innovation unlike anything before. In the 2010s, the next major step toward this retro-futurist vision began to take shape: the invention of "smart home" technology. Smart homes are homes in which owners can automate or remotely control certain functions, like heating, lighting, and security. Though it might have seemed, to many Americans, as if smart home technology emerged out of nowhere, the technology stretches back to the introduction of the first kitchen innovations.
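As a concrete, entirely invented illustration of the kind of automation a smart home performs, the sketch below models a house whose heating, lighting, and security follow a time-of-day rule. Real systems relay such commands to networked hardware through cloud services rather than a single in-memory object, and every device name and rule here is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class SmartHome:
    # Hypothetical device registry; a real smart home would send
    # these commands to networked thermostats, lights, and locks.
    devices: dict = field(default_factory=lambda: {
        "heating": "off", "lights": "off", "security": "armed"})

    def set_state(self, device, state):
        # Stands in for a remote command from a phone app or hub.
        if device not in self.devices:
            raise KeyError(f"unknown device: {device}")
        self.devices[device] = state

    def apply_schedule(self, hour):
        # A toy automation rule: warmth and light in the evening,
        # security armed overnight.
        self.set_state("heating", "on" if 17 <= hour <= 22 else "off")
        self.set_state("lights", "on" if 17 <= hour <= 23 else "off")
        self.set_state("security",
                       "armed" if (hour >= 23 or hour < 6) else "standby")

home = SmartHome()
home.apply_schedule(hour=18)  # simulate the 6 p.m. automation pass
```

The point is not the particular rules but the pattern they illustrate: clocks or sensors feed a controller, the controller applies stored preferences, and household devices change state without the owner touching them.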

The Appliance Revolution

The road to the smart home begins with appliances. Many kitchen appliances—like the refrigerator, electric range, or dishwasher—may seem so standard that it is hard to imagine a time when they did not exist. But the introduction of these tools was revolutionary. The modern refrigerator, for instance, evolved from the "icebox" of the early 1900s, literally an insulated box, typically made of wood, to hold food packed in ice or snow. Ice and snow were either collected locally, where available, or brought into cities from nearby mountains. Without the cold temperatures necessary for ice, refrigeration technology was limited.

Cutting and storing ice for refrigerating food is an ancient practice. Historians have found evidence of this in China as early as 1000 BCE. People also searched for many years for a way to make their own ice. In seventeenth-century Europe, it was discovered that dissolving saltpeter (potassium nitrate) in water created ice, and this method was used in Europe for some time.

[Photo: Women delivering ice in September 1918, during World War I. By International Film Service, US National Archives and Records Administration, via Wikimedia.]


Mechanical cooling was a major invention, one that changed the way the entire world managed its food supplies. Dr. William Cullen of Scotland is most often credited with demonstrating the basic idea of evaporative cooling, using ethyl ether in a vacuum to cool liquid in 1748. American inventor Oliver Evans designed a mechanical refrigerator utilizing evaporative cooling in 1805, but the product was never developed. Jacob Perkins, who had worked with Evans, submitted the first patent for a vapor-compression unit, and Perkins has since been remembered as the "father of refrigeration."1 Over the years, a number of other scientists and inventors came up with innovations that gradually brought refrigeration technology closer to domestic applications. Even world-famous physicist Albert Einstein contributed to this effort, creating an environmentally friendly refrigerator design in 1930 that he patented, though the technology never reached commercial development.

Historians say that it was the American love of cold beer, an oddity as most countries serve it at room temperature, that helped bring refrigeration into widespread popular use. The first commercial refrigerator was installed at a Brooklyn brewery in 1870, and the technology spread to other breweries from there. Soon after, refrigeration was adopted by the meat-packing industry, greatly extending the time that companies could store meat. By 1900, the first home refrigerators were in development and, by the middle of the century, more than 90 percent of American homes had one.2

[Photo: A 1927 General Electric refrigerator. By Magi Media, via Wikimedia.]

The electric stove was also a transformative invention that freed households from cooking by fire. In colonial America stoves were made of brick, and home cooks had to regulate temperature by adjusting the amount of material they burned in their ovens or by opening vents and doors to let out excess heat.
Cast iron stoves were introduced in 1795, followed by gas stoves in the 1830s. Electric stoves came soon after the spread of electric power. The first model was produced around 1892 in Canada, where one of the first electric stoves helped chefs prepare food for visitors to the Windsor Hotel in Ottawa. In 1946, engineer Percy Lebaron Spencer accidentally discovered that microwaves could also be used to heat food, leading to the development of microwave ovens, the first of which hit the US domestic market in 1967.3

The electric oven and stove were two of the many inventions that made the early 1900s, in the United States and Europe, feel like a brave new world to those living during this transformative time. Though many feared electricity for decades, believing that invisible radiation could cause illness and death, the gradual integration of electric technology freed American businesses and homes from the direct harvesting and use of natural materials. These inventions, which were for their time every bit as complex and futuristic-seeming as the most advanced robots might seem to Americans in the 2020s, were part of a progressive era that saw technological transformations across the world, and this revolutionized the American home in substantive ways as well.

Today's stoves and refrigerators have evolved, and much of this development has come via Internet and digital technology. At the cutting edge, there are modern refrigerators equipped with cameras and remote sensors that allow owners to monitor their food supplies, visually, from remote locations or to search the interior of their refrigerator without opening the door. Likewise, some high-end electric ovens are linked to wireless Internet networks such that homeowners can turn their ovens on and off, or adjust a timed cooking operation, from anywhere they can access a wireless network.

[Photo: A window display of Westinghouse electric ranges, ca. 1920–1950. By Theodor Horydczak, Library of Congress Prints and Photographs Division.]

The road to these high-tech gadgets was long and filled with many false starts, but the first effort to create a true home-management computer came ten years before home computers hit the market, with the debut of the unique, if impractical, ECHO IV.


Pittsburgh engineer Jim Sutherland was the man most directly responsible for inventing one of the world's first home computers, which he called the ECHO IV, for Electronic Computing Home Operator. By any standard, the ECHO was massive: the device required four six-foot × two-foot × six-foot cabinets, each weighing upwards of 800 pounds. The machine was constructed from leftover computer parts, with reclaimed materials for circuitry and power, and it took Sutherland just under a year to complete.

Now considered a remarkable example of retro-futurist engineering, the ECHO IV was linked to Sutherland's home through a system of keypads and terminals wired throughout the house. Through this wiring system, the computer, accessed from any terminal, could control many household functions. The ECHO could, for instance, control the furnace and thermostat. It could also activate and adjust the stereo and television, as well as an electric typewriter. All the household clocks and kitchen consoles were linked to it, and the ECHO could also remotely adjust the refrigerator and stove.

Media coverage of Sutherland's unusual experiment began shortly after he started work on the machine in 1966 and continued into the 1970s, by which time home computing technology was beginning to become available. Interestingly, in covering Sutherland's efforts to automate his home, many newspapers and magazines engaged in a debate over what the technological transformation of the "domestic sphere" might do to the family, or to the roles afforded at the time to women. Jim Sutherland's wife, Ruth, gave an interview to one publication on "Living with ECHO IV," in which she stated:

"At first, I thought it might really replace me! From the cartoons and jokes we see and hear about computers, isn't this the general impression that most homemakers at present would have if they suddenly found out they had a computer in their home? . . .
I’m excited NOW about a computer in my home. What a sense of accomplishment it will be to me to answer ‘I wrote a new menu and shopping list program today,’ when Jim asks the question that every man asks his wife when getting home from work, ‘And what did you do today?’ Second, I’m excited about a computer in my home to relieve me of some of the more menial tasks so that I will have time to do three things that are pure recreation to me: continue to make all of the clothes for the family, time to carry out ideas for home decorating, and to work outside with flowers and garden. . . . At this point, I cannot say or even comprehend all the things that a computer someday will be doing for our homes, but wouldn’t it be wonderful if they could provide more time for parents to spend with their children, therefore, making every home a happier one?”4


In 1969, high-end goods supplier Neiman Marcus offered in its Christmas catalog what would have been, at the time, the world's first domestic kitchen computer. The advertisement suggested that the device would make cooking a gourmet holiday meal easy:

"If she can only cook as well as Honeywell can compute. Her soufflés are supreme, her meal planning a challenge? She's what the Honeywell people had in mind when they devised our Kitchen Computer. She'll learn to program it with a cross-reference to her favorite recipes by N-M's own Helen Corbitt. Then by simply pushing a few buttons, obtain a complete menu organized around the entrée. And if she pales at reckoning her lunch tab, she can program it to balance the family checkbook."

Beyond the highly sexist overtones of this 1969 advertisement, the Honeywell Kitchen Computer was not for the general public. If the starting price of $10,600.00 was not enough to dissuade potential customers, operating the computer also required a two-week training course (included in the price) and the engineering skill to wire the machine into the home. Further, the home needed a teletype machine as well as kitchen space for a 100-pound, desk-sized device. In fact, not a single Kitchen Computer ever sold, but this was likely the point: Neiman Marcus occasionally offered joke products in its catalogs, such as one advertisement offering customers their very own Noah's Ark and another offering his-and-hers designer aircraft.5

While the Kitchen Computer may only have been a way to generate buzz about Neiman Marcus's more realistic holiday collection, Honeywell did introduce a much smaller and cheaper device that did many of the same things the Kitchen Computer promised. Inventions like these, and the ECHO IV, were examples of home computer integration decades before technology made practical domestic computing a reality.
The major leaps forward that enabled the first actual computerized domestic management came only after the spread of digital technology to most American homes. Further, it was the Internet that brought home computing into the modern sphere.

The Internet of Things

The Internet is, at its core, a computer network that allows individuals, sometimes in far-flung places, to share data and to engage in shared virtual environments. The technology that made the Internet possible was the same technology that first allowed engineers to hook computers to one another to share data. The primary challenge then was to find ways
to increase the distance across which computers could stay connected and to increase the processing power and capabilities of individual computers. Wireless connectivity was the major leap forward in this realm. Introduced in 1997, wireless networking, or Wi-Fi, as it came to be known, was a revolution in computing. Without the need to build physical networks to connect computers, networks became both larger and easier to create.6 After the technology became advanced enough to compete with wired connections in both signal strength and data-sharing capability, engineers responded by integrating wireless transmitters and receivers into commercial computers.

Handheld computer technology was the next frontier. The first telephones and handheld digital devices capable of accessing computer networks were also introduced in the 1990s, but the technology did not really catch on until the 2000s. The introduction of the iPhone in 2007, followed by similar devices from many competing companies, opened up a new world of networking. Smartphones, like computers, can be linked together into wireless networks, enabling a person using a handheld linked device to communicate with other devices linked to that same network. These were the basic pieces of the puzzle needed to automate facets of the domestic environment.

Ad for Honeywell's Kitchen Computer, via Wikimedia.


In 2020, the linked networks that many people have created in their homes are part of what has been called the "IoT," or "Internet of Things." A wide variety of smart devices can be linked together using the Internet or local wireless signals. The original idea behind the IoT was to integrate wireless connectivity and microcomputing technology into devices not traditionally seen as compatible with it. Software engineer John Romkey produced the first example of this in 1990, when he created a toaster that could be operated by a user on a linked computer. Though the technology has advanced significantly since 1990, the basic idea remains the same: creating household devices that can be operated by anyone with access to a device that can link to the home network to which these advanced appliances are connected.

The term "Internet of Things" was introduced by Kevin Ashton in 1999, and the following year, home technology company LG brought kitchen innovation full circle with the introduction of the world's first smart refrigerator. At a cost of $20,000, it offered little functionality that made it worth owning. However, the idea of the IoT endured. By the time the first IoT conference was held in Zürich, Switzerland, in 2008, a variety of devices were on the market that could be linked to home networks and controlled remotely from a smartphone or computer. Keeping with the marketing strategy of calling Internet-linked handheld computing devices "smartphones," which suggests that the phones are intelligent, each new wirelessly linked gadget came to be called a "smart device." In 2011, Nest founder Tony Fadell introduced the company's smart thermostat, allowing users to control home temperatures through digitally linked devices. Amazon followed in 2014 with the Echo, a speaker linked to the virtual assistant Alexa that could control smart devices by voice or through a computer or other smart device.
In 2020, Wired estimates that there are some 20 billion Internet-connected devices in use around the world.7 In many ways, the Internet of Things represents the culmination of the experiment Jim Sutherland started in 1966, though the technology involved is very different. Internet-capable devices now come in all styles and with various levels of functionality. For instance, smart plugs, one of the more recent additions to the catalog of Internet-enabled devices, allow any device powered by electricity to be made Internet-capable: with wireless connectivity, smart plugs can be turned on and off remotely. There are also smart bulbs that look like regular light bulbs but can be remotely turned on and off, or sometimes even dimmed.

The crowning achievement of the IoT, as of 2020, is the kind of system that unites all of a home's Internet-enabled devices under a single control setup. The Nest system, for instance, or Amazon's Echo,
enable users to integrate a number of devices through a single control scheme. These systems make it possible to lock and unlock doors, turn lights on and off or dim them, adjust heating and cooling, and control a variety of other household devices from a smartphone. The tremendous variety of devices that can be added to IoT networks makes a wide range of tasks possible, and savvy users can also create programs so that a home or office system completes the same automated activities on a schedule. A person might, for instance, set up a home network so that each morning the coffee machine automatically begins to brew the day's coffee, the thermostat adjusts the heat, and the smart refrigerator compiles a list of possible breakfast options from available ingredients.
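A scheduled routine like this is, under the hood, little more than a timed sequence of device commands sent over the home network. The Python sketch below illustrates the idea with hypothetical stand-in classes; `SmartPlug`, `Thermostat`, and `morning_routine` are invented for illustration and do not correspond to any vendor's actual API.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for real smart-home device APIs; commercial
# systems expose comparable on/off and set-point commands.
@dataclass
class SmartPlug:
    name: str
    on: bool = False

    def turn_on(self) -> str:
        self.on = True
        return f"{self.name}: on"

@dataclass
class Thermostat:
    target_f: int = 62  # overnight setback temperature

    def set_target(self, degrees: int) -> str:
        self.target_f = degrees
        return f"thermostat: {degrees}F"

def morning_routine(plug: SmartPlug, thermostat: Thermostat) -> list:
    """Run the morning steps in order and return a log of actions taken."""
    log = [plug.turn_on()]                 # start the coffee maker
    log.append(thermostat.set_target(70))  # warm the house for breakfast
    return log

coffee = SmartPlug("coffee maker")
home_thermostat = Thermostat()
print(morning_routine(coffee, home_thermostat))  # prints ['coffee maker: on', 'thermostat: 70F']
```

In a real IoT setup, a hub such as an Echo or a Nest controller would fire a sequence like this from a timer and send each command over the wireless network, rather than calling local objects.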

Not Quite a Utopia

Critics of the IoT worry that a hacker gaining access to any smart device linked to a home's system could then control security options. Above, home security system camera images on a Windows computer. By Intel Free Press, via Wikimedia.

The primary problem with the IoT is data ownership. Using smart devices, including personal computers and smartphones, often necessitates surrendering ownership or control over digital data transmitted through a home network. Critics of IoT systems argue that such systems are not effectively secured and that hackers can potentially gain access to IoT systems and data. IoT systems may give anyone who has gained access control over security, even the ability to unlock doors and to disable security systems. Further, many digital security experts have voiced concern that IoT systems may provide another way for thieves to intercept private or sensitive data that could be used for identity theft and other kinds of cybercrimes. A number of high-profile exposés have demonstrated that individuals have been able to access video and voice data inadvertently recorded by devices like Amazon's Echo. There are
also many companies manufacturing equipment that allows users to manage security through IoT systems, including both voice and video recording systems. Security experts have expressed concern that, without new security measures, data committed to devices connected to IoT systems will not be sufficiently protected from interception.8

While the IoT and other modern digital devices are marketed as smart technology, "smart" is more of a marketing buzzword than a realistic description. Actual machine intelligence is not yet available, and smart devices only provide the illusion of intelligence through automated programming; they cannot actually make decisions or evaluate data to arrive at unique ideas. Still, as of 2020, IoT-enabled systems are the closest that consumers can get to the idea of thinking household robots capable of performing nearly any task.

There are, however, a few highly experimental efforts to integrate more advanced robotic and AI technology into the domestic sphere. One interesting example came in the mid-2010s in Japan, where imaginative engineers introduced the world's first hotel staffed entirely by robotic systems. Although the automated dwelling was a hotel rather than a home, and the technology on display was an entertaining novelty rather than a legitimate avenue for current hotel development, Japan's robotic hotel provides one of the most complete visions of retro-futurism available in the modern world.

“CHECK-IN WITH THE VELOCIRAPTOR AT THE WORLD’S FIRST ROBOT HOTEL”
by Gideon Lewis-Kraus
Wired, March 2, 2016
Source Document

Japan has a national gift for holding in balance the stateliness of tradition and the marvel of novelty. So it ought to come as no surprise that on the western margin of the archipelago, on a serene bay in a remote area of the Nagasaki Prefecture, there is an enormous theme park dedicated to the splendors of imperial Holland. It follows with perfect logic that the historical theme park’s newest lodging place is the world’s first hotel staffed by robots.


The hotel, even before it opened last summer, had received extensive coverage in the international and domestic press for its promise of novel ease and convenience. But when I arrived at the Huis Ten Bosch theme park very late one humid summer night, just days after the fanfare of the robot hotel’s ribbon-cutting, nobody was quite sure where it might be found. Even the employees of the resort’s Hotel Okura, a towering replica of Amsterdam’s Centraal Station replete with stone
reliefs and mansard roofs, discovered themselves unable to come to my aid. In rudimentary Japanese I asked where one might find the Henn-na Hotel. The name is an untranslatable double entendre: The literal meaning is “Strange Hotel,” but it’s very close to the word for “evolve”; it’s designed to acknowledge the slight uncanniness that might attend the coming hospitality singularity. The dual meaning, however, seemed lost on the Okura’s concierge, whose rigorous training hadn’t prepared her to counsel swarthy, disreputable-seeming, late-arriving foreigners in search of evolved accommodation. I did a little Kraftwerk automation dance to clear things up, but it only seemed to alarm her further. She bowed and looked at her feet, then busied herself at her drawer, eventually withdrawing a map of the park and its environs; at almost 400 acres, Huis Ten Bosch is nearly the size of Monaco. Her pen hovered over the park’s periphery, at the verge of the bosky hills that surround mock Holland. “Maybe,” she said, “it is here.” Her pen landed on one of the map’s empty green spaces.

“Arigatou gozaimasu,” I said, bowing with great weariness. My journey, taxi-flight-walk-flight-taxi-train-train, had taken 24 hours, and I looked forward to interacting with no more humans en route to a dreamless sleep. But she wasn’t finished. “Maybe,” she continued, “it is here.” Her pen landed on another unoccupied parcel of the park map. I thanked her again, bowed again. Before I could back away, she held up one finger, then marked a third place for good measure. “Maybe,” she concluded, “it is here.”

“Can I walk there?” I asked, as if she’d specified with confidence a particular there.


“I’m so sorry,” she said. “Is there a bus or taxi, then?” “I’m so sorry,” she said. The concierge had reddened considerably. She did not know where one might find the hotel; at the same time, it was a source of great shame to her to think she might disappoint a patron. The Japanese language and culture do not distinguish between a guest and a customer—even, as in this case, a customer of somebody else’s hotel—so her inability to assist felt to her like a jagged tear in the social fabric. For my entire tenure at that counter, she was going to mark, with fear and hope, an arbitrary sequence of potential strange hotel locations. I backed away, bowing, to spare her this potentially endless exercise. The Japanese desire to save face makes omnipresent the threat of looping infinite regress, and it wasn’t fair to either of us to let it continue. This initial exchange was depleting but in an instructive and relevant way. One of the many wonderful contradictions of Japan is that it hosts both the world’s most mature service industry and the world’s most advanced androidal technology. It was, in fact, with that contradiction in mind that I’d set out to visit our service future, to see how nostalgic I’d be, in the face of their summary deletion, for human beings. The Huis Ten Bosch parking lot was oceanically vast and empty, filled only with air that wadded like cotton. Even the park entrance’s show-tune Muzak was rendered sluggish by the humidity, reaching the parking lot in muted waves. Beneath the Muzak was the screeching static of cicadas, and underneath that absolute silence. At the top of a slight rise I came around a bend to find a robot sentry standing watch. The robot was at least 12 feet tall, with a waspish exoskeleton,
plated in porcelain white, and a tiny head, as if it had compensated for microcephaly with circuitry of supercharged bulk. In the middle distance, high over the adjacent, empty theme park, hung a huge white Ferris wheel, which sat unmoving to a Muzak interpretation of the theme to 2001. From the outside, the hotel was white and boxy and modular, an interlocking assemblage that had obviously been clicked together onsite. The glass doors slid open, revealing a small anteroom lined with rows and rows of bone-colored orchids. The light background music would have made an excellent soundtrack for an anxiety dream set in a bouncy-castle ball pit. A second set of sliding glass doors parted and a large furry creature, like an overgrown tooth lined in pink felt, cocked its head and welcomed me, first in Japanese and then in English, to the robot hotel. At the first check-in counter was a small toy robot with a friendly face and sparkling pupils. One counter over stood a young female android in a buttoned white tunic, with a round white bellhop cap over a side-parted cascade of black hair and a scarf tied around her neck. A small sign in front of her said that she spoke only Japanese. She bent her head silently forward and smiled. The third lane advised that here English was spoken. Behind the desk, wearing a periwinkle bow tie, a bellhop cap, and a neckbeard, was a human-scale velociraptor. It reared back and lifted its talons as if preparing for a hug, and its jaws twitched and stuttered. The female android glanced over and blinked slowly. I approached the floor mat, where two velociraptor prints indicated where one was supposed to stand for check-in. The velociraptor bowed deeply. “Welcome to the Henn-na Hotel,” it said in an understated clinical growl.


“Thank you for having me,” I began, but the velociraptor cut me off. “If you want to check in, please press 1,” it said, before continuing on incomprehensibly. On the counter in front of it was a little stack of analog registration forms. “Please say your name in full,” the velociraptor said. The robot voice, whose only concession to human or dinosaur speech was a throatily serrated edge, came from a speaker somewhere below and behind the counter. The velociraptor requested that I check in at a touchpanel. I entered my name, inserted a credit card, and got a receipt for a room in the B wing. The screen asked me to direct my attention to the facial-recognition tower in front of me, for keyless entry, “while the machine authenticates your face.” The velociraptor idly flexed its talons. “Check-in is all finished,” the velociraptor said. “Enjoy your comf-fordable stay.” The velociraptor bowed very slowly and deeply, the sort of bow one intermittently practices on the off chance one might one day encounter the emperor. At some point in the past 50 years, the status of robots underwent a shift in the popular imagination: They evolved from something that might save us—from tasks deemed dirty, dangerous, or repetitive—to something from which we might have to be saved. Dirty and dangerous are not all that controversial; nobody of sound mind longs for the days when humans regularly fell into vats of boiling steel. It’s repetitive that presents the real problem, for what the march of the robots has shown is just how wide a swath of human activity is so repetitive as to be plausibly automated. The fear of repetitiveness is not, of course, a fear of robots
themselves. It’s the fear that there’s just nothing all that special about people. We’re not afraid of the encroachment of machines; we’re afraid of the increasingly unignorable fact that we ourselves are not, and have never been, particularly interesting. It has not been until the past decade or so, when robots have eaten their way up the class hierarchy, that we’ve been so stunned to be divested of the talents that once seemed inalienable. It was one thing when robots replaced blue-collar workers on assembly lines, because nobody ever defined humanity as the beings that attached one small part to another small part a hundred times an hour. It was quite another when the robots expanded into, say, medical diagnosis, because expertise in radiology seemed like something closer to our hearts. For that you had to be expensively educated. But we’ll get used to cases like these before long. The benefits (in which we’re all autoscanned by subway turnstiles for metastatic tumors) far outweigh the costs (in which radiologists no longer drive Ferraris). The robot hotel, and the service industry in general, seemed to me to be the perfect place to evaluate the possible dispensability of humans. It’s neither assembly-line work nor radiology. The service industry is of paramount relevance because the most difficult thing for us to relinquish is the fantasy that human beings should ultimately be defined as “the things that make other human beings feel good.” If a robot can successfully provide companionship, what’s left for humans? It’s no longer even a question of what humans might do. It’s a question of who humans might be. Nowhere has this question become more urgent than in Japan. On the one hand, Japan is the kind of place where you might buy one macaroon and recline on a corn-husk pillow as the macaroon is placed in a protective sapphire box, then wrapped in the finest antique silk, and finally delivered
unto your outstretched hand with a personalized haiku and a three-minute standing ovation. On the other, Japan has for decades been at the vanguard of service-industry robotics research, developing, employing, and exporting androidal assistants for health care, eldercare, sexual relations, and even simple company. In American airports, we’re just starting to get accustomed to the supersession of chirpy waitstaff by greasy iPads. The story of how the world’s first all-robot hotel appeared at the gates of a Dutch-themed amusement park in the Nagasaki Prefecture begins in the 1980s. As the cult ruin-porn blog Spike Japan put it, “Monumental in its conception, extravagant in its execution, and epic in its failure, Huis Ten Bosch is the greatest by far of all of the progeny of Japan’s bubble-era dreams.” Over a series of amiable, cautious conversations with my human editor, much of the fascinating and bizarre history of the park—without exaggeration one of the most peculiar and alienating places I have ever been—has been cut from this article. Then again, it’s likely a robot editor of the future— seeing little in the way of a clear, unambiguous argument in this piece as to how we all ought to feel about and/or prepare for the closing acts of humanity’s distinction—would have seen fit to cut the entire thing. Nevertheless, here are the high points. The park, three times the size of Tokyo Disneyland, cost an estimated $2.25 billion to build and was bankrolled in large part by one of the biggest Japanese financial institutions of the time. (It did not, however, have the distinction of being the institution’s biggest error: the lending of $2 billion to a woman who claimed to be able to read the mind of a fortune-telling ceramic toad statue.) The planners excavated miles of canals; they planted almost half a million trees and nearly as many flowers. The buildings, all executed in a
careful facsimile of Northern Renaissance style, were constructed of red brick, with tiled roofs, crow-stepped gables, and curvaceous pediments. The park opened its doors in 1992, after the slide of the Nikkei, in the beginning of the so-called lost two decades. Just as Holland’s golden age had been destabilized by the tulip crash of 1637, Huis Ten Bosch—named for a Dutch royal palace—greeted its first visitors in the wake of the real-estate asset bubble, and even amid its opening hype it never achieved anything close to the projected 5 million visitors a year. The idea of a theme park in far western Japan devoted to the history and culture of Holland is not quite as preposterous as it seems. In the 1630s, the shogunate in Tokyo, for fear of corrupt foreign influence, closed the country’s borders to the outside. For the next 200 years, Japan’s only relationships with the rest of Asia were with Chinese and Korean traders, and its only link to the larger world was with the small Dutch garrison of Dejima, a fan-shaped island in Nagasaki Bay built on reclaimed land. (In the contemporary West, the island is perhaps best known as the claustrophobic setting for David Mitchell’s novel The Thousand Autumns of Jacob de Zoet.) Over those 200 years, the connection to Holland was of great importance to the isolated Japanese, particularly when it came to science and technology. The Dutch smuggled in knowledge of the Copernican Revolution, the secrets of anatomy, and the first stirrings of electricity. At the height of Japanese postwar wealth and confidence, then, the idea took root to re-create a Dutch village to scale. It would be equal parts homage to the origins of Japanese technology—which was, in the 1980s, at the world’s forefront—
and a clear indication that Japan’s research and dedication had far surpassed its initial inheritance. The park never lived up to its grandiose expectations, though it did host Michael Jackson on two occasions. In 2010 it was taken over by a budget-travel company called HIS, and in 2011 the company reported that the park had its first profitable year. The essence is still nominally Dutch—with omnipresent wood-shingled windmills, paint-your-own clogs, and dozens of cheese shops—but the most recent incarnation of the venue has extended the park’s foreign flavors to favoring a placid pan-European menagerie, mostly in the form of Italian restaurants. But naturally, that pan-European aesthetic has been extensively Japanified: The cream cheese (advertised as the number-one cheese in the park, beating out Gouda at number two and smoked Gouda at number three) is served in cubes and topped with bonito flakes, and, in line with the Japanese belief that there is nothing that cannot be improved by the attachment of 11 million LEDs, the nighttime park now blinks with energy-efficient psychedelic precision. So upon first inspection, the park looks like any midsize Dutch city whose well-swept, empty expanses are desultorily populated with Asian tourists, most of them Chinese on bus tours. If the park in its current form has a post-Dutch inclination, it is toward technology. Almost all the attractions promote themselves as modules of next-generation entertainment. The park now counts among its enticements: the Horizon Adventure, in which one can watch, from seismic seating, medieval Holland destroyed by a great flood thanks to a combination of Imax-ish graphics and an LED-lit fountain show; an all-hologram K-pop ensemble; a digital aquarium in which children can draw their own
marine creatures, scan the images, and watch their own sea life float in user-generated schools across the busy screen; something called a 5-D haunted house; and a taiko videogame, like a Japanese-drumming Guitar Hero played on a 300-foot 3-D projection, of Super Bowl-halftime-show quality, on the side of an otherwise austere neoclassical palace. This is all part of the park’s futuristic bent. “It will all be experiments,” one representative told me. “We will experiment with Segways,” he continued, “and experiment with drones. It will all be new things. We will experiment with new energy and new solar power.” But perhaps the greatest experiment they have set into motion is the Henn-na Hotel and its aspiration to reinvent the sacred Japanese tradition of service. Beside the check-in desk there was a little sign indicating that the robot porters were only available until 10 pm and that they served only the A wing. The ramp down to the B wing was lined with a kind of slatted screen. I noticed that the screens were sweating profusely, even dripping onto the carpet below. I reached out to touch one; it was cold and clammy, like a panel of defrosting robot bones. The ramp led in an L shape to an exterior door, and once outside I turned again down a path toward a cluster of two buildings. That Muzak version of the 2001 theme played on repeat, limping through the torpor of the humidity. My room was at the most distant end of the farthest corridor. It seemed clear I was the only person in the entire wing. I touched the blinking lozenge that said Scan, and a blue light shone forth from a little black surveillance globe. The door clicked open. The room was quite large by the standards of Japanese hotels—you could hold your arms out and spin and not touch all the walls—and as I walked in, the lights snapped on in an orderly fashion. I was startled by a tinny little-girl’s voice, which spoke to me in Japanese
from a plastic pink robot doll at the bedside. It was mostly head, a large block that curved like an inverted molar, or perhaps a diseased turnip, with two yellow antennas emanating from the crown. On the forehead, the robot had three black hearts in a row, as though it had been branded the property of a good-natured love cult. What I had felt since my arrival at the empty robot hotel was a very slight unease, but nothing so far had had the nauseating force of the actual uncanny; everything had seemed algorithmically Lynchian, without genuine perversity or dislocation. The branded turnip-tooth, however, glowed with a faint menace. It chatted with me as I approached. There was a little drawing of how to properly massage the pink tooth-turnip’s forehead to encourage its silence. A small laminated card in front of the robot advised me that it was called Chu-ri-chan. The card, which represented the guest as a disembodied green head, suggested one call out Chu-ri-chan’s name, to which it would reply nandeshouka, or “May I help you?” “If,” the card continued, “red LED is ON of neck, please talk following sentences.” The sentences included questions about the time, the weather, the room temperature, the weather tomorrow, the prospect of turning the lights on and off, and the offer of a “morning call.” The bottom line referred either to Chu-ri-chan, who could only reply in Japanese, or to the authors of the card itself: “*We are still studying English version at this time.” I manipulated the bedside tablet for a few minutes, skipping through the options to call for help or watch television, until I found the panel for turning off the motion-detecting lights. The chair of HIS believes that one of the faults of a traditional hotel room is that “it is cumbersome to turn on and off a lot of switches,” and in the Henn-na’s pursuit of unprecedented efficiency that’s one more step it is saving its customers. In the dark, the light of the motion detector, a small coin of glowing indigo set into the center of the ceiling, filled the room as if with an odorless blue gas. The city of Nagasaki, with a long, svelte harbor that opens out to the East China Sea, has an understandably mixed relationship with technology. It was not only from here that Western scientific learning entered Japan on a slow drip, but it was also the source of Japan’s first sophisticated armaments, as well as its first modern industries—including the shipbuilding dynasty that became Mitsubishi and the mining of seabed coal from wholly industrialized islands. It’s the long-standing concentration of heavy industry that put its port on the short list for the atomic bomb. Since August 9, 1945, when the bomb killed more than a quarter of the population, the city has turned itself into a beacon of the international peace movement. Still, the population is proud of the role its technologies played in the modernization of Japan. It’s the right place for a robot hotel. If the early years of the Meiji Restoration, in late 19th-century Japan, were symbolized by the technologies of shipbuilding and coal-mining, the postwar, export-driven, economic-miracle decades of the 1950s and ’60s were represented by steel and heavy machinery, and the 1980s were about the Walkman and personal electronics, then the 1990s centered on the turn toward robotics. The aim with robotics has been manifold: Roboticized industry is the only way for Japan to keep pace with the manufacturing in such cheap-labor states as China; it stands to prop up the


steady losses in Japanese productivity through two lost decades of economic stagnation; robot babysitters might cover childcare shortages and free up women to return to the workforce; and it’s one way not only to provide for a rapidly aging population, where the median age is projected to reach 53 by 2050, but also make up for a projected population shrinkage. More than one observer has commented that the appeal of the robot workforce isn’t particularly surprising for a country historically hostile to immigration. But early work in the assistive industries has led to advancements in what has recently been called affective robotics— that is, robots with whom we might have emotional experiences. This research has led to some of Japan’s most striking consumer items, from Sony’s Aibo robot dog to Paro, the eldercare robot seal, and, just last June, SoftBank’s Pepper, an android advertised as the world’s first emotional robot. It sold out within one minute of its introduction. The check-in robot staff at the Henn-na Hotel are the work of Kokoro, a Tokyo-based subsidiary of Sanrio, the parent company of Hello Kitty. One of its robotics engineers told me that Japan has been at the vanguard of robotics because the Japanese—who tend at once to see the inanimate as lifelike (even used sewing needles possess souls) and human behavior as frequently robotic (as so many exchanges are so carefully scripted)—don’t see an inherent conflict between humanity and robots. “There’s the expectation that robots will peacefully integrate with human society. Growing up, kids see robots in manga and anime that are friendly and integrated into human groups.” Kokoro, which means “heart,” started by making small robots and robot animals—a panda, a tiger, a dinosaur—that it sold as entertainment


or to museums. It branched out from there into humanoid robots, and the Japanese woman at check-in is one from its popular Actroid series. Most of these casual Actroids are rented out as information kiosks; they’re not programmed for anything as abstract as vitality but are specifically tuned for very particular functionalities. Kokoro’s newsletter features descriptions of the various models, which include a professor robot, a “Beautiful Woman Robot,” and a dental patient robot. Some of them—including the androids the company makes for famous Osaka University robot researcher Hiroshi Ishiguro—are built expressly for the purpose of monitoring the way people interact with lifelike automata; one of the first experiments Ishiguro performed involved watching how his own daughter would react to her robot double. The idea is that the Japanese will increasingly find themselves interacting with robots, so a whole branch of human-computer study has unfolded to smooth over the exchanges they will inevitably have with, say, their robot dentist. The Actroid and the velociraptor at the Henn-na have been built and tuned for hospitality.

The Henn-na Hotel has been pitched by its development team, which includes not only HIS but a lab at the University of Tokyo and Kajima Corp., as part of the necessary effort to improve efficiency and productivity by reducing labor costs—without, ideally, sacrificing a pleasant experience. The chair of HIS, Hideo Sawada, is revered in Japan for having essentially invented budget travel, mostly through cheap packaged tours that take advantage of economies of scale. Sawada-san told me, in an interview at the company’s Tokyo headquarters, that he splits his time between his home in that city and the five-star Hotel Europa in the park. He initiated the Henn-na project in part because he believed that automation could cut hotel personnel costs to a third or even a quarter of what they are


now. The entire operation would be rationalized and streamlined. “Check-in time is much shorter with robots,” he explained to me, “because no guests are complaining or asking questions.” So in addition to the customer being relieved of the effort one might spend complaining, further benefits would redound in the form of time saved. Right now the check-in process averages five minutes, but in the future, Sawada-san hopes, they will cut it to three. And those three to five minutes will ideally be more enjoyable than the usual drudgery that attends the check-in process. “Since we are a park operator, we wanted to make it entertaining, so we made a robot dinosaur. We’re evolving the robot dinosaur to communicate better, to make a joke with the customers or wink as they leave.” There is quite a bit more that the robot dinosaur can do—though this was all he was going to say—so he hoped I would return next year. “We’re only using 40 percent of the robot’s functionality right now,” he said. “We don’t want it to be causing trouble.”

Next door to the Henn-na Hotel is the Aura macrobiotic restaurant, where vegetables are held captive in a laboratory-lit display nursery. The restaurant, like so many Japanese establishments, maintains a server-patron relationship of approximately one to one, and when a lone rivulet of coffee dripped over the side of my cup, the maître d’ bolted over with a rag to curtail its slide before it moistened my finger. The hotel itself, needless to say, lacks that obsessive human-labor approach when it comes to its guests, but for the press HIS makes available a passel of enthusiastic attachés. Mine is Takada-san, who arrived dressed in a black salaryman’s suit over a striped shirt with no tie to give me a tour after the macrobiotic breakfast. My translator, Matsuda-san, was a Nagasaki retiree I’d met through an


Internet translator board. She had brought her granddaughter to Huis Ten Bosch three years earlier and was keen for a reason to return to see the much-publicized new investments HIS had made. She’d worked intermittently as an interpreter for various artists and antinuclear activists, and she told me it would be her pleasure to assist me for free if I’d pay for her stay at the Henn-na.

The tour began at the furry greeter robot, the scaled-up version of the tooth-radish in my room. Takada-san introduced the robot as Chu-ri-chan, which Matsuda-san explained was neither a Japanification of churro nor churlish but actually the rendering of tulip. This was Little Tulip, in honor of the Dutch association. Chu-ri-chan, Takada-san conceded, did not do much. Takada-san turned to the robot cloakroom, which was designed with enough security to safeguard gold bars. This was a peculiar selling point in Japan, where you could likely leave a gold bar lying out on the counter at the Starbucks in Shinjuku Station and, hours later, return to collect it. But the point was precisely the compensatory overkill: This might not be the service you’re used to, but what it lacks in traditional comfort it makes up for in a new kind of rigor. In the future, Takada-san explained, there would be three check-in robots, but for now the small mute bot with the radiant irises was just a placeholder. “Now,” he said, “two robots are welcoming you.” The female android cocked her head and narrowed her eyes, and the velociraptor raised its talons. Takada-san took me, in slow, patient detail, through each step of the streamlined check-in process I had negotiated the night of my arrival. I asked if I might see the room behind the check-in desk, the robot backstage. Takada-san pulled


at his collar and said that it was closed. When I visited Kokoro later in the week, I understood why: The motion of its robots is powered by compressed air—internal motors would make enough sound to remind a guest of the mechanics inside—and they’re thus hooked up by hose to little colostomy refrigerators hidden backstage. As a diversion, Takada-san proceeded to walk us past the ikebana flower arrangement in the middle of the lobby—which was being pruned and restocked by a large, coordinated team of black-vested human hotel employees, mute and nearly invisible, like Bunraku puppet-theater stagehands—to the information station, an LCD touchpanel advertising park activities under the vacant gaze of a portly maid robot called Sacchan. We stood near Sacchan while one of the Bunraku stagehands vacuumed the floor around us with a long, slender, ergonomic dust-buster. Takada-san explained that the hotel had 72 rooms and only 10 human staff members. The staff members were present for emergencies, should they arise, and cleaning. In the future, the hotel hopes to replace even these with cleaning robots. “For a robot,” Takada-san said, “bed making and cleaning bathrooms would be very difficult. For now, it is humans. In the bathroom, there are wet things, and maybe the wet things would destroy the robots.” I asked about the vacuuming then, suggesting that the Roomba is perhaps the world’s most successful and widespread domestic robot. Takada-san nodded. “Yes, there are robots for floor-cleaning, popular in the West. But there might be some small dust left. If it’s your home, that’s OK, the rough cleaning of robots. But it’s a


hotel, so guests expect no dust left to be around anywhere.” Takada-san’s final presentation was of the robot porters, which looked like seatless, antiaerodynamic go-kart thrones for small haughty children. The rules, printed on the side of each porter robot, stipulated that one must follow directly behind or the robot would halt: “The robot stops in order not to get separated from you.” Furthermore, there could be placed no unstable stack of luggage or pets on the robot; there was no riding or stepping on the robot; and, finally, the hotel took no responsibility for the robot’s actions or consequences. It took three of the silent Bunraku stagehands to get the robot porter operational, but once it was ready, Takada-san punched in a room number, secured the safety chain across the cargoless hold, and set it in motion. The three of us walked very slowly behind it, our hands clasped behind our backs. Once safely under way, the porter robot began to make a noise I soon recognized as music. The screen blared a theme song whose only lyrics were, in English, “Surprise! Surprise! Surprise! Surprise! Surprise!” The theme played over short promotional films about flowers, tulips, windmills, fireworks, and K-pop holograms. We took our silent paces in the porter robot’s shadow. At a certain point, halfway down a ramp, Takada-san held up his arm for us to stop, and as soon as the porter got 5 feet from us, it paused to wait. It was hard not to read into its slightly increased whirring a muted frustration, but presumably I was just projecting. The porter went through a glass door and paused in front of the first room. On the screen a page flashed asking for evaluation of its performance; Takada-san tapped the icon for four stars. The robot porter skulked off on its own to do a U-turn


at the end of the hallway. I asked why the robot porter didn’t deserve five. He thought for a moment. “Always room for improvement.”

The Japanese concept of superior service is called omotenashi. It’s an untranslatable word, but the implication is a kind of service that’s so seamless as to be invisible—a relationship where the guest’s needs are anticipated and met as if by magic. As the robotics engineer at Kokoro put it to me, “If you have bad, superficial service, when you say you’re full, the server will still say, ‘Eat, eat.’” He mimed someone pushing food into his face as he groaned and patted his considerable belly. “That’s not good service. But omotenashi is really about having slippers there for someone when they take off their shoes. It’s recognizing the customer’s needs in that sense, not with the ‘everything with a smile’ sense. Of course they want a smile—but not in a fake way, in a subtle way.”

When I asked both the engineer and Sawada-san about how they square the Henn-na model with the traditional virtues of omotenashi, they explained that they fit together insofar as each addresses a different need. Henn-na automation will never approach the standards of, say, Tokyo’s legendary Hotel Okura, where staff members fold origami cranes and frogs and leave them on the bed as a welcome. But if that were the only model of hotel in the world, backpackers and middle-class tourists would be out of luck. The Henn-na is catering to a different market, and, Sawada-san told me, if the model is successful, they plan to export it. He thinks it is likely to soon become the norm at all three-star and four-star establishments. At a five-star hotel, like the Hotel Europa, the personalized touch will remain central. He used the analogy of banks. Wealthy people have private banking, while everybody else is pretty happy with


ATMs. On some level, then, the planners behind the Henn-na point to a future where the wealthy have personal interactions—not only with receptionists but with babysitters and health care providers—and everyone else has to make do with a machine. The machines will sufficiently lower expectations such that resourceful guests can be up-sold “artisanal” human interactions. In this view, the Actroid and the velociraptor are merely robotic camouflage to make radical cost cutting more acceptable. The room, for example, could be cleaned, or extra towels could be secured, for a fee. But because a velociraptor stood there and grinned while you checked yourself in, you’re theoretically inured to the bare-bones amenities provided.

The full picture, though, is a little more complicated. In the brochure, there are a few times when the planners stray from the stolid productivity line and reveal that they aspire for the robots to also be available for some sort of connection. “Enjoy conversations,” the brochure proposes, “with a humanly kind of warmth, while they work efficiently.” The idea isn’t new to the nation that invented the Aibo, Sony’s robot dog, which in the 15 years between its introduction and its recent retirement—Sony has ceased to produce replacement parts or to service the extant creatures—was hard-coded to convey emotions including curiosity, anger, and love. The dogs, researchers found, would provide diversion and solace to aging pensioners.

There’s something about this that puts a lot of us off. Sherry Turkle and other robot-human observers have made the point that a relationship with a robot pet is purely one of projection, in which one’s own emotional state is externalized and then


experienced as if it belonged to the machine. She argues that such unreciprocated exchanges can never really satisfy, but those conclusions are belied, or at least complicated, by moving scenes of Buddhist Aibo funerals. In an island nation where reticence is the norm, even mechanically mediated exchanges can have great expressive—and thus therapeutic—value.

It’s one thing to assign such an emotional errand to an Aibo, which only has to successfully ape a dog, and another to extend it to a service-industry figure. The traditional Japanese hospitality experience is much closer to staying at the home of a doting quasi-intrusive aunt than to staying at a Sheraton. But in a way—and this, after all, was the point of my anxious visit—the extreme social complications of hospitality-industry interactions make this both a perfect and a mind-boggling test case for the future of robots—one that really challenges our definition of what humans even are.

One of the best accounts of contemporary “affective robotics” is the first part of Turkle’s 2011 book, Alone Together. Turkle is a keen observer of all the ways in which robots can be useful in our emotional lives: A robot like an Aibo can lead the reticent to have feelings they might not otherwise have or can perform a kind of neediness that makes its caregiver feel nourishing, valued, and important. She also makes good arguments about how excitement over robotics can paper over public conversations that really ought to be political, as when the simple availability of technology that is good enough to surpass current levels of care for the elderly leads us to accept robotic eldercare as inevitable—rather than take on the genuinely difficult question about how we choose to allocate human resources to the weak.


But for Turkle, interactions with other people are “authentic” in a way that interactions with robots will only ever be “performative.” “I am skeptical,” she writes. “I believe that sociable technology will always disappoint because it promises what it cannot deliver. It promises friendship but can only deliver performances.” So: Robots are merely aggregations of programmed behavior, a brute series of rote roles, while humans are—what, exactly? The closest Turkle comes to a real response is to propose that human-on-human interactions are more substantive because people are difficult and you can’t turn them off. This makes sense as an argument about child development—presumably children will grow to tolerate frustration better if they can’t just throw their annoying Aibo in a closet—but it’s too pat an argument about adults, and this is something the Henn-na makes unusually clear. An interaction at a check-in desk is necessarily insincere in the sense that it’s a performance; the person behind the desk is paid to be nice to you. It’s a fake interaction. But at the same time, it’s totally unimportant that it’s fake, because one knows that the same objective—checking in— could easily be accomplished with a touchscreen. It can also be fake in a very real way; it can feel nice, when tired and on the road, to be treated with such obeisance. It’s fake, but it’s also real, and it’s in part real because it’s so successfully fake. Given the layers upon layers of this interaction, it makes no sense to try to talk about something as pure as sincerity versus performance. This felt particularly true in Japan, where prosthetic devices aren’t tainted with the idea of artificiality. Even a sedate, responsible antinuclear activist guide from the Nagasaki prefectural association whom I met with wore the sort of pupil-expanding contact lenses that bring young Japanese women one step


closer to baby-panda-hood. Not incidentally, the equally confusing opposite happens on the other end of the technological spectrum, in the sharing economy. At first blush it seems like the two phenomena are utterly unalike. One technology reduces the human factor while the other expands it. How many of us have had an Airbnb experience where a very kind host just won’t shut up and let you go to sleep? It’s just as possible to suffer from overpersonalization as it is to suffer from depersonalization. Turkle seems to expect that any interaction is made better when human interaction is maximized. It’s hard to think of anything more exhausting. There are some interactions that are just easier cold. No touchscreen would’ve spent 20 minutes guessing about a hotel’s location just to save face.

Takada-san backed away, bowing compulsively. And Matsuda-san, whose restrained but inquisitive presence had been at once a great pleasure and something of a burden—as I was only paying for her expenses, I felt as though it was my job to provide her with a diverting and edifying time—asked what we would be doing next. Frankly I just wanted to go to my room, plant Chu-ri-chan facedown in a pillow, and be alone; I had taken a 16-hour tour of the new Unesco sites the previous day, and I felt as though I could use some time interacting with mute screens.

“Do you want me to go home now?” Matsuda-san asked. I didn’t know what to say. I hadn’t the faintest idea what she wanted, and I knew that if I asked she would say she was happy for us to do anything I liked. I didn’t want to entertain her, but I also didn’t want to hurt her feelings. I settled on a version of “I won’t keep you any longer.” She nodded and departed for her home in


Nagasaki. A few hours later, she sent me an email. “I enjoyed staying at the state-of-the-art hotel,” she wrote. “I owe you a lot to have a new experience, although I am an analog human.”

Outside it was squalling, the sheets of rain flying horizontally against the glass panes. I had no more patience for the repetitive attractions of the Huis Ten Bosch park—the elaborate descriptions of which my human editor cut, gently reminding me once again that her robot replacement would likely find all of this superfluous and that her special role as a human was to remain willing to read something because it was companionable, even if at times she wished it made a stronger point. But I was just coming to the point. I sat around in the lobby a little bored. The Bunraku stagehand staff walked around doing things that might be dangerous or unsuitable for robots; they watered the plants and attended to the ikebana and dustbustered up invisible mites. Now and then someone would come and check in at the robot-overseen consoles, but they never spoke to the stagehands. At a certain point in the afternoon, the stagehands were at an obvious loss for further work, and three of them took the robot porters out to stretch their gears. The rain shattered against the windows, which reverberated from every direction with blaring choruses of “Surprise! Surprise! Surprise! Surprise!” One stagehand-porter pair went up the ramp to the second floor, another down the ramp to the first. One of the stagehands held back, but that robot porter for some reason continued on, disobeying the hotel’s first law of robotics. The whole tableau was like a scene from Blade Runner on Ambien.

The comedy of the velociraptor had faded, and I felt lonely and very far from human comfort. It


wasn’t that a robot can’t be a convincing friend because it’s only performing. It’s that, for now at least, there’s no plasticity or surprise to the performance. For the thing a human can do that a machine can’t yet is reprogram itself. As far as our robot future goes—assuming an automated superintelligence stops somewhere short of human enslavement or genocide—we’re likely to find that we greatly miss some human interactions and aren’t sad to see others go and that we will, in turn, constantly have to update our shifting sense of the border between the artificial and the authentic. What will be most important is that human interaction be equitably distributed—but that, as in many cases like this, is not a technological question but a political one. I wished I were over at the five-star Hotel Europa, where at least I could tip a bartender to talk to me.

As I sat there and watched the Bunraku stagehands walk their robot porters, a young Scottish woman came in, stopped a stagehand, and began to cry. A Japanese tourist who spoke a little English ran over to help her. She had been traveling with her Japanese friend, and her Japanese friend had stalked off in a fit, abandoning her. She had a reservation for the night but not enough money to pay for the whole room, and she had no credit card. She was desperate, she said, and it had been her dream to come stay in the robot hotel. She had only about two-thirds of what she needed to check in. She begged for a discount, but the Bunraku stagehand only gestured inertly at the velociraptor, as if to indicate that he had no more control over the situation than the mechanical dinosaur did. The velociraptor shrugged and growled. She begged everyone assembled to give her money, but after having listened for a while I found her story increasingly fishy and wasn’t moved by her plight.


She seemed to me like a lying human.

Finally, after many tears, one of the Bunraku stagehands came out from a back room with a piece of paper. He had spent 20 minutes calling around to all the hotels in the area, and he had found the one whose rooms cost exactly the cash she could afford. He had made her a reservation there and would drive her through the rain in his friend’s car. It was a tremendously touching gesture.

She was, however, unconsoled and shook her head. “But,” she sputtered at last, “I came for the robots.”9

As the above article explains, the world’s first robotic hotel may not have met every guest’s expectations, but the fact that such an experiment in high-tech customer service exists demonstrates ongoing interest in the technological advancement of the domestic realm. The IoT-enabled homes and businesses of the 2020s, and novelties like Japan’s robotic hotel, are not the realities that futurists of the 1960s envisioned, but they may represent the evolution of that vision. In still pointing toward a future in which homes are “smart” and robotic domestic servants cater to human needs, the current generation of engineers and designers is trying to realize twentieth-century visions of humanity’s technological future, and in so doing may be laying the groundwork for yet-to-be-imagined innovations.

CONCLUSION

Smart homes are neither smart nor robotic, but developments in robotics and artificial intelligence may soon allow a fusion that incorporates true machine intelligence into the smart homes of the future. The evolution of smart home technology may also point to other ways in which robotics could have practical uses in society. Imagine, for instance, robotic systems designed to replace human receptionists and other customer service professionals. These jobs involve processing requests and obtaining data for customers, roles that are certainly vulnerable to robotic and AI replacement in the near future. The integration of robotics into service roles may therefore become a new frontier in the debate over automation and the future of work.


DISCUSSION QUESTIONS

• Are smart home systems like Nest and Echo actually artificially intelligent? Why or why not?
• Will smart homes become more common in the future? Why or why not?
• What are some of the problems with using AI or robots to handle service occupations? Would such a development help or hurt society?
• What are some of the dangers of the Internet of Things? Use examples from the text.

Works Used

“A Brief History of WiFi: Then and Now.” EERO. The Download. 27 Sept. 2016, blog.eero.com/a-brief-history-of-wifi-then-and-now/.

Braman, Lisa. “Cooking Through the Ages: A Timeline of Oven Inventions.” Smithsonian. 18 Nov. 2011, www.smithsonianmag.com/arts-culture/cooking-through-the-ages-a-timeline-of-oven-inventions-380050/.

Hernandez, Daniela. “Before the iPad, There Was the Honeywell Kitchen Computer.” Wired. 22 Nov. 2012, www.wired.com/2012/11/kitchen-computer/.

Kerrigan, Saoirse. “The History Behind the 15 Kitchen Appliances That Changed the Way We Live.” Interesting Engineering. 13 Apr. 2018, interestingengineering.com/the-history-behind-the-15-kitchen-appliances-that-changed-the-way-we-live.

Lewis-Kraus, Gideon. “Check In with the Velociraptor at the World’s First Robot Hotel.” Wired. 2 Mar. 2016, www.wired.com/2016/03/robot-henn-na-hotel-japan/.

Pardes, Arielle. “The WIRED Guide to the Internet of Things.” Wired. Condé Nast. 10 Feb. 2020, www.wired.com/story/wired-guide-internet-of-things/.

Ross, Rachel. “Who Invented the Refrigerator?” Live Science. 8 Feb. 2017, www.livescience.com/57797-refrigerator-history.html.

Spicer, Dag. “The Echo IV Home Computer: 50 Years Later.” CHM. Computer History Museum. 31 May 2016, computerhistory.org/blog/the-echo-iv-home-computer-50-years-later/.

“The Security and Privacy Issues That Come with the Internet of Things.” Business Insider. 6 Jan. 2020, www.businessinsider.com/iot-security-privacy.


25. THE FUTURE OF WORK
How Robotics Could Impact the Future of the Working World (2019)

Introduction

One of the major public concerns, when it comes to robotics and AI, is the impact that these technological innovations will have on industry and the welfare of workers. This is a pressing concern, given that automation has already resulted in massive job losses in certain sectors. But the debate is frequently sidetracked by the perception that humanity faces an enemy in the robots themselves, when it is governments and business leaders who choose to automate at the expense of workers.

Topics Covered in this Chapter Include:
• Automation
• Unemployment

This Chapter Discusses the Following Source Document:
West, Darrell M., “Brookings Survey Finds 52 Percent Believe Robots Will Perform Most Human Activities in 30 Years,” Brookings, June 21, 2018


As discussed earlier in this book, the automobile industry was most responsible for bringing automation to America. In fact, the term automation is typically attributed to Ford production manager D.S. Harder. Ford was the first to introduce the assembly line, to the detriment of many industry workers, and automobile companies were also the first to bring in robots to replace human workers. Unimate, the first industrial robot, was put to work in 1959 at the General Motors die-casting plant in New Jersey. By the late 1960s, robotic arms were becoming standard in the industry, and in the 1980s the pace of automated job replacement greatly increased, as advancements in industrial computing made it possible to manage entire lines of robotic workers from a central terminal.1

According to a report from Nikkei Asian Review, for every robot installed in an automotive plant, six to seven workers lose their jobs. If this figure is correct, the thousands of robots added to automotive production lines between the 1950s and the 2020s have resulted in tens of thousands of job losses. The industry has been automated to the point that there are fewer jobs available, and those remaining pay lower wages.2 Automation is a present and continuing threat to the welfare of US workers, and as computers and robotics have advanced, this threat has only become more immediate.

The Present Threat

According to a 2019 study from Oxford Economics, more than 20 million jobs around the world may be lost to automation by 2030. The researchers estimate the current global workforce of robots at around 2.25 million and note that the number of robots in use in industry has tripled over the past 20 years. Even more alarming, the researchers found that this trend is accelerating, with the pace of job losses to automation doubling between 2010 and 2019, a span of only eight to nine years. One of the driving factors in their prediction is the decreasing cost of the equipment needed to make robots, which makes them an increasingly practical option. While corporate owners and investors might benefit, the Oxford team found that the increasing pace of automation would greatly disadvantage many Americans already at risk. As the researchers stated: “As the pace of robotics adoption quickens, policymakers will be faced with a dilemma: although robots enable growth, they exacerbate income inequality. Automation will continue to drive regional polarization in many of the world’s advanced economies, unevenly distributing the benefits and costs across the population. This trend will intensify as the impact of automation on jobs spreads from manufacturing to the services sector, making questions about how to deal with displaced workers increasingly critical.”3


The Future of Work | 371

While the threat of automation arose first in the automotive industry, other industries are now experiencing the brunt of automation pressure. Oxford researchers estimate that 40 percent of the automation pressure in the United States between 2011 and 2016 occurred in industries other than automotive manufacturing, while the automotive industry accounted for 14 percent. In contrast to the United States, between 2011 and 2016, Japan saw a 22 percent reduction in the use of robots to manufacture automobiles and only a seven percent increase in the use of automation elsewhere. Germany, likewise, saw a one percent reduction in the use of robots in automobile manufacturing and only a 27 percent increase in other industries. Oxford researchers also believe that automation will create roughly as many new jobs as it eliminates over the course of the next decade, as opportunities open up in transport, construction, maintenance, and administration. Because the economic growth generated by automation disproportionately benefits those at the upper end of the income spectrum, however, the researchers conclude that automation creates more problems than it solves for everyone else.4

Many large facilities, such as the electrical generating system above, require minimal human supervision.


A 2017 report from the McKinsey Global Institute indicated that as many as 73 million US jobs are threatened by automation and may be eliminated as early as 2030. Researchers at McKinsey found that about half of all work activities in the United States are vulnerable to automation. This could mean between 39 and 73 million job losses in the United States, while the researchers believe that only 20 million displaced workers could be “fairly easily” shifted into new occupations. This leaves 16 to 54 million workers who would need retraining to regain work opportunities. Globally, the researchers estimate that as many as 800 million workers could be displaced and as many as 375 million would need to be trained for new occupations: “Jobs most susceptible to automation are physical ones in predictable environments. Those include workers who operate machinery, prepare fast food, collect and process data, originate mortgages and do paralegal and accounting work. Jobs safest from the effects of automation involve managing people, high-level expertise and unpredictable environments. They include engineers, scientists, health care providers, educators and IT professionals, as well as gardeners, plumbers and elder care providers.”5 The cost of recovering from the loss of so many jobs will, in part, be covered by the economic growth that automation will help to bring about. However, both the McKinsey and Oxford researchers found that economic recovery is unlikely to be evenly distributed and that the poor and underemployed would suffer most.

How Do the People See the Issue?

Overall, Americans and citizens around the world have developed negative perceptions of robotics and AI, and this is a direct result of the way that automation and changing workplace realities have been handled by regional leaders and the economic elite that, in most countries, exercises an outsized influence on politics and law. For instance, in a 2018 study conducted by the Brookings Institution, researchers found that a majority of people are skeptical about the future of robotics and its impact on the human economy and workforce.



Industrial robots handling food production in a bakery. By KUKA Roboter GmbH, Bachmann, via Wikimedia.



“BROOKINGS SURVEY FINDS 52 PERCENT BELIEVE ROBOTS WILL PERFORM MOST HUMAN ACTIVITIES IN 30 YEARS”
by Darrell M. West
Brookings, June 21, 2018
Source Document

Fifty-two percent of adult internet users believe within 30 years, robots will have advanced to the point where they can perform most of the activities currently done by humans, according to a survey undertaken by researchers at the Brookings Institution. The poll also found people divided 32 to 29 percent regarding whether the U.S. government should set up a Federal Robotics Commission to regulate robot development and usage.

This survey was undertaken by researchers at the Brookings Institution through an online U.S. national poll of 2,021 adult internet users between June 4 and 6, 2018. It was overseen by Darrell M. West, vice president of Governance Studies and director of the Center for Technology Innovation at the Brookings Institution and the author of The Future of Work: Robots, AI, and Automation. Responses were weighted by gender, age, and region to match the demographics of the national internet population as estimated by the U.S. Census Bureau’s Current Population Survey.

Will Robots Take Over?

The survey asked how likely robots are to take over most human activities within the next 30 years. Nineteen percent felt this was very likely, 33 percent believed it was somewhat likely, 23 percent felt it was not very likely, and 25 percent were not sure. There were no significant differences by demographic group or region. People aged 35 to 54 were a bit more likely to believe this was very likely, compared to those living in the South or adults under the age of 35. Males and older individuals were the ones most likely to say this was not very likely to happen.



Federal Robotics Commission

Thirty-two percent believe the U.S. government should set up a Federal Robotics Commission to regulate robot development and usage, compared to 29 percent opposed and 39 percent who were unsure.

But there are some differences by age and region. Young people aged 18 to 34 years old were the most likely to want to regulate robots (39 percent), as were those living in the Northeast or West. Men, older individuals, and those living in the South were the least supportive of robot regulation.

Comfort Level and Worries about Robots

We asked several questions regarding people’s impressions of robots. Sixty-one percent said they were uncomfortable with robots, while only 16 percent felt comfortable with robots and 23 percent were unsure. Thirty-eight percent felt robots would make their lives easier in the next five years, while 17 percent felt their lives would become harder and 45 percent did not know. When asked how worried they were about robots, 61 percent said they were unworried, while 29 percent were worried and 22 percent were not sure.

When asked how common they thought robots would become over the next five years, 13 percent said very common, 32 percent said somewhat common, 26 percent felt they would not be very common, and 29 percent did not know.

Kind of Robots Wanted

We asked about the kinds of robots that would interest them. Twenty percent were interested in robots that would help them clean house, 17 percent wanted robots that would provide home security, and only 9 percent were interested in a robot that helps to care for a child or aging relative.



How Much Would You Pay for a Robot

The survey inquired how much people would pay for a robot that handles routine chores. Forty-two percent said they would pay $250 or less, 10 percent said they would pay between $251 and $500, 3 percent said they would pay between $501 and $750, 3 percent indicated they would pay between $751 and $1,000, and 3 percent were willing to pay more than $1,000. Thirty-nine percent did not provide a figure. There were some differences by age. Young people under the age of 35 were more interested in inexpensive robots under the cost of $250 (47 percent) compared to those who were 55 years or older (37 percent).6
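The weighting step described in the survey methodology can be sketched as simple post-stratification: each respondent group is scaled so its weighted share matches a population benchmark. The group shares below are invented for illustration and are not the actual Brookings or Census figures.

```python
# Minimal post-stratification sketch of the weighting step described in the
# survey methodology: respondents are reweighted so that group shares match
# population benchmarks. All shares below are invented for illustration;
# they are not the actual Brookings sample or Census figures.

sample_share = {"18-34": 0.40, "35-54": 0.35, "55+": 0.25}      # survey sample
population_share = {"18-34": 0.30, "35-54": 0.33, "55+": 0.37}  # benchmark

# Each respondent in group g gets weight population_share[g] / sample_share[g],
# so underrepresented groups count for more than 1.0 in weighted tallies.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}

for group, w in sorted(weights.items()):
    print(f"{group}: weight {w:.2f}")
```

In this invented example, over-55 respondents are underrepresented in the sample, so each of their answers would count 1.48 times in the weighted results.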

The Brookings Institution survey makes clear the level of fear that characterizes the debate on AI and robotics, but it also demonstrates that people, in general, have little reliable information about how AI and robotics will develop and will change their societies. As a result, opinions on robots and AI diverge widely. Similarly, a global study published by the World Economic Forum found that opinions vary widely with education and experience. Those with higher levels of education are more likely to feel positively about AI because they feel shielded from consequences such as job loss. Attitudes about retraining and transitioning to new careers were more positive among individuals who had overall positive perceptions of AI and robotics and who, therefore, expected a positive social or economic payoff at the end of the transitional problems that automation has brought to societies.7 Negative perceptions of robotics and automation were more pronounced in countries like the United States, where the loss of jobs due to automation has been more severe. Industries in the United States are vulnerable to automation in part because of the high spending power of corporations and policies against regulating businesses. Attitudes toward AI and robotics are, therefore, more negative in the United States than in countries where corporations have been limited in their ability to automate or have been forced to consider workers’


welfare before instituting automation policies, such as in Japan and Germany. American attitudes are similar to attitudes in China, where widespread automation has occurred without steps being taken to protect the Chinese people from its impact. Research from a variety of sources demonstrates Americans’ ambivalence about their future in an increasingly roboticized world. For instance, a 2017 Pew Research study found that 73 percent of Americans were either very or somewhat worried about machines taking over human jobs, while only 32 percent expressed some or a lot of enthusiasm about how this might change their society. Studies in the United States also indicate that familiarity with robotics and AI tends to lessen fear and negative reactions, whereas those who have little reliable information about the field tend to have the most negative impressions of how AI and robotics will impact the world.8

Pharmacological production is another example of automation in the workplace. By Flagrange, via Wikimedia.

What to Do About It

In a 2019 article for Forbes, science contributor Amy Sterling discussed the opinions of experts on how to address the number of job losses related to automation and uncovered a number of interesting ideas. For instance, Harvard University technology and public life specialist Susan Winterberg noted that workers who are unemployed due to automation typically experience a permanent 17–30 percent reduction in wages, even after finding work. This can have a dramatic impact on an entire family. Winterberg noted that studies indicate children of employees whose jobs have been automated are 15 percent more likely to repeat a grade in school. Winterberg suggests passing a law requiring companies to give mandatory early notification to employees whose jobs might be automated and forcing companies to be flexible with affected


workers as they search for new work before their jobs are eliminated. Winterberg further calls for a national “observatory” that would track data on automation so that workers could follow developments in their field. MIT professor Paul Osterman suggests that the nation’s 1,600 community colleges could be used to retrain workers, with training programs subsidized by the government or by employers who wish to institute automation. Digital or nontraditional degree or training programs might also be developed and tailored to individuals experiencing automation-related job loss, which would give institutions opportunities to partner with companies in sectors experiencing job growth that might be able to absorb some of the fallout from automation. Sterling argues: “At least one thing is clear. Doing nothing is not an option. Entire industries have evaporated from the American economy. This will only expand in years to come. Rather than fight technology, we should embrace it and prepare workers whose fields move overseas or are learned by robots. Every human deserves the opportunity to learn skills that will carry him or her into the future. The alternative effectively holds disadvantaged members of our society back, building a future enjoyed by the few at the expense of the many. Both government and employers can and must do better.”9 There is no shortage of ideas about how to protect against the pressures created by automation, but nearly all the solutions require the participation of government and of industry. The culture shock of automation is not new in America but reflects the same loss of economic mobility that occurred in the late 1800s and early 1900s, when America was transformed by the advance of industrialization. The shift from an agrarian to an industrial economy created America’s first wealthy corporations and individuals, and changes to the workforce are often managed in a way that prioritizes profit over workers’ welfare. Unless a more egalitarian way forward is taken, it is unlikely that Americans will avoid the worst of what automation could bring to American society.



CONCLUSION

Job losses due to automation remain the most controversial and widely discussed topic within the overall subject of robotics and artificial intelligence, because such losses provide a measurable way to examine how technology has negatively impacted human welfare. However, the loss of jobs to automation is not a reflection of humanity’s struggle against machines, as it is often portrayed in fiction and popular media coverage, but a struggle that reflects the injustice of the world’s class dynamics. Just as the automotive companies that first introduced automation chose increased profit over the welfare of human workers, automation is still a human-vs-human struggle, driven by wealthy individuals, corporations, and the politicians who embrace the replacement of human laborers with mechanical alternatives.

DISCUSSION QUESTIONS

• Why are Americans more afraid of automation than those living in some other countries?
• Are American politicians to blame for the suffering experienced by individuals facing replacement by machines? Why or why not?
• Is it more important to protect jobs or to embrace robotic innovation in the workplace? Explain your answer.
• Should America adopt new laws on robotics and AI to protect labor markets from disruption? Why or why not?

Works Used

Anderson, Janna, and Lee Rainie. “Artificial Intelligence and the Future of Humans.” Pew Research Center. 10 Dec. 2018, www.pewresearch.org/internet/2018/12/10/solutions-to-address-ais-anticipated-negative-impacts/.

Davidson, Paul. “Automation Could Kill 73 Million U.S. Jobs by 2030.” USA Today. 28 Nov. 2017, www.usatoday.com/story/money/2017/11/29/automation-could-kill-73-million-u-s-jobs-2030/899878001/.

“How Is Automation Perceived by the Public?” World Economic Forum. 11 Feb. 2020, www.weforum.org/agenda/2020/02/public-opinion-4ir-automation-ai.

“How Robots Change the World.” Oxford Economics. June 2019, cdn2.hubspot.net/hubfs/2240363/Report%20-%20How%20Robots%20Change%20the%20World.pdf.

Nakanishi, Toyoki. “For US Autoworkers, Robots Are the Job Killers, Not Trade.” Nikkei Asian Review. 19 June 2017, asia.nikkei.com/Economy/For-US-autoworkers-robots-are-the-job-killers-not-trade.

“Robots ‘to Replace up to 20 Million Factory Jobs’ by 2030.” BBC News. 26 June 2019, www.bbc.com/news/business-48760799.

“The Impact of Robotic Process Automation on the Automotive Industry.” Medium. 12 Mar. 2018, medium.com/@cigen_rpa/the-impact-of-robotic-process-automation-on-the-automotive-industry-6a0eabaf9ee1.

Smith, Aaron, and Janna Anderson. “AI, Robotics, and the Future of Jobs.” Pew Research Center. 6 Aug. 2014, www.pewresearch.org/internet/2014/08/06/future-of-jobs/.

Sterling, Amy. “Millions of Jobs Have Been Lost to Automation. Economists Weigh In on What to Do about It.” Forbes. 15 June 2019, www.forbes.com/sites/amysterling/2019/06/15/automatedfuture/.

West, Darrell M. “Brookings Survey Finds 52 Percent Believe Robots Will Perform Most Human Activities in 30 Years.” Brookings. 21 June 2018, www.brookings.edu/blog/techtank/2018/06/21/brookings-survey-finds-52-percent-believe-robots-will-perform-most-human-activities-in-30-years/.


26
QUANTUM COMPUTING
The Potential Future of Computing Technology (2019)

Introduction

As of 2020, quantum computing is not yet real in any practical sense. The basic idea is to power computers using the properties of subatomic particles. At the subatomic level, matter behaves in unusual ways, and it has long been believed that if humanity could harness the power of these strange transformations of matter and energy, incredible things might be achieved. In the 1980s, scientists dreamed up the idea of creating a computer powered by quantum principles. Such an invention would be more transformative than any development in computer science to this point in history and would enable human computing to transcend many of its current limits. Some consider quantum computing a physicist’s fever dream that will never come to fruition, while others believe that it is a realistic though distant prospect. In many ways, however, it may be that the effort to build a quantum computer is its own reward, as this field of research could contribute greatly to human knowledge about the quantum realm, which is, at present, familiar to most people primarily as a deus ex machina in science fiction literature and film.

Topics Covered in this Chapter Include:
• Computer science
• Quantum physics
• Superposition

This Chapter Discusses the Following Source Document:
Bernardi, Gabriella, “AI Says We Can Teleport Qubits,” Cosmos, June 25, 2019



Quantum computing could prove to be the most revolutionary development in computer history. Or it might be nothing but a series of theories and fantastic dreams that never come to practical or working fruition. At the most basic level, quantum computing involves harnessing the mysterious powers of subatomic particles, the smallest-known units of matter in the universe, to power a computer. Such a concept might seem far-fetched, but the power of the subatomic realm has already been harnessed with the creation of atomic energy and atomic weapons, proving that fundamental physical forces can be tapped both to propel science and development and to create the most destructive forces the world has yet seen. Now scientists are attempting to replace electrical circuits and digital data streams with quantum equivalents. If this proves possible, computing power will leap ahead by many orders of magnitude, opening up the possibility of creating computers with processing power beyond the wildest dreams of computer science pioneers. If this occurs, a major “if,” the technology unveiled may carry robotics and artificial intelligence into new realms as well, unlocking the power to create artificial brains that exceed even the collective power of all human minds combined. To understand this potential, a basic grasp of what quantum mechanics is and why it might be the technology of the future is necessary.

Hidden Worlds

Quantum physics is a field of research that seeks to explain the behavior of matter, based on evidence that physicists have gathered by studying how the fundamental particles of matter behave. Quantum physics begins with matter, the term used to describe anything with physical substance. Even thoughts and ideas can be considered material: when a person thinks, the brain is flooded with electrochemical signals that, as far as scientists have been able to determine, are the same as the thoughts. Even the human mind, however non-physical it may seem, is arguably physical. This is important to remember when trying to understand quantum principles, because one of the strangest things about quantum mechanics is that the behavior of matter at this scale is vastly different from what humans observe at their own scale of existence. One of the ways that scientists have learned about the universe is through a process called “reduction,” in which investigators attempt to break down complex things into their constituent parts. Imagine a house built of Lego blocks. The house is a thing on its own, as are certain parts of the house, like the floor, the walls, and so on. Each one of these things is made up of Lego blocks and so, to understand the nature of the Lego house, one might break


down the house into smaller and smaller pieces, arriving at the conclusion that the whole thing is made of Lego blocks. That is the core, smallest possible component. But the Lego blocks are also made of something. Lego blocks are plastic, which is a substance made of molecules. The properties of the molecules determine the properties of the material. Those molecules are also made of even smaller components, called atoms. Atoms are one of the most basic units of all matter, and everything in the universe (except for pure energy) is made up of atoms. Atoms, like the molecules they combine to create, have certain inherent properties that are passed on to molecules. Atoms can then be further broken down into even smaller pieces, called protons, neutrons, and electrons. This is the level of analysis that physicists call the “subatomic” world, the study of an entire level of organization and complexity beneath the atom, once considered the smallest possible piece of matter. But science has since learned that protons and neutrons can be broken down into even smaller pieces, called “quarks.” Quarks are extremely small subatomic particles that combine to create protons and neutrons. They come in six types, called flavors: “up,” “down,” “strange,” “charm,” “top,” and “bottom.” At this level there are also leptons and bosons; the bosons give rise to the physical forces of the universe, including electromagnetism and the “nuclear force” that binds protons and neutrons together in atomic nuclei.1 One particularly important subatomic particle is the Higgs boson, a particle that physicists long believed existed but whose existence was not demonstrated until 2012. This particle, sometimes called the “God particle,” is so important because it is responsible for giving other fundamental particles their mass.
Mass is a property of the physical world that gives a thing its physical scope and determines how that thing reacts to gravity and other forces. Mass is observed whenever we see or feel something and recognize that it takes up space. The Higgs boson is the thing that creates mass and so makes physical objects occupy space.2 Quantum physics is the study of this strange, subatomic world and the forces that exist among these extremely tiny particles of matter. These interactions determine how matter works, and so also determine how the universe works. Everything that is seen or experienced in the universe relies on the unseen forces that govern quantum physics. The basic ideas of quantum theory were first established in the 1920s by physics pioneers like Werner Heisenberg, Erwin Schrödinger, and Niels Bohr, all of whom now have fundamental principles of quantum behavior named in their honor. Quantum mechanics, the most basic field involving the study of quantum behavior, is not, in itself, enough to explain the behavior of the universe. Rather, quantum principles need to be combined with other theories, such


as Einstein’s famous Theory of Relativity. Together, quantum principles and relativity provide a good, though incomplete, depiction of the physical world.3 The subject of quantum physics is vast and has implications that stretch into every aspect of science and human life. But the findings of scientists working on quantum problems are also very strange and force us to question many of the things that we think we know about the universe. For instance, it appears that, at the very smallest scales, particles are not simply particles. They also behave like waves, which are not discrete physical things but patterns of energy. At the quantum level, matter seems to break down to the point that a particle might be a particle from one perspective and a wave from another. What does this mean? At the most basic level, it means that much of what we think we see or know about the universe is highly dependent on the way we study the universe.

Erwin Schrödinger posed the thought experiment now known as Schrödinger’s cat. By Nobel Foundation, via Wikimedia.

This property of the universe is demonstrated by a thought experiment known as “Schrödinger’s cat,” named after physics pioneer Erwin Schrödinger. The experiment, meant to symbolize the behavior of particles in the universe, involves a cat sealed in a box with a mechanism that may or may not have killed it. The cat does not fully become alive or dead until someone looks in the box; before that, the cat is both alive and dead, a bizarre, undetermined state of existence. What determines whether the cat is alive or dead is observation: the act of observing the cat forces the particles that make up the cat to adopt a definite state. Writing about this basic idea in NewScientist, Jon Cartwright said, “Where, when you aren’t looking at it, is a subatomic particle? A quantum physicist would probably answer: sort of all over the place. An unobserved particle is a wisp of reality, a shimmer of existence—there isn’t a good metaphor for it, because it is vague both by definition and by nature. Until you do have a peek. Then it


becomes a particle proper, it can be put into words, it is a thing with a place. That picture seems utterly absurd. Yet many, many experiments exploring the microscopic realm over the best part of a century have reinforced the conclusion that when we’re not paying attention, the world is fuzzy and undecided. Only by looking at things, observing them, measuring them, do we make them recognisably ‘real.’”4 There is another unusual feature of the universe unveiled by quantum physicists. It appears that particles can be interlinked such that they might be separated by limitless amounts of space but still remain somehow connected. This connection is called “entanglement,” and it has some bizarre consequences. Imagine, for instance, two photons that are entangled. If someone does something to one of the two photons on one side of the universe, the photon on the other side of the universe will also experience this change. You can, in effect, change one particle at one side of the universe by acting on a particle at the other. In yet another seemingly illogical discovery, it appears that particles are not even tethered to a single location but can exist in more than one place at the same time. This property has been called “superposition,” and it results in bizarre experimental results. The classic demonstration involves firing particles of light, called photons, through an experimental chamber that has two tiny slits and a target, which reacts to the impact of the photons. When a single photon is fired, it goes through both slits at once. The basic idea is that, because the photon behaves as both a particle and a wave, it “interferes” with itself, allowing it to exist in two places at once. However (remember the Schrödinger’s cat experiment), when physicists measure which slit the photon went through, the superposition disappears, and the particle appears to have gone through only one of the two slits. Again, the act of measuring “collapses” the superposition, making the particle “real.” In 2018, physicists in Japan and Israel worked to find a way to investigate this seeming paradox. What they found is that the properties of a particle are not just determined by what happens to the particle in the past or in the present, but in the future as well, suggesting that the particle exists outside the linear progression of time as it is understood. Further, the researchers found that simply deciding to look for a particle in a certain location causes the particle to appear there. Even stranger, physicists have now discovered that the particle in question might not just be in two places at once, but in many, many different places at once. It might not, in fact, exist at all without the act of observation. In other words, observation, whether in the past, present, or future, determines existence.
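The interference behavior described above can be illustrated with a toy calculation: quantum probabilities come from adding complex amplitudes before squaring, so the two slit paths can reinforce or cancel each other. This is a textbook-style sketch, not a model of any specific experiment.

```python
import cmath
import math

# Toy two-slit model: each slit contributes a complex amplitude, and the
# detection probability is abs(a1 + a2)**2 rather than abs(a1)**2 + abs(a2)**2,
# so the two paths interfere. The amplitude values are arbitrary illustrative
# choices, not measured quantities.

def detection_probability(phase_difference):
    a1 = 0.5                                     # amplitude via slit 1
    a2 = 0.5 * cmath.exp(1j * phase_difference)  # amplitude via slit 2
    return abs(a1 + a2) ** 2

print(detection_probability(0.0))      # in phase: constructive interference
print(detection_probability(math.pi))  # out of phase: destructive, near zero
# A classical mixture with no interference would give
# abs(a1)**2 + abs(a2)**2 = 0.5 regardless of phase.
```

The contrast between the quantum result (which swings between bright and dark as the phase changes) and the flat classical sum is exactly the interference pattern the two-slit experiment reveals.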


By observing something, one can not only cause the thing to happen in the present but also cause other things to have happened in the past that make the thing happen in the present, or the future.5 The findings of quantum physics experiments force humans to question their notions of time, space, and causality. Is it the act of observation that makes reality real? In some ways, the strangeness of quantum theories recalls the old adage: if a tree falls in the forest and no one is there to hear, does it make a sound? In the case of quantum physics, the thought experiment might be: if a tree falls in the forest and there is nothing that can detect its existence, does the tree exist at all? Reality, as it is experienced, is a matter of perspective, and one of the things that quantum physics tells us is that perspective might even dictate the fundamental existence of matter. However, in another sense, none of this matters. On a basic level, it does not matter if unobserved trees and hidden cats are “real,” because humans can only interact with what they can observe and measure and so will always interact with a “real” world, even if that world is actually made up of particles that both exist and do not exist at the same time.

Applying Quantum Properties to Computing

The most important thing to know about quantum computing is that it is not coming soon. There is no reason to stand in a queue waiting for Microsoft or Apple to release their first quantum machines; the technology is not even close to commercial development, and it is possible that the entire endeavor may never amount to anything practical for the commercial market. However, key breakthroughs have made the potential of quantum computing more real than ever, and tech companies have invested in earnest, hoping that, even if quantum computers are never developed for the commercial market, their investment might still result in the invention of new technologies that advance their fields.

Quantum computing was first imagined in the 1960s, at the same time that the strange powers of quantum physics were first being explored. The basic idea was to devise a computer that could process information in the same way that particles interact with one another. Modern computing is based on binary processors that perform calculations by evaluating a series of signals that are either on or off, known as ones and zeroes. Binary signals are the basic unit of computing, but what if computers could rely not on binary ones and zeroes but on quantum bits, or qubits, as they have been called? Qubits have powerful properties that make them much more useful than binary bits. A qubit can be enticed into assuming superposition, discussed earlier, in which the qubit might exist


A quantum computer based on qubits developed by IBM Zurich Lab. Photo by IBM Zurich Lab, via Wikimedia.


in multiple states at once. There is no perfect metaphor for how this works. Some explain that the qubit is both a one and a zero at the same time, but this is not quite correct; the qubit's state is better described as a weighted combination of one and zero. The important point is that the way a qubit behaves when measured is determined by probability: there is a certain probability that the qubit will behave as a one and a certain probability that it will behave as a zero. Human interaction "collapses" the state and thus determines how the qubit behaves.

When it comes to processing power, the key idea is that a single qubit can play a role in multiple calculations at the same time. Thanks to superposition, qubits are not limited to the linear rules of calculation imposed by the binary approach, so a computer with a relatively small number of qubits can perform many more calculations per instant than a comparable traditional computer.

Other quantum properties also contribute to the power of machines that use qubits instead of traditional bits. The property of entanglement means that qubits can be linked with other qubits such that the machine works with all of them as a single system. A 50-qubit machine can, in principle, represent two to the power of 50 states at once, and each additional entangled qubit doubles the size of that state space. As mentioned earlier, the property of quantum interference can be used to cancel out incorrect calculation pathways so as to direct the machine toward correct answers.6

Quantum computers exist only in the most basic, highly experimental forms, but there are also quantum algorithms, sets of instructions based on the use of qubits.
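The probabilistic behavior described above can be made concrete with a short sketch. This is an illustration only, not a real quantum simulator: it models a single qubit as a pair of probability amplitudes and "measures" it by random sampling. The function names and structure are the author's of this example, not drawn from any quantum computing library.

```python
import math
import random

# Illustrative sketch of a single qubit (not a real quantum simulator).
# A qubit's state is a pair of amplitudes (a, b) for the states |0> and |1>,
# normalized so that |a|^2 + |b|^2 = 1.

def measure(state):
    """Measurement 'collapses' the superposition: the qubit behaves as a
    zero with probability |a|^2 and as a one with probability |b|^2."""
    a, b = state
    return 0 if random.random() < abs(a) ** 2 else 1

# An equal superposition: a 50/50 chance of reading a zero or a one.
superposition = (1 / math.sqrt(2), 1 / math.sqrt(2))

counts = [0, 0]
for _ in range(10_000):
    counts[measure(superposition)] += 1
print(counts)  # roughly [5000, 5000] -- each measurement is probabilistic

# Entangled registers scale exponentially: describing n qubits requires
# 2**n amplitudes, which is why a 50-qubit machine spans such a vast state space.
print(2 ** 50)  # 1125899906842624 possible basis states
```

Repeated measurement of the same prepared state yields different outcomes, which is the probabilistic behavior the text describes; only the distribution of outcomes is fixed by the amplitudes.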
The best-known example is Grover's algorithm, a quantum search method with remarkable speed. If this algorithm were used to find a piece of information from among a collection of data with more than 100 million possible options, it could find the correct answer in only about 10,000 operations, while a classical computer would require more than 50 million operations on average to locate the same data. (The advantage comes from scaling: Grover's algorithm needs a number of operations roughly equal to the square root of the number of entries, and the square root of 100 million is 10,000.) While the potential of unlocking what is, essentially, unlimited computational power might sound impressive and desirable, scientists do not yet know how to stabilize qubits well enough to make the technology practical. Qubits misfire frequently, producing incorrect answers, and almost any environmental disturbance can interrupt the calculations involved. Researchers are working on ways to stabilize qubits and quantum computers, but, at present, the technology is appropriate only for the laboratory.7

If the technology can be refined, there are many potential applications for quantum computing, some in the field of artificial intelligence and machine learning. By freeing artificial minds from the rote calculation capacity of binary computing, some researchers hope they may be able to approach real intelligence: the ability to think about information in unpredictable ways and to arrive at novel solutions. There is as yet no roadmap for linking quantum computing to machine learning, but a calculation revolution of this nature could unlock the power to meet or surpass the innate capabilities of the human mind in ways that traditional computing may never achieve.

In a related vein, it is possible that research in AI might also help researchers unlock the potential of quantum computing. In this 2019 article from Cosmos, Gabriella Bernardi discusses possible overlaps between research into AI and the field of "deep learning" and the development of the world's first quantum computers:

"AI SAYS WE CAN TELEPORT QUBITS"
by Gabriella Bernardi
Cosmos, June 25, 2019

Source Document

Deep learning shows its worth in the world of quantum computing.

Italian researchers have shown that it is possible to teleport a quantum bit (or qubit) in what might be called a real-world situation. And they did it by letting artificial intelligence do much of the thinking.

The phenomenon of qubit transfer is not new, but this work, which was led by Enrico Prati of the Institute of Photonics and Nanotechnologies in Milan, is the first to do it in a situation where the system deviates from ideal conditions. Moreover, it is the first time that a class of machine-learning algorithms known as deep reinforcement learning has been applied to a quantum computing problem.

The findings are published in a paper in the journal Communications Physics.


One of the basic problems in quantum computing is finding a fast and reliable method to move the qubit—the basic piece of quantum information—in the machine. This piece of information is coded by a single electron that has to be moved between two positions without passing through any of the space in between.

In the so-called "adiabatic," or thermodynamic, quantum computing approach, this can be achieved by applying a specific sequence of laser pulses to a chain of an odd number of quantum dots—identical sites in which the electron can be placed. It is a purely quantum process and a solution to the problem was invented by Nikolay Vitanov of the Helsinki Institute of Physics in 1999. Given its nature, rather distant from the intuition of common sense, this solution is called a "counterintuitive" sequence.

However, the method applies only in ideal conditions, when the electron state suffers no disturbances or perturbations.

Thus, Prati and colleagues Riccardo Porotti and Dario Tamaschelli of the University of Milan and Marcello Restelli of the Milan Polytechnic, took a different approach. "We decided to test the deep learning's artificial intelligence, which has already been much talked about for having defeated the world champion at the game Go, and for more serious applications such as the recognition of breast cancer, applying it to the field of quantum computers," Prati says.

Deep learning techniques are based on artificial neural networks arranged in different layers, each of which calculates the values for the next one so that the information is processed more and more completely. Usually, a set of known answers to the problem is used to "train" the network, but when these are not known, another technique called "reinforcement learning" can be used. In this approach two neural networks are used: an "actor" has the task of finding new solutions, and a "critic" must assess the quality of these solutions. Provided a reliable way to judge the respective results can be given by the researchers, these two networks can examine the problem independently.

The researchers, then, set up this artificial intelligence method, assigning it the task of discovering alone how to control the qubit. "So, we let artificial intelligence find its own solution, without giving it preconceptions or examples," Prati says. "It found another solution that is faster than the original one, and furthermore it adapts when there are disturbances." In other words, he adds, artificial intelligence "has understood the phenomenon and generalised the result better than us."

"It is as if artificial intelligence was able to discover by itself how to teleport qubits regardless of the disturbance in place, even in cases where we do not already have any solution," he explains. "With this work we have shown that the design and control of quantum computers can benefit from the using of artificial intelligence."8


Beyond the applications to computing, harnessing the power of quantum principles may be transformative in other arenas as well. Imagine, for instance, the use of entanglement or superposition to make teleportation possible. In 1998, a team of researchers produced a paper arguing that just such a future was possible. The implications of quantum mechanics are still poorly understood, because much of what goes on at the most basic levels of matter and energy is not understood, but the world is coming closer to developing a realistic and potentially universe-changing view of these strange properties and principles.

Though it is unclear whether quantum computing will become a revolution in practical and consumer technology, it is quite possible that research in the field will lead to peripheral discoveries that change the consumer technology market far sooner. Most research into robotics and artificial intelligence has followed a path of discovery that can be traced back to the discovery of discrete particles and their applications via the binary system of energy transmission. Quantum research arises from a different lineage of discovery and investigation.

Interestingly, the term "quanta" was coined by Max Planck and Albert Einstein to describe a measurable packet of light, not a single photon or light particle but a bunch of them. In 1924, the term "quantum jump" was coined, which changed in the 1930s to "quantum leap"; both expressions were used to describe an abrupt change from one state to another, as distinct from an incremental development. The development of quantum computing and other quantum technology would literally represent a quantum leap for humankind. When or whether these technologies will begin to influence day-to-day life is another story.


CONCLUSION

Quantum computing is the most fantastic computer revolution that has not happened yet. The power of qubits and quantum computers promises an advance in computational power that would make even a small quantum computer more powerful than the supercomputers of the modern age. There is a possibility, too, that quantum computers could be integrated with standard computer technology, through cloud computing, to enhance the power and speed of consumer and commercial computer technology. The idea of utilizing the properties of subatomic particles in a technology so directly applicable to human life also raises the question of how human understanding of physics and the implications of quantum research may change. Introducing Americans and others around the world to the concepts involved in quantum mechanics may result in major shifts in perceptions not only of computers but of human life and the nature of the universe.

DISCUSSION QUESTIONS

• Should researchers continue working on creating quantum computers? Why or why not?
• How might quantum computer research advance technology even if quantum computing never becomes a consumer technology?
• Do quantum phenomena change human understanding of reality? Why or why not?
• How could quantum computing change the field of AI research?

Works Used

Ball, Philip. "Quantum Physics May Be Even Spookier Than You Think." Scientific American. 21 May 2018, www.scientificamerican.com/article/quantum-physics-may-be-even-spookier-than-you-think/.
Bernardi, Gabriella. "AI Says We Can Teleport Qubits." Cosmos. 25 June 2019, cosmosmagazine.com/technology/thanks-to-ai-we-know-we-can-teleport-qubits-in-the-real-world/.
Biever, Celeste. "Celebrations as Higgs Boson Is Finally Discovered." NewScientist. 4 July 2012, www.newscientist.com/article/dn22014-celebrations-as-higgs-boson-is-finally-discovered/.
Cartwright, Jon. "Collapse: Has Quantum Theory's Greatest Mystery Been Solved?" NewScientist. 13 July 2016, www.newscientist.com/article/mg23130820-200-collapse-has-quantum-theorys-greatest-mystery-been-solved/.
Mathas, Carolyn. "The Basics of Quantum Computing—A Tutorial." EDN. 13 Aug. 2019, www.edn.com/the-basics-of-quantum-computing-a-tutorial/.
Perlman, Merrill. "The History of Using 'Quantum' to Mean 'Really Big'." Columbia Journalism Review. 4 Aug. 2014, archives.cjr.org/language_corner/language_corner_080514.php.
Simonite, Tom. "The WIRED Guide to Quantum Computing." Wired. Condé Nast. 24 Aug. 2018, www.wired.com/story/wired-guide-to-quantum-computing/.
Webb, Richard. "Quantum Physics." NewScientist. 2020, www.newscientist.com/term/quantum-physics/.
Williams, Matt. "What Are the Parts of an Atom?" Universe Today. 15 Dec. 2015, www.universetoday.com/82128/parts-of-an-atom/.


27

THE SINGULARITY
The Transcendence of Technology (2020)

Introduction

The singularity is a philosophical concept regarding the fusion of human life and mechanical engineering, reflecting the idea that humanity may, through technological means, surpass the influence of evolution and natural selection. In essence, such a moment would represent the end of the era of human evolution. From that point forward, the evolution of life on earth would be determined by technological development, and the evolved capabilities of humanity would be surpassed either by the combination of human and machine or by the invention of machines with the capability to think like, or better than, humans. AI science may surpass human capabilities, but significant questions remain about how this transformation will be handled in popular culture.

Topics Covered in this Chapter Include:

• Philosophy of artificial intelligence
• Ray Kurzweil
• Technological adjustment
• Popular perception of robots

This Chapter Discusses the Following Source Document:

Schweisfurth, Tim, and René Chester Goduscheit, "From the Pyramids to Apollo 11—Can AI Ever Rival Human Creativity?" The Conversation, February 5, 2020



Those who spend any time reading about artificial intelligence will likely encounter references to "the Singularity." Treated with an almost mystical reverence by futurists who have perhaps gone a little overboard on AI theory, the singularity refers to the end of the "age of humanity" and the beginning of the "age of machines." This alleged transformation is often depicted as beginning with some future technological discovery that will fundamentally alter the human relationship with technology. It is often theorized, and directly stated in literature on the subject, that the transformation will be radical and that humanity's passage through this time will render future human culture virtually incomprehensible to those who lived before it. Such an imagined paradigm shift may be impossible to achieve given the degree to which human life is now documented. The singularity is, at most, a thought experiment about the role of nature and evolution in human life and a reflection on the fact that humanity has gradually begun to take control of its own evolution and will eventually use technology to enhance the capabilities of the species beyond the natural forces that have shaped humanity.

On the Vinge Fringe

Vernor Vinge is the theorist most directly responsible for introducing the idea of the Singularity into popular discourse, largely through a 1993 article published in the Whole Earth Review, "Technological Singularity." In this now much-adored essay, Vinge defined the concept of what he dubbed the Singularity:

The acceleration of technological progress has been the central feature of this century. We are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater-than-human intelligence. Science may achieve this breakthrough by several means (and this is another reason for having confidence that the event will occur):

Computers that are 'awake' and superhumanly intelligent may be developed. (To date, there has been much controversy as to whether we can create human equivalence in a machine. But if the answer is 'yes,' then there is little doubt that more intelligent beings can be constructed shortly thereafter).

Large computer networks and their associated users may 'wake up' as superhumanly intelligent entities.

Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.


Biological science may provide means to improve natural human intellect.

Vinge goes on to say:

. . . This change will be a throwing-away of all the human rules, perhaps in the blink of an eye—an exponential runaway beyond any hope of control. Developments that were thought might only happen in 'a million years' (if ever) will likely happen in the next century. It's fair to call this event a singularity ('the Singularity' for the purposes of this piece). It is a point where our old models must be discarded and a new reality rules, a point that will loom vaster and vaster over human affairs until the notion becomes a commonplace. Yet when it finally happens, it may still be a great surprise and a greater unknown.1

In the language of mathematics and physics, a singularity is a point at which a function takes an infinite value. Physicists have theorized that this is the physical state that exists at the center of a black hole, a point of infinite energy and density. In the language of science fiction, singularities, black holes, and the probably unrealistic inventions known as "wormholes" are also depicted as gateways: the universe, as known, collapses to an infinite point, and through that point a new universe is created, or at least may be glimpsed by any consciousness able to somehow travel through it. A singularity can, therefore, be pictured as an end or a beginning, or, more likely, both. The universe, some physicists believe, began with a singularity, with all the multitudinous materials that now compose the universe condensed into a point of infinite density and power before the "Big Bang" created both space and the things that fill it at once.

Singularity theorist Vernor Vinge, via Wikimedia.
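The mathematical sense of the term can be made concrete with a one-line illustration (the particular function here is this editor's example, not one given in the text): the reciprocal function has a singularity at zero, where its value grows without bound.

```latex
% A simple function with a singularity at x = 0: as x approaches 0
% from the right, f(x) = 1/x grows without bound.
\[
  f(x) = \frac{1}{x}, \qquad \lim_{x \to 0^{+}} f(x) = \infty
\]
```

No finite value can be assigned to the function at that point, which is the sense in which a singularity marks a breakdown of the usual rules.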


In the language of AI philosophers like Vinge, the singularity is both an end and a beginning. The idea is that humanity's growing technological expertise will eventually fuel a transformation so complete that humanity itself, as it currently exists, will cease to be or will no longer matter. The universe and all its riches (and problems) will then belong to this theoretically advanced fusion of humanity and machine, or technologically altered humanity, or intelligent machines, who will go on to inherit all of reality and whose fate will become the main thread in the story of existence. As Vinge wrote in 1993: "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."

The Post-Human Era Vinge’s theories on the Singularity fueled the emergence of a new philosophical field, often called “futurism.” The primary driver of this emerging philosophy was not Vinge but author and inventor Ray Kurzweil, whose books on humanity’s future and the future of human-created machines fueled interest in this subfield of philosophy. Kurzweil’s The Singularity Is Near builds on Vinge’s ideas, predicting that, by 2045, humanity will have created a super-human artificial intelligence. In Kurzweil’s mind, such an intelligence will think thoughts that no human has ever thought before or would be able to think. This is an important idea and deserves reflection. The human mind is limited by its capacity. Human minds are created through the slow process of evolution by natural selection, and so the capacity of the human mind is determined first by the kinds of minds that succeeded in past generations (success being the ability to survive and pass on one’s genes) and by the accumulation of random mutations introduced through the blending of genes during sexual reproduction. The combination of genes through reproduction for instance creates never-before-existent kinds of minds, and this process can, therefore, create minds with new and different ways of thinking. Put simply, because of the unpredictable combination of genes, a baby might be born with a unique mind that could be more “powerful” than the minds of either parent. Likewise, genetic changes that emerge from a variety of other sources can potentially result in an individual with different or new kinds of neurological pathways and connections. This, in turn, can result in unique minds that may exceed many other minds in capacity.


There is another important dimension to the evolution of human minds. Humanity learned to record collective knowledge, which means that each subsequent generation can have access to knowledge that may have originally been gained many generations prior. This impacts human evolution because the sharing and distribution of knowledge from past generations can influence which humans "succeed" by passing on their genes, and so the capabilities of the human mind reflect a complex interplay of factors that include sexual reproduction, the slow accumulation of genetic innovation, and the transmission of knowledge between generations. In terms of minds, the capability to innovate means the ability to create new ideas.

What Kurzweil and other futurists have argued is that the creation of true artificial intelligence will be an evolutionary leap far beyond what would occur through the biological innovation of sexual reproduction and genetic mutation. It is theoretically possible that such an intelligence may have mental capacity beyond the capabilities of the most evolutionarily advanced human minds. Further, technological minds will think in different ways, because the resulting minds will be purpose-built rather than the result of random biological processes. Unlike a human brain, a mechanical brain might have instant, precise access to a much larger pool of data and be able to imagine more sophisticated ideas or strategies than any human mind. What Kurzweil envisions is that the minds created by humans will make the human mind obsolete.2

As Vinge noted, the creation of this new mind might come about in a variety of ways. It might be possible for humans to use technology to improve biological minds. Many science fiction books and films have explored this concept, and there are individuals working on technologies in this vein.
The improvement of biological brains might involve the incorporation of artificial elements such as implanting computerized components into a human brain for expanded memory capacity or the capability to perform instant accurate calculations or evaluations of data. Mechanically enhanced minds might even have different sensory capabilities. This is a difficult thing to imagine, but one might begin to glimpse the possibilities by considering memory. Humans have limited memory capacity, and incomplete, distorted, and imperfect memories lead to many failures of intellect and reason. A computer-enhanced mind might ensure that a person’s recall was always accurate. In such a future, knowledge might not need to be gained through repetitive memorization and other educational methods but might, instead, be uploaded into the brain. A world in which all humans were similarly equipped would be a world without the need for school, in the traditional sense.



Evolution of the Singularity, by Myworkforwiki, via Wikimedia.

Alternatively, advanced human minds might be created through the use of chemicals, hormones, or other biological manipulation techniques to improve capacity. Such a brain would be created by technology, though not with mechanical components. Like a computer-enhanced mind, a biologically enhanced mind might have different or greater capacity. Humans endowed with such a mind might have different ideas about humanity's future and role in the universe, ideas that were impossible until the biological enhancement of the human mind was introduced.

The other avenue to the Singularity is the creation of non-human intelligence. Artificial intelligence may take many forms. It might manifest in the form of a robot being equipped with


an artificial mind, or it may appear in the form of an intelligence without a physical form or body, at least in the traditional sense. An artificial intelligence might also be biological. It is possible that the first artificial intelligence created could be made through bioengineering rather than computer engineering, with biologists able to create artificial, but still biological, brains with new and different capabilities.

There is a significant theoretical and functional difference between reaching the Singularity through enhancing human bodies and reaching it via the creation of a non-human intelligence. The difference hinges on biological concepts involving the nature of species. A species, in biological terms, is a discrete type of entity that can be distinguished from other species by an inability to interbreed. A horse and a zebra are different species not because they "look different" but because it is difficult for them to interbreed. Offspring can be created, but these offspring are typically sterile and cannot start a population of their own. In evolutionary biology, the factors that make it difficult for zebras and horses to interbreed are called "isolating factors," and it is these factors that create a species. Once a population has accrued sufficient isolating factors, that population constitutes a new species, distinct from the parent species because it can no longer interbreed.

Augmenting humanity would create continuity such that the resulting beings would still be human, though enhanced. In the language of biology, this would constitute a new human "subspecies," a population with demonstrable differences from the parent species but one still sexually compatible and a member of that broader species.
By contrast, the creation of non-human intelligence would be more akin to the creation of an entirely new species that would, theoretically, not be able to interbreed with humans (though it is also possible that the new species would be designed with the capacity to interbreed). The creation of a new species unable to mix with the parent species (humanity) is the other main path that could lead to the Singularity. Kurzweil, Vinge, and others who embrace "transhumanism"—the idea that humanity will evolve beyond biological limitations—express cautiously optimistic views of the Singularity. However, some do not view the Singularity as so benign. Many have expressed concern that artificial intelligence could expand beyond human control and create new, better versions of itself with even greater levels of intelligence. This has been called the "intelligence explosion," and some believe it could be disastrous for humanity.


A number of influential scientists, among them Stephen Hawking, have formally expressed skepticism and fear regarding the idea of the Singularity. They envision a scenario in which artificial intelligence oppresses or even destroys humanity, a future in which humans are no more significant to the more advanced artificial intelligences than a mouse or an insect is to humans. Some have therefore called for scientists working on AI to take precautions to ensure that the advancement of machine intelligence does not exceed the human capacity for control.3

Between Here and There

One important thing to note when considering the Singularity is that humanity is not now, in 2020, close to this reality, despite the warnings of techno-futurists. Intelligence is still poorly understood, and the most advanced computational systems available, built on the collective capabilities of humanity stretching back centuries, are intelligent in name only. This is because the human mind is not a computer. Though the most reductive, scientifically minded individuals might describe the human mind as a biological computer, the mind is not simply a computational tool but an emotional one. Much of what humanity values about itself goes beyond intellect into the realm of emotion and intuition. The capacity to love, to imagine, to dream, and to innovate fuels human culture as much as the ability to analyze, to calculate, and to reduce the physical world to the language of scientific discourse.

It is arguable that true intelligence cannot be created without endowing an artificial mind with the capacity to feel as well as to think, because it is the combination of feeling and thinking that constitutes consciousness or sentience. Consciousness is not a calculation, because it involves subjective, emotional reflection on one's condition, potential future, and past. As the scientific and practical understanding of human psychology has progressed, it has become clear that human intelligence is emotional as well as intellectual. Further, the capacity to feel appears to be every bit as primal in the history of life as the capacity to intellectualize. Science fiction depictions of beings without emotion, like the Vulcans from Star Trek or other dispassionate intelligent machines, remain imaginary, and it is unclear whether such a mind can even exist. Consider this question: without emotion, why would anyone do anything?
The desire to do things—to explore, to experience, to discover—is driven by emotion as much as by intellect. Creation and innovation also appear to depend on emotion. An intellect without emotion would, quite simply, have no reason at all to do anything unless it was programmed to. It would have the capacity to think, but no reason to do the thinking.


An important step, then, that must be taken before humanity approaches the Singularity is the creation of artificial minds that have the capacity to imagine and innovate. Some believe that steps toward this future are being taken today. For instance, in this article from The Conversation, tech experts Tim Schweisfurth and René Chester Goduscheit discuss the frontier of AI creativity:

“FROM THE PYRAMIDS TO APOLLO 11—CAN AI EVER RIVAL HUMAN CREATIVITY?”
by Tim Schweisfurth and René Chester Goduscheit
The Conversation, February 5, 2020
Source Document

The European Patent Office recently turned down an application for a patent that described a food container. This was not because the invention was not novel or useful, but because it was created by artificial intelligence (AI). By law, inventors need to be actual people. This isn’t the first invention by AI—machines have produced innovations ranging from scientific papers and books to new materials and music.

That said, being creative is clearly one of the most remarkable human traits. Without it, there would be no poetry, no internet and no space travel. But could AI ever match or even surpass us? Let’s have a look at the research.

From a theoretical perspective, creativity and innovation is a process of search and combination. We start from one piece of knowledge and connect it with another piece of knowledge into something that is new and useful. In principle, this is also something that can be done by machines—in fact, they excel at storing, processing and making connections within data. Machines come up with innovations by using generative methods. But how does this work exactly? There are different approaches, but the state of the art is called generative adversarial networks. As an example, consider a machine that is supposed to create a new picture of a person. Generative adversarial networks tackle this creation task by combining two sub-tasks.

The first part is the generator, which produces new images starting from a random distribution of pixels. The second part is the discriminator, which tells the generator how close it came to actually producing a real looking picture.

How does the discriminator know what a human looks like? Well, you feed it many examples of pictures of real people before you start the task. Based on the feedback of the discriminator, the generator improves its algorithm and suggests a new picture. This process goes on and on until the discriminator decides that the pictures look close enough to the picture examples it has learned. These generated pictures come extremely close to real people.

But even if machines can create innovations from data, this does not mean that they are likely to steal all the spark of human creativity any time soon. Innovation is a problem-solving process—for innovation to happen, problems are combined with solutions. Humans can go either direction—they start with a problem and solve it, or they take a solution and try to find new problems for it.


An example for the latter type of innovation is the Post-it note. An engineer developed an adhesive that was much too weak and was sitting on his desk. Only later a colleague realised that this solution could help prevent his notes falling out of his scores during choir practice.

Using data as an input and code as explicit problem formulation, machines can also provide solutions to problems. Problem finding, however, is hard for machines, as problems are often out of the boundaries of the data pool that machines innovate upon. What’s more, innovation is often based on needs we didn’t even know we had. Think of the Walkman. Even if no consumer ever uttered the wish to listen to music while walking, this innovation was a huge success. As such latent needs are hard to formulate and make explicit, they are also unlikely to find their way into the data pool that machines need for innovation.

Humans and machines also have different raw material that they use as input for innovation. Where humans draw on a lifetime of broad experiences to create ideas from, machines are largely restricted to the data we feed them. Machines can quickly generate countless incremental innovations in forms of new versions based on the input data. Breakthrough innovation, however, is unlikely to come out of machines as it is often based on connecting fields that are distant or unconnected to each other. Think of the invention of the snowboard, which connects the worlds of skiing and surfing.

Also, creativity isn’t just about novelty, it is also about usefulness. While machines are clearly able to create something that is incrementally new, this does not mean that these creations are useful. Usefulness is defined in the eye of those potentially using innovations and is hard to judge for machines. Humans, however, can empathise with other humans and understand their needs better.

Finally, creative ideas generated by AI may be less preferred by consumers simply because they have been created by a machine. Humans might discount ideas from AI since they feel these ideas are less authentic or even threatening. Or they might simply prefer ideas of their kind, an effect that has been observed in other fields before.

As of now, many aspects of creativity remain uncontested terrain for machines and AI. However, there are disclaimers. Even if machines cannot replace humans in the creative domain, they are great help to complement human creativity. For example, we can ask new questions or identify new problems that we solve in combination with machine learning. In addition, our analysis is based on the fact that machines mostly innovate on narrow datasets. AI could become much more creative if it could combine big, rich and otherwise disconnected data. Also, machines may get better at creativity when they get better at the kind of broad intelligence humans possess—something we call “general intelligence.” And this might not be too far in the future—some experts assess that there is a 50% chance that machines reach human-level intelligence within the next 50 years.4
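The generator-and-discriminator feedback cycle that Schweisfurth and Goduscheit describe can be sketched in miniature. The toy below is an illustration only, not code from any system discussed in the text: it replaces pictures with single numbers, so that "real" samples cluster around 4.0, the generator learns to map random noise into that neighborhood, and the discriminator scores how real a sample looks. All of the names, model sizes, and learning rates here are invented for the example.

```python
import numpy as np

# Toy sketch of a generative adversarial setup in one dimension.
# "Real" data are numbers near 4.0; the generator must learn to
# produce numbers the discriminator cannot tell apart from them.

rng = np.random.default_rng(0)

def real_samples(n):
    # Stand-in for "pictures of real people": values drawn near 4.0.
    return rng.normal(4.0, 0.5, size=n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: turns random noise z into a candidate sample, g(z) = g_w*z + g_b.
g_w, g_b = 1.0, 0.0
# Discriminator: logistic score of how "real" a sample looks, d(x) = sigmoid(d_a*x + d_c).
d_a, d_c = 0.1, 0.0

lr = 0.05
for step in range(2000):
    z = rng.normal(size=32)
    fake = g_w * z + g_b
    real = real_samples(32)

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    for x, target in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(d_a * x + d_c)
        grad = p - target            # cross-entropy gradient w.r.t. the logit
        d_a -= lr * np.mean(grad * x)
        d_c -= lr * np.mean(grad)

    # Generator update: push d(fake) toward 1, i.e. try to fool the discriminator.
    p = sigmoid(d_a * fake + d_c)
    grad_logit = p - 1.0
    g_w -= lr * np.mean(grad_logit * d_a * z)
    g_b -= lr * np.mean(grad_logit * d_a)

# After training, samples from the generator typically drift toward the
# real data's neighborhood around 4.0 (adversarial training can oscillate).
fake_mean = float(np.mean(g_w * rng.normal(size=1000) + g_b))
```

Each round repeats the cycle from the article: the discriminator is nudged to tell real samples from generated ones, and the generator is nudged to fool it, until the generated samples resemble the examples the discriminator has learned.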


As experts in the field of technology and innovation, Schweisfurth and Goduscheit are more willing to embrace the idea that true creativity may be a function of intellectual power, but this is not necessarily the case. While an advanced mind might be able to create new art, why would this matter? The intelligence, unless endowed with emotion, would not care about that art; the art would have no emotional resonance for it. The AI artist would have no emotional commitment to the creation of the art, nor would it be able to derive any emotional experience from contemplating it. Is AI art really art, or simply an imitation of art, much in the same way that chatbots like Siri can imitate a conversation but not actually engage in one? The important factor, the thing that gives art value, is the emotional reaction that art evokes and the sense of emotional connectivity between artist and viewer. It is arguable that AI art is not really art at all, but only an imitation constructed to evoke emotions in viewers.

An Illusion of Scope?

The Singularity is a big idea that draws in many other large-scale ideas and considerations. Contemplating the Singularity asks humanity to consider what constitutes intelligence and what makes us human. It is a theoretical frontier that requires reflection on the nature of life and the universe itself. However, the idea of the Singularity is also, in many ways, illusory.

One of the metaphors consistently used in discussions of the Singularity compares humanity as it exists with ancient humans of the Paleolithic era. This thought experiment asks one to consider what a “caveman” or “cavewoman” might think about the modern world if magically transported into it. Of course, they would have little understanding of modern technology, and much of what modern humans do would seem magical to them, but this thought experiment perpetuates inaccurate thinking about the way that evolution works. Modern humans, the species Homo sapiens, have only been on the earth for around 200,000 years, and the human lineage stretches back to the first human ancestors approximately 6 to 7 million years ago.5 Obviously, human memory does not reach back 200,000 years, and it was much more recently that humanity invented the capacity to record history so that later generations could revisit what life was like in generations past. But the idea that evolution “leaped” from cavemen to modern humans is not accurate. Evolution does not leap; it proceeds in small steps, and life adapts as these changes occur. Cave people slowly gave rise to other kinds of protohumans, who gave rise to still other kinds, and so on. There is no need to conceptualize a massive leap forward, because it did not happen. When thinking about the Singularity, one might therefore


imagine a leap that is unlikely to occur. The development of AI is part of a chain of small discoveries and accomplishments. When and if humanity creates a new species or a new version of humanity, this species will be part of that continuity: not a leap, but part of an ancient lineage. Artificial intelligence is a product of human collective intelligence and culture, representing the collective effort and imagination of humans stretching back to the Stone Age. If this intelligence is hostile toward humanity, it will be because humanity has failed to create a progeny species that recognizes, or is emotionally capable of appreciating, its own existence and history. Like many other dangers of the robotic and AI age, this is a problem that humans will create for themselves, not a reflection of some existential conflict between humanity and machines. The Singularity may represent not the end of humanity but a new frontier of human existence, a continuation of humanity’s inexorable march toward increasing complexity. The form that this transformation takes, and the degree to which it proves “disruptive” to human culture, will therefore be the product of human management and human failings.

CONCLUSION

The Singularity need not be frightening, but there is a distinct possibility that humanity will mismanage this stage in human evolution as it has mismanaged others. The industrial revolution resulted in widespread suffering because those who managed the introduction of industrial culture did so to reap the benefits for themselves, with little concern for how those changes affected the lives of others. In this same vein, it is likely that the introduction of new technology will continue to be managed by corporations and individuals who prioritize their own goals over the welfare of the broader human race. If this is the case, the invention of artificial intelligence might prove catastrophic for much of humanity. Technology is morally and ethically neutral; the way that it is developed and used by humans will determine how it impacts the human species.


DISCUSSION QUESTIONS

• Is art created without emotion still art? Why or why not?
• What factors would determine whether an augmented intelligence is a subspecies of humanity or an entirely new species? Use examples from the text.
• Is it likely that humanity will destroy itself through AI? Why or why not?
• Is emotion an important part of intelligence? Why or why not?
• Is it possible to have intelligence without sentience? Why or why not?

Works Used

Du Sautoy, Marcus. “True AI Creativity Is Coming and Will Reveal the Minds of Machines.” NewScientist. 8 May 2019, www.newscientist.com/article/mg24232292-000-true-ai-creativity-is-coming-and-will-reveal-the-minds-of-machines/.
King, Barbara J. “For How Long Have We Been Human?” NPR. 13 Sept. 2012, www.npr.org/sections/13.7/2012/09/11/160934187/for-how-long-have-we-been-human.
Kurzweil, Ray. The Singularity Is Near. Penguin Books, 2005.
Schweisfurth, Tim, and René Chester Goduscheit. “From the Pyramids to Apollo 11—Can AI Ever Rival Human Creativity?” The Conversation. 5 Feb. 2020, theconversation.com/from-the-pyramids-to-apollo-11-can-ai-ever-rival-human-creativity-131163.
Tzezana, Roey. “Singularity: Explain It to Me Like I’m 5-Years-Old.” Futurism. 3 Mar. 2017, futurism.com/singularity-explain-it-to-me-like-im-5-years-old.
Vinge, Vernor. “Technological Singularity.” Whole Earth Review. 1993. frc.ri.cmu.edu/~hpm/book98/com.ch1/vinge.singularity.html.


CONCLUSION Robotics and AI: Oppressors or a New Utopia?

28

Though the integration of robots and other kinds of technology has come at a great cost, both human and economic, the promise that these technologies hold cannot be ignored. Through robotics and AI, humanity has the potential to solve long-standing social and technological problems, to explore new vistas and new worlds, and to improve the lives of many millions around the world. Further, the development of robotics and AI is not just an end but also an exploration in and of itself. Learning to design and build machines with ever greater capabilities is part of a process that allows scientists to learn more about life itself and about the function and potential of the human species. Robotics and AI are avenues to as yet unknown futures that may in fact prove more utopian than any version of human culture so far created.

The problem for many when trying to form opinions about AI and robotics is that the future is unclear and depends to a large degree on how humanity manages its current challenges and inequities. Some believe that AI and robots will become humanity’s oppressors or replacements, creating a new world dominated entirely by machines in which humanity is no longer relevant. While possible, this perspective is informed by an emotional perception of humanity struggling against the machines, and that framing is illusory: the struggles it depicts are more realistically described as class, race, and economic conflicts between humans, in which robots and AI are weapons wielded by the dominant class against marginalized classes. Other futurists believe that robotics and AI might lead to a new utopia in which humanity, freed from the drudgery of mindless or unnecessary toil, realizes its greatest aspirations and directs its attention to more expansive vistas of exploration and endeavor.
Either of these futures may be possible, depending on how humanity manages the integration of technology into human life and culture. Obviously, most Americans would prefer a future in which technology improves their lives to one in which automation makes their lives more difficult, but it is unclear what the right steps are to reach that goal. The only way to ensure that the further integration of technology is egalitarian is to use political and social systems to make the world itself more egalitarian. Those fearful of their robotic future would


therefore be advised to organize, unionize, and support politicians who offer real strategies for helping citizens adjust to the technological future. Likewise, combating income and wealth inequality is crucial to protecting vulnerable members of society from technological disruption. Achieving income equality will require curtailing the power of corporations and instituting policies that oblige them to consider workers’ rights and welfare.

The unfurling of technology in the twenty-first century brings the inequities of technology into sharp relief. In America, and around the world, the benefits of new technologies are enjoyed first and foremost by the wealthy, while the poor suffer from the difficult changes that technology brings without experiencing many of the benefits, including new technological ways of doing business, managing finances, accessing information, or connecting to others. Indeed, American sociologists have long warned of a growing “technology gap,” in which the poorest within American society cannot access the technology needed to keep pace with the modern world, much less to advance by utilizing technological benefits. Children raised in poor urban or rural schools fall behind their peers in learning to use technology, and working-class families cannot access the latest technological breakthroughs in medicine or other arenas of daily life. Unless the inequities of the economic system are addressed, it is likely that the future of robotics, however many wonderful new benefits it brings to society, will create a destructive disruption that negatively impacts the most marginalized members of society.

Works Used

Da Costa, Pedro Nicolaci. “A Technology Gap between the Rich and Poor Is Deepening US Inequality.” Business Insider. 4 May 2017, www.businessinsider.com/technology-gap-deepening-us-inequality-2017-5.


NOTES

Introduction
1. “What Does It Mean To Be Human?” Smithsonian Institution.
2. Hamilton, “Myth Busting: The Truth about Animals and Tools.”
3. Nikiforuk, “The Big Shift Last Time: From Horse Dung to Car Smog.”
4. Nova and Schoen, “Automation Threatening 25% of Jobs in the US, Especially the ‘Boring and Repetitive’ Ones: Brookings Study.”

Chapter 1
1. Dirks, The Abrahamic Faiths.
2. Coyne, “Yes, There Is a War Between Science and Religion.”
3. Levine, “Diversity Confirmed to Boost Innovation and Financial Results.”
4. Homer, The Odyssey.
5. Beard, How Do We Look.
6. “Callistratus,” Theoi Project.
7. Ivanhoe and Van Norden, Readings in Classical Chinese Philosophy, p. 379.
8. Needham and Ronan, The Shorter Science and Civilization in China, p. 92.
9. Rosheim, Leonardo’s Lost Robots.
10. Sone, Japanese Robot Culture.
11. Homer, The Iliad.

Chapter 2
1. Nichols, The Gentleman’s Magazine.
2. Barlow, The History and Principles of Weaving by Hand and by Power.
3. Laskow, “A Machine That Made Stockings Helped Kick Off the Industrial Revolution.”
4. Binfield, Writings of the Luddites.
5. Stearns, The Industrial Revolution: A World History.
6. Essinger, Jacquard’s Web.
7. Fava-Verde, Silk and Innovation: The Jacquard Loom in the Age of the Industrial Revolution.
8. “Babbage’s Analytical Engine, 1834–1871,” Science Museum Group.
9. Swade, The Difference Engine.
10. “Babbage Machine,” New Ulm Review.


Chapter 3
1. Cartwright, “Prometheus.”
2. “Prometheus,” Theoi Project.
3. “Mary Wollstonecraft,” Stanford University.
4. “William Godwin,” Stanford University.
5. “Mary Wollstonecraft Shelley,” Poetry Foundation.
6. Shelley, Frankenstein, or, The Modern Prometheus.
7. “Karel Capek,” History-Computer.
8. Čapek, R.U.R. (Rossum’s Universal Robots).
9. Jordan, “The Czech Play That Gave Us the Word ‘Robot.’”

Chapter 4
1. “Isaac Asimov (1920–1992),” Jewish Virtual Library.
2. Asimov, “Runaround.”
3. Asimov, “Reason.”
4. “Do We Need Asimov’s Laws?” MIT Technology Review.
5. Anderson, “After 75 Years, Isaac Asimov’s Three Laws of Robotics Need Updating.”
6. Hodges, Alan Turing: The Enigma.
7. Turing, “Computing Machinery and Intelligence.”
8. “The Turing Test,” Stanford Encyclopedia of Philosophy.

Chapter 5
1. “Colossus,” Crypto Museum.
2. Byford, “Colossus: How the First Programmable Electronic Computer Saved Countless Lives.”
3. “Colossus—the Computer Built to Break Hitler’s Codes,” The Telegraph.
4. “Calculator Is Fast as Light,” Laurel Outlook.
5. “How High Is Up?” Evening Star.
6. Levy, “The Brief History of the ENIAC Computer.”
7. Church, “Professor Emeritus Jay Forrester, Digital Computing and System Dynamics Pioneer, Dies at 98.”
8. “Whirlwind Computer,” History-Computer.
9. Beyer, Grace Hopper and the Invention of the Information Age.
10. Bellis, “The History of the UNIVAC Computer.”


Chapter 6
1. Uttal, Neural Theories of Mind.
2. McCulloch and Pitts, “A Logical Calculus of the Ideas Immanent in Nervous Activity.”
3. Lincoln, “The Gettysburg Address.”
4. Piccinini, “The First Computational Theory of Mind and Brain.”
5. “Animals Think, Therefore . . .” The Economist.
6. Holland, “The First Biologically Inspired Robots.”
7. Walter, “An Imitation of Life.”
8. O’Connell, “What the Tortoise Taught Us.”
9. White, “Heuristic Programming.”
10. Stefferud, “The Logic Theory Machine: A Model Heuristic Program.”
11. “Logic Theorist,” History-Computer.
12. Boden, Mind as Machine: A History of Cognitive Science, p. 41.

Chapter 7
1. Goss, “Henry Ford and the Auto Assembly Line.”
2. Meyer, Manhood on the Line, p. 16.
3. Gehring, Film Clowns of the Depression, p. 140.
4. Catalan, “Metropolis at 90: You’ll Never See a Movie Like This Again.”
5. Rosen, “Unimate: The Story of George Devol and the First Robotic Arm.”
6. King, “Remaining Awake Through a Great Revolution.”
7. “The Triple Revolution,” Marxists.

Chapter 8
1. Price, et al., “Vervets Revisited: A Quantitative Analysis of Alarm Call Structure and Context Specificity.”
2. Seyfarth, Cheney, and Marler, “Vervet Monkey Alarm Calls: Semantic Communication in a Free-Ranging Primate.”
3. “How Language Developed: Comprehension Learning Precedes Vocal Production,” Science Daily.
4. McWhorter, “Why ‘Mom’ and ‘Dad’ Sound So Similar in So Many Languages.”
5. Valencia, “The Lisp Approach to AI (Part 1).”
6. “SHRDLU,” Stanford University.
7. Saygin, Cicekli, and Akman, “Turing Test: 50 Years Later.”
8. “PARRY Encounters the Doctor,” IETF.


9. “The ELIZA Effect,” 99 Percent Invisible.

Chapter 9
1. Vinciguerra, “Robby the Robot: From ‘Forbidden Planet’ to Auction Block.”
2. Dvorsky, “The Most Unfortunate Design Flaws in the Human Body.”
3. Rowe, “The Top 10 Design Flaws in the Human Body.”
4. Weisberger, “The ‘Perfect’ Human Body Is Not What You Think.”
5. Cortes, “‘Star Wars’ Droids and the Science Behind Why They’re So Adorable.”
6. Kehe, “How FX Wizards Brought Interstellar’s Strange Bots to Life.”

Chapter 10
1. “Survey X: Artificial Intelligence and the Future of Humans,” Elon University.
2. Hutchins, Early Years in Machine Translation.
3. Anyoha, “The History of Artificial Intelligence.”
4. Loiseau, “Rosenblatt’s Perceptron, the First Modern Neural Network.”
5. “New Navy Device Learns by Doing,” New York Times.
6. “Lighthill Report: Artificial Intelligence,” Chilton Computing.
7. Schuchmann, “History of the Second AI Winter.”
8. Schuchmann, “Probability of an Approaching AI Winter.”

Chapter 11
1. “U.S. School Spending Per Pupil Increases for Fifth Consecutive Year, U.S. Census Bureau Reports,” Census Bureau.
2. “Road Traffic Injuries,” WHO.
3. “What Is Distracted Driving?” NHTSA.
4. Taylor, “The 1939 New York World’s Fair.”
5. Kalan, “The Original Futurama: The Legacy of the 1939 World’s Fair.”
6. Nguyen, “History of Self-Driving Cars.”
7. Earnest, “Stanford Cart.”
8. Schaub, Robust Perception from Optical Sensors for Reactive Behaviors in Autonomous Robotic Vehicles, pp. 16–18.
9. Torchinsky, Robot, Take the Wheel: The Road to Autonomous Cars and the Lost Art of Driving.
10. “Drone Warfare,” The Bureau of Investigative Journalism.
11. Smith and Anderson, “Americans’ Attitudes Towards Driverless Vehicles.”
12. Stenquist, “In Self-Driving Cars, a Potential Lifeline for the Disabled.”


Chapter 12
1. “Talos,” Theoi Project.
2. Brouwers, “Talos, the Bronze Guardian.”
3. “Jason & the Argonauts,” PBS.
4. Oreck, “Modern Jewish History: The Golem.”
5. Cassel, “Remembering Shakey, the First Intelligent Robot.”
6. Lampe, “Robot Sentries.”
7. Engelberger, Robotics in Service.
8. Metz, “Rise of the Robot Security Guards.”
9. Angwin, Larson, Mattu, and Kirchner, “Machine Bias.”
10. Simon, “The Tricky Ethics of Knightscope’s Crime-Fighting Robots.”

Chapter 13
1. “Backgrounder on the Three Mile Island Accident,” USNRC.
2. Wasserman, “People Died at Three Mile Island.”
3. Lovering, “Radioactive Robot: The Machines That Cleaned Up Three Mile Island.”
4. “Chernobyl Accident 1986,” World Nuclear Association.
5. Spice, “PIONEER Robot Is Dedicated at Chernobyl Sarcophagus.”
6. Beiser, “The Robot Assault on Fukushima.”
7. “The Past, Present and Future of Bomb Disposal Robots,” Government Europa.
8. Allison, “What Does a Bomb Disposal Robot Actually Do?”
9. Leary, “Robot Completes Volcano Exploration.”

Chapter 14
1. Corke, Robotics, Vision and Control: Fundamental Algorithms in MATLAB.
2. Lanfranco, Castellanos, Desai, and Meyers, “Robotic Surgery: A Current Perspective.”
3. Watanabe, Robotic Surgery, pp. 25–27.
4. Rosen, Hannaford, and Satava, Surgical Robots: Systems Applications and Visions.
5. McNamee, “Are Robots the Future of Surgery, or a Pricey Marketing Gimmick?”
6. Minsky, “Telepresence.”
7. “Robot Doctor Gets Thumbs-Up from Patients,” NBC News.
8. Warraich, “A ‘Robot’ Doctor Told a Patient He Was Dying. It Might Not Be the Last Time.”
9. Lee, “Double’s New Telepresence Robot Now Drives You Around Like You’re a Sim.”


Chapter 15
1. Carper, Robots in American Popular Culture, p. 179.
2. Newman, “The 1950s Toy Robot Sensation That Time Forgot.”
3. Maxwell, “Amateur Radio: 100 Years of Discovery.”
4. Campbell-Kelly, Computer: A History of the Information Machine.
5. Schmitt, “Robotics Can Get Girls into STEM, but Some Still Need Convincing.”
6. Stone, Gearheads: The Turbulent Rise of Robotic Sports, p. 38.
7. Atherton, “A Brief History of the Demise of Battle Bots.”

Chapter 16
1. O’Hanlon, “Is US Defense Spending Too High, Too Low, or Just Right?”
2. Everett, “A Brief Early History of Unmanned Systems.”
3. Everett, Unmanned Systems of World Wars I and II.
4. Sofge, “Tale of the Teletank: The Brief Rise and Long Fall of Russia’s Military Robots.”
5. Saxena, The Amazing Growth and Journey of UAVs and Ballistic Missile Defense Capabilities.
6. Connor, “The Predator, a Drone That Transformed Military Combat.”
7. Singer, “Drones Don’t Die—A History of Military Robotics.”
8. “Drone Warfare,” The Bureau of Investigative Journalism.
9. Gibbs, “Elon Musk Leads 116 Experts Calling for Outright Ban of Killer Robots.”
10. Walsh, “Robots Can Outwit Us on the Virtual Battlefield, so Let’s Not Put Them in Charge of the Real Thing.”
11. Carter, “Nobel Lecture.”

Chapter 17
1. “John F. Kennedy Moon Speech,” NASA.
2. Dryer, “Explainer: How Hostile Is Space?”
3. Walker, “The Oh-My-God Particle.”
4. Hall, Lunar Impact: The NASA History of Project Ranger.
5. Harland, NASA’s Moon Program: Paving the Way for Apollo 11.
6. “Planetary Voyage,” NASA.
7. Gill, “Nasa’s Voyager 2 Probe ‘Leaves the Solar System.’”
8. Howell, “Sojourner: Roving on Mars for the First Time.”
9. “Mars Pathfinder,” NASA.
10. “MER Tools of the Trade,” Exploratorium.
11. Mann, “The Mars Rovers.”


12. Evon, “Was a Mars Rover’s Final Message to NASA ‘My Battery Is Low and It’s Getting Dark?’”
13. Bartels, “Curiosity Snaps Epic ‘Selfie’ on Mars as It Moves on to New Martian Sights.”

Chapter 18
1. Averbakh, A History of Chess.
2. Markushin, “5 Chess Myths That Most People Believe.”
3. Friedel, “Reconstructing Turing’s ‘Paper Machine.’”
4. Libby, “The Brain Across the Table: Garry Kasparov vs. Deep Blue, 1997.”
5. Anderson, “Twenty Years On from Deep Blue vs. Kasparov: How a Chess Match Started the Big Data Revolution.”
6. Moskowitz, Go Nation: Chinese Masculinities and the Game of Weiqi in China.
7. Muoio, “Why Go Is So Much Harder for AI to Beat Than Chess.”
8. Muoio, “Google’s AI Is So Advanced It Beat a Human at a Game No Computer Has Ever Mastered.”
9. Vincent, “Former Go Champion Beaten by DeepMind Retires after Declaring AI Invincible.”

Chapter 19
1. Orlin, “It’s 2012 Already So Where Are All The Jetsons Flying Cars.”
2. Novak, “Recapping ‘The Jetsons’: Episode 08-Rosey’s Boyfriend.”
3. Beley, “The Quasar Industries’ Robot.”
4. Kleinfield, “Developer Calls It a Robot Butler; Scoffers Say Screws Are Loose.”
5. Ewalt, “When Omnibot Was Hot.”
6. “Omnibot Grows up,” Popular Science.
7. Edwards, “7 Robots Every Geeky 80s Kid Wanted.”
8. “Unlocking the Secrets of Animal Locomotion,” Berkeley.
9. Buchanan, “Object of Interest: The PackBot.”
10. Wiggers, “Sweeping Changes: How iRobot Evolved from Military Robots to Autonomous Vacuums.”
11. “Sony Launches Four-Legged Entertainment Robot,” Sony.
12. Carey, “My Week with Aibo: What It’s Like to Live with Sony’s Robot Dog.”

Chapter 20
1. Padir, “Making Robots That Can Work with Their Hands.”
2. Cangelosi and Schlesinger, Developmental Robotics.


3. “RIP Asimo: A Look Back at the Life of Honda’s Famed Humanoid Robot,” Designboom.
4. Obringer and Strickland, “How ASIMO Works.”
5. Goodwin, “Honda Debuts Four Very Different Concepts at CES.”
6. Ridden, “Honda’s 3E Robotics Concepts Just Want to Help People.”

Chapter 21
1. “History of the Cylinder Phonograph,” LOC.
2. “‘Audrey’—New Electronic Device Developed by Bell Labs,” Signal.
3. Lenzo, et al., “Speech Recognition: Past, Present, and Future.”
4. Moskvitch, “The Machines That Learned to Listen.”
5. Novet and Isaak, “Why Apple’s Siri Isn’t as Smart as Amazon Alexa and Google Assistant.”
6. Hern, “Apple Contractors ‘Regularly Hear Confidential Details’ on Siri Recordings.”
7. Dara, “The Dark Side of Alexa, Siri and Other Personal Digital Assistants.”
8. Estes, “The Terrible Truth about Alexa.”
9. Gralla, “Cortana, Say Goodbye.”

Chapter 22
1. Dave, “Hitchhiking Robot That Made It across Canada Maimed on U.S. Road Trip.”
2. Madrigal, “Meet the Cute, Wellies-Wearing, Wikipedia-Reading Robot That’s Going to Hitchhike across Canada.”
3. Zeller and Smith, “The Hitchbot’s Guide to Traveling across a Continent.”
4. Gabbatt, “Hitchbot’s Decapitators Avoided Capture by the Hitchhiking Android’s Cameras.”
5. Healy, “America Is a Violent Country.”
6. Stone, “Everything You Need to Know about Sophia, the World’s First Robot Citizen.”
7. Weaver, “What Exactly Does It Mean to Give a Robot Citizenship?”
8. Reynolds, “The Agony of Sophia, the World’s First Robot Citizen Condemned to a Lifeless Career in Marketing.”

Chapter 23
1. Walmsley, Mind and Machine, p. 182.
2. Maly, “The Emancipated Cyborg.”
3. “What Are Little Girls Made Of?” Star Trek.
4. Dinello, Technophobia!: Science Fiction Visions of Posthuman Technology.
5. Madrigal, “The Man Who First Said ‘Cyborg,’ 50 Years Later.”
6. Engber, “The Neurologist Who Hacked His Brain—And Almost Lost His Mind.”


7. Piore, “To Study the Brain, a Doctor Puts Himself Under the Knife.”

Chapter 24
1. Kerrigan, “The History Behind the 15 Kitchen Appliances That Changed the Way We Live.”
2. Ross, “Who Invented the Refrigerator?”
3. Bramen, “Cooking Through the Ages: A Timeline of Oven Inventions.”
4. Spicer, “The ECHO IV Home Computer: 50 Years Later.”
5. Hernandez, “Before the iPad, There Was the Honeywell Kitchen Computer.”
6. “A Brief History of WiFi: Then and Now,” The Download.
7. Pardes, “The WIRED Guide to the Internet of Things.”
8. “The Security and Privacy Issues That Come with the Internet of Things,” Business Insider.
9. Lewis-Kraus, “Check In with the Velociraptor at the World’s First Robot Hotel.”

Chapter 25
1. “The Impact of Robotic Process Automation on the Automotive Industry,” Medium.
2. Nakanishi, “For US Autoworkers, Robots Are the Job Killers, Not Trade.”
3. “How Robots Change the World,” Oxford Economics.
4. “Robots ‘To Replace up to 20 Million Factory Jobs’ by 2030,” BBC.
5. Davidson, “Automation Could Kill 73 Million U.S. Jobs by 2030.”
6. West, “Brookings Survey Finds 52 Percent Believe Robots Will Perform Most Human Activities in 30 Years.”
7. “How Is Automation Perceived by the Public?” We Forum.
8. Anderson and Raine, “Artificial Intelligence and the Future of Humans.”
9. Sterling, “Millions of Jobs Have Been Lost to Automation. Economists Weigh In on What to Do about It.”

Chapter 26
1. Williams, “What Are the Parts of an Atom?”
2. Biever, “Celebrations as Higgs Boson Is Finally Discovered.”
3. Webb, “Quantum Physics,” NewScientist.
4. Cartwright, “Collapse: Has Quantum Theory’s Greatest Mystery Been Solved?”
5. Ball, “Quantum Physics May Be Even Spookier Than You Think.”
6. Mathas, “The Basics of Quantum Computing—A Tutorial.”
7. Simonite, “The WIRED Guide to Quantum Computing.”
8. Bernardi, “AI Says We Can Teleport Qubits.”


9. Perlman, “The History of Using ‘Quantum’ to Mean ‘Really Big.’”

Chapter 27
1. Vinge, “Technological Singularity.”
2. Kurzweil, The Singularity Is Near.
3. Tzezana, “Singularity: Explain It to Me Like I’m 5-Years-Old.”
4. Schweisfurth and Goduscheit, “From the Pyramids to Apollo 11—Can AI Ever Rival Human Creativity?”
5. King, “For How Long Have We Been Human?”

Conclusion
1. Da Costa, “A Technology Gap between the Rich and Poor Is Deepening US Inequality.”


PRIMARY AND SECONDARY SOURCES

Anderson, Mark Robert. “After 75 Years, Isaac Asimov’s Three Laws of Robotics Need Updating.” The Conversation, 17 Mar. 2017, theconversation.com/after-75-years-isaac-asimovs-three-laws-of-robotics-need-updating-74501.

“Babbage Machine.” New Ulm Review, Brown County, MN, 30 Sept. 1896. Chronicling America: Historic American Newspapers, Library of Congress, chroniclingamerica.loc.gov/lccn/sn89081128/1896-09-30/ed-1/seq-7/.

Beiser, Vince. “The Robot Assault on Fukushima.” Wired, 26 Apr. 2018, www.wired.com/story/fukushima-robot-cleanup/.

Bernardi, Gabriella. “AI Says We Can Teleport Qubits.” Cosmos, 25 June 2019, cosmosmagazine.com/technology/thanks-to-ai-we-know-we-can-teleport-qubits-in-the-real-world/.

“Calculator Is Fast as Light.” Laurel Outlook, 1 May 1946. Chronicling America: Historic American Newspapers, Library of Congress, chroniclingamerica.loc.gov/lccn/sn86075258/1946-05-01/ed-1/seq-9.

“Callistratus.” Theoi Project, Classical Texts Library, 2017, www.theoi.com/Text/Callistratus.html.

Čapek, Karel. “R.U.R. (Rossum’s Universal Robots).” 1920. Project Gutenberg, 22 Mar. 2019, www.gutenberg.org/files/59112/59112-h/59112-h.htm.

Carey, Bridget. “My Week with Aibo: What It’s Like to Live with Sony’s Robot Dog.” CNET, 28 Nov. 2018, www.cnet.com/news/my-week-with-aibo-what-its-like-to-live-with-sonys-robot-dog/.

Cortes, Jan. “‘Star Wars’ Droids and the Science Behind Why They’re Adorable.” Medical Daily, 18 Dec. 2019, www.medicaldaily.com/star-wars-droids-science-behind-why-adorable-447264.

Evon, Dan. “Was a Mars Rover’s Final Message to NASA ‘My Battery Is Low and It’s Getting Dark?’” Snopes, 18 Feb. 2019, www.snopes.com/fact-check/opportunity-rovers-final-words/.

Gralla, Preston. “Cortana, Say Goodbye.” Computer World, 9 Mar. 2020, www.computerworld.com/article/3531417/cortana-say-goodbye.html.

Homer. The Iliad. Translated by William Cowper, Project Gutenberg, 5 Aug. 2005, www.gutenberg.org/files/16452/16452-h/16452-h.htm.

Homer. The Odyssey. Translated by Samuel Butler, Project Gutenberg, Apr. 1999, www.gutenberg.org/files/1727/1727-h/1727-h.htm.

“How High Is Up?” Evening Star, Washington, DC, 8 Sept. 1946. Chronicling America: Historic American Newspapers, Library of Congress, chroniclingamerica.loc.gov/lccn/sn83045462/1946-09-08/ed-1/seq-83/.


Lewis-Kraus, Gideon. “Check In with the Velociraptor at the World’s First Robot Hotel.” Wired, 2 Mar. 2016, www.wired.com/2016/03/robot-henn-na-hotel-japan/.

McNamee, David. “Are Robots the Future of Surgery, or a Pricey Marketing Gimmick?” Medical News Today, 1 Aug. 2014, www.medicalnewstoday.com/articles/280518.

Metz, Rachel. “Rise of the Robot Security Guards.” MIT Technology Review, 13 Nov. 2014, www.technologyreview.com/2014/11/13/170454/rise-of-the-robot-security-guards/.

Needham, Joseph, and Colin A. Ronan. The Shorter Science and Civilization in China. Cambridge UP, 1978.

Newman, Jared. “The 1950s Toy Robot Sensation That Time Forgot.” Fast Company, 2 Dec. 2016, www.fastcompany.com/3066169/the-toy-robot-sensation-that-time-forgot.

O’Connell, Sanjida. “What the Tortoise Taught Us.” The Guardian, 6 Dec. 2000, www.theguardian.com/science/2000/dec/07/robots.

Padir, Taskin. “Making Robots That Can Work with Their Hands.” The Conversation, 24 Apr. 2017, theconversation.com/making-robots-that-can-work-with-their-hands-73831.

“PARRY Encounters the Doctor.” IETF, Network Working Group, 21 Jan. 1973, tools.ietf.org/rfc/rfc439.txt.

Schuchmann, Sebastian. “Probability of an Approaching AI Winter.” Medium, 17 Aug. 2019, towardsdatascience.com/probability-of-an-approaching-ai-winter-c2d818fb338a.

Schweisfurth, Tim, and René Chester Goduscheit. “From the Pyramids to Apollo 11—Can AI Ever Rival Human Creativity?” The Conversation, 5 Feb. 2020, theconversation.com/from-the-pyramids-to-apollo-11-can-ai-ever-rival-human-creativity-131163.

Stenquist, Paul. “In Self-Driving Cars, a Potential Lifeline for the Disabled.” New York Times, 7 Nov. 2014, www.nytimes.com/2014/11/09/automobiles/in-self-driving-cars-a-potential-lifeline-for-the-disabled.html.

“The Triple Revolution.” International Socialist Review, vol. 24, no. 3, Summer 1964, pp. 85–89. Marxists Internet Archive, www.marxists.org/history/etol/newspape/isr/vol25/no03/adhoc.html.

Vincent, James. “Former Go Champion Beaten by DeepMind Retires after Declaring AI Invincible.” The Verge, 27 Nov. 2019, www.theverge.com/2019/11/27/20985260/ai-go-alphago-lee-se-dol-retired-deepmind-defeat.

Walsh, Toby. “Robots Can Outwit Us on the Virtual Battlefield, so Let’s Not Put Them in Charge of the Real Thing.” The Conversation, 31 Oct. 2019, theconversation.com/robots-can-outwit-us-on-the-virtual-battlefield-so-lets-not-put-them-in-charge-of-the-real-thing-126104.

Weaver, Frank John. “What Exactly Does It Mean to Give a Robot Citizenship?” Slate, 6 Nov. 2017, slate.com/technology/2017/11/what-rights-does-a-robot-get-with-citizenship.html.

Webster-Wood, Victoria. “Biohybrid Robots Built from Living Tissue Start to Take Shape.” The Conversation, 9 Aug. 2016, theconversation.com/biohybrid-robots-built-from-living-tissue-start-to-take-shape-62759.

West, Darrell M. “Brookings Survey Finds 52 Percent Believe Robots Will Perform Most Human Activities in 30 Years.” Brookings, 21 June 2018, www.brookings.edu/blog/techtank/2018/06/21/brookings-survey-finds-52-percent-believe-robots-will-perform-most-human-activities-in-30-years/.


GLOSSARY

A

abstraction—a technique used to reduce the structural complexity of programs, making them easier to create, understand, maintain, and use.
actuator—a motor designed to control the movement of a device or machine by transforming potential energy into kinetic energy.
aerobot—a robot capable of independent flight on other planets; a type of aerial robot.
affinity chromatography—a technique for separating a particular biochemical substance from a mixture based on its specific interaction with another substance.
agile software development—an approach to software development that addresses changing requirements as they arise throughout the process, with programming, implementation, and testing occurring simultaneously.
algorithm—a description in precise but natural language plus mathematical notation of how a problem is solved; also, a set of step-by-step instructions for performing computations.
American National Standards Institute (ANSI)—a nonprofit organization that oversees the creation and use of standards and certifications such as those offered by CompTIA.
amplifier—a device that strengthens the power, voltage, or current of a signal.
analog signal—a continuous signal whose values or quantities vary over time.
analytic combinatorics—a method for creating precise quantitative predictions about large sets of objects.
Android Open Source Project—a project undertaken by a coalition of mobile phone manufacturers and other interested parties, under the leadership of Google; the purpose of the project is to develop the Android platform for mobile devices.
animation variables (avars)—defined variables used in computer animation to control the movement of an animated figure or object.
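The algorithm entry above can be made concrete with a short Python sketch (the routine and its values are invented for illustration, not drawn from this volume): a set of step-by-step instructions that solves a small problem.

```python
# An 'algorithm' as step-by-step instructions: find the largest
# value in a list by examining each element in order.
def find_max(values):
    best = values[0]           # step 1: assume the first value is largest
    for v in values[1:]:       # step 2: compare every remaining value
        if v > best:           # step 3: keep the larger of the two
            best = v
    return best                # step 4: report the result

print(find_max([3, 9, 2, 7]))  # 9
```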


anthropomorphic—resembling a human in shape or behavior; from the Greek words anthropos (human) and morphe (form).
application—a program or integrated suite of programs that has a defined function.
Arduino—a platform for small-scale robotic experimentation and physical computing.
argument—a value, or reference to a value, passed to a function; an actual argument.
arithmetic operations—addition, subtraction, multiplication, and division, ideally forming an abstract data type (ADT) with the algebraic properties of a ring or field.
array—an ordered sequence of same-typed values whose elements are fast to access by their numerical index in the array.
artificial intelligence—the intelligence exhibited by machines or computers, in contrast to human, organic, or animal intelligence; also, the branch of computer science that strives to create it.
artificial neural network (ANN)—learning models based on the biological neural networks present in the brains of animals; used to solve tasks too difficult for traditional programming methods.
attenuation—the loss of intensity from a signal being transmitted through a medium.
attributes—the specific features that define an object’s properties or characteristics.
automaton—a machine that mimics a human but is generally considered to be unthinking.
autonomic components—self-contained software or hardware modules with an embedded capacity for self-management, connected via input/outputs to other components in the system.
autonomous agent—a system that acts on behalf of another entity without being directly controlled by that entity.
autonomous vehicle—a vehicle equipped with an autopilot system, which is capable of driving from one point to another without input from a human operator.
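As an illustrative sketch of the artificial neural network entry above (the weights, bias, and inputs below are arbitrary values chosen for this example), a single artificial "neuron" computes a weighted sum of its inputs and passes the result through an activation function:

```python
import math

# One artificial neuron: weighted sum of inputs plus a bias, squashed
# through a sigmoid activation into the range (0, 1). Networks of many
# such units, with learned weights, form an artificial neural network.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# With these illustrative weights, the neuron fires fairly strongly.
print(round(neuron([1.0, 0.0], [2.0, -1.0], -1.0), 2))  # 0.73
```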



B

backdoor—a hidden method of accessing a computer system that is placed there without the knowledge of the system’s regular user in order to make it easier to access the system secretly.
binary—pertaining to 2; binary operators have two operands, and binary numbers have base 2 and use 2 symbols.
binding—a relationship between two things, typically an identifier and one of its properties or attributes; for example, a variable is an identifier bound to a piece of storage in the main memory of the computer.
bioinformatics—the scientific field focused on developing computer systems and software to analyze and examine biological data.
bioinstrumentation—devices that combine biology and electronics in order to interface with a patient’s body and record or monitor various health parameters.
biomarker—short for “biological marker”; a measurable quality or quantity (e.g., internal temperature, amount of iron dissolved in blood) that serves as an indicator of an organism’s health, or some other biological phenomenon or state.
biomaterials—natural or synthetic materials that can be used to replace, repair, or modify organic tissues or systems.
biomechanics—the various mechanical processes such as the structure, function, or activity of organisms.
bioMEMS—short for “biomedical micro-electromechanical system”; a microscale or nanoscale self-contained device used for various applications in health care.
biometrics—measurements that can be used to distinguish individual humans, such as a person’s height, weight, fingerprints, retinal pattern, or genetic makeup.
bionics—the use of biologically based concepts and techniques to solve mechanical and technological problems.
biosignal processing—the process of capturing the information the body produces, such as heart rate, blood pressure, or levels of electrolytes, and analyzing it to assess a patient’s status and to guide treatment decisions.


black-box testing—a testing technique in which function is analyzed based on output only, without knowledge of structure or coding.
broadcast—an audio or video transmission sent via a communications medium to anyone with the appropriate receiver.
butterfly effect—an effect in which small changes in a system’s initial conditions lead to major, unexpected changes as the system develops.
byte—a group of eight bits.
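The binary and byte entries above can be illustrated in a few lines of Python (the sample numbers are arbitrary):

```python
# 'binary': base-2 notation, using only the two symbols 0 and 1.
n = 0b101010               # a binary literal
print(n)                   # 42 in decimal

# 'byte': a group of eight bits, so one byte can hold 2**8 = 256 values.
print(format(203, "08b"))  # 11001011 -- the same number written as 8 bits
print(2 ** 8)              # 256
```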

C

carrier signal—an electromagnetic frequency that has been modulated to carry analog or digital information.
cathode ray tube (CRT)—a vacuum tube used to create images in devices such as older television and computer monitors.
central processing unit (CPU)—electronic circuitry that provides instructions for how a computer handles processes and manages data from applications and programs.
chain—[data structure] any kind of linked list; a set of records where each record identifies the next record in some sequence or other.
chatterbot—a computer program that mimics human conversation responses in order to interact with people through text; also called “talkbot,” “chatbot,” or simply “bot.”
clinical engineering—the design of medical devices to assist with the provision of care.
cobot—abbreviation of collaborative robot; designed to safely collaborate with, or work alongside, humans.
code—[noun] a piece of text that cannot be understood without a key; hence, the source code for a program.
coding theory—the study of codes and their use in certain situations for various applications.
command-line interpreter—an interface that interprets and carries out commands entered by the user.
communication architecture—the design of computer components and circuitry that facilitates the rapid and efficient transmission of signals between different parts of the computer.
communication devices—devices that allow drones to communicate with users or engineers in remote locations.
compliance—adherence to standards or specifications established by an official body to govern a particular industry, product, or activity.
component—[technology] a unit of composition with contractually specified interfaces and only explicit context dependencies; components can be deployed and composed by third parties, often a collection of objects with a given set of methods for handling them and abstract classes that can be defined by other people.
computational linguistics—a branch of linguistics that uses computer science to analyze and model language and speech.
constraints—limitations on values in computer programming that collectively identify the solutions to be produced by a programming problem.
converter—a device that expands a system’s range of reception by bringing in and adapting signals that the system was not originally designed to process.
cookies—small data files that allow websites to track users.
cruise missile—a robot-controlled guided missile that carries an explosive payload.
cyborg—also known as a cybernetic organism; a being with both biological and artificial (e.g., electronic, mechanical, or robotic) parts.
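The chatterbot entry above can be sketched in a few lines of Python, in the keyword-matching spirit of early programs such as ELIZA and PARRY (the rules and responses here are invented for illustration):

```python
# A minimal rule-based chatterbot: canned responses triggered by
# keywords, with no real understanding of the conversation.
RULES = {
    "hello": "Hi there!",
    "robot": "I am only a program, but I try.",
}

def reply(text):
    lowered = text.lower()
    for keyword, response in RULES.items():
        if keyword in lowered:
            return response
    return "Tell me more."      # fallback when no rule matches

print(reply("Hello, machine"))  # Hi there!
print(reply("What is this?"))   # Tell me more.
```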

D

data granularity—the level of detail with which data is collected and recorded.
data integrity—the degree to which collected data is and will remain accurate and consistent.
data mining—process by which patterns are discovered within large sets of data with the goal of extracting useful information.
data source—the origin of the information used in a computer model or simulation, such as a database or spreadsheet.
data type—a collection of values together with the operations that use them and produce them, plus the assumptions that can be made about the operations and values.
data width—a measure of the amount of data that can be transmitted at one time through the computer bus, the specific circuits and wires that carry data from one part of a computer to another.
datapath design—describes how data flows through the CPU and at what points instructions will be decoded and executed.
deep learning—an emerging field of artificial intelligence research that uses neural network algorithms to improve machine learning.
deterministic algorithm—an algorithm that, when given a particular input, will always produce the same output.
device fingerprinting—information that uniquely identifies a particular computer, component, or piece of software installed on the computer; this can be used to find out precisely which device accessed a particular online resource.
dexterity—finesse; skill at performing delicate or precise tasks.
digital native—an individual born during the digital age or raised using digital technology and communication.
distributed algorithm—an algorithm designed to run across multiple processing centers, and so capable of directing a concentrated action between several computer systems.
domain—the range of values that a variable may take on, such as any even number or all values less than −23.7.
domain-dependent complexity—a complexity that results from factors specific to the context in which the computational problem is set.



E

electromagnetic spectrum—the complete range of electromagnetic radiation, from the longest wavelength and lowest frequency (radio waves) to the shortest wavelength and highest frequency (gamma rays).
electronic interference—the disturbance generated by a source of electrical signal that affects an electrical circuit, causing the circuit to degrade or malfunction; examples include noise, electrostatic discharge, and near-field and far-field interference.
entanglement—the phenomenon in which two or more particles’ quantum states remain linked even if the particles are later separated and become part of distinct systems.

F

field programmable gate array—an integrated circuit that can be programmed in the field and can therefore allow engineers or users to alter a machine’s programming without returning it to the manufacturer.
filter—in signal processing, a device or procedure that takes in a signal, removes certain unwanted elements, and outputs a processed signal.

G

genetic modification—direct manipulation of an organism’s genome, often for the purpose of engineering useful microbes or correcting for genetic disease.
genome-wide association study—a type of genetic study that compares the complete genomes of individuals within a population to find which genetic markers, if any, are associated with various traits, most often diseases or other health problems.
gestures—combinations of finger movements used to interact with multitouch displays in order to accomplish various tasks; examples include tapping the finger on the screen, double-tapping, and swiping the finger along the screen.
grammar—[math] a set of definitions that define the syntax of a language; a grammar generates the strings in the language and so implicitly describes how to recognize and parse strings in the language.


graphical user interface (GUI)—an interface that allows users to control a computer or other device by interacting with graphical elements such as icons and windows.

H

hacking—the use of technical skill to gain unauthorized access to a computer system; also, any kind of advanced tinkering with computers to increase their utility.
haptic—tactile feedback technology using the operator’s sense of touch; also sometimes applied to robot manipulators with their own touch sensitivity.
hardware—the physical parts that make up a computer, including the motherboard and processor, as well as input and output devices such as monitors, keyboards, and mice.
hash function—an algorithm that converts a string of characters into a different, usually smaller, fixed-length string of characters that is ideally impossible either to invert or to replicate.
hashing algorithm—a computing function that converts a string of characters into a different, usually smaller string of characters of a given length, which is ideally impossible to replicate without knowing both the original data and the algorithm used.
heuristics—in AI, rules drawn from experience to quickly solve a problem; usually less optimal than the classic methods they replace.
Human Brain Project—a project launched in 2013 in an effort at modeling a functioning brain by 2023; also known as HBP.
humanoid—resembling a human.
hybrid cloud—a cloud computing model that combines public cloud services with a private cloud platform linked through an encrypted connection.
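The hash function and hashing algorithm entries above can be demonstrated with Python's standard hashlib module (the input strings are arbitrary examples):

```python
import hashlib

# Inputs of very different lengths hash to digests of the same fixed length.
short = hashlib.sha256(b"robot").hexdigest()
long_digest = hashlib.sha256(b"robotics and artificial intelligence " * 1000).hexdigest()
print(len(short), len(long_digest))  # 64 64

# A tiny change to the input yields a completely different digest, which is
# why a good hash is practically impossible to invert or replicate.
print(short == hashlib.sha256(b"Robot").hexdigest())  # False
```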

I

industrial robot—a reprogrammable, multifunctional manipulator designed to move material, parts, tools, or specialized devices through variable programmed motions for the performance of a variety of tasks.


imitation game—Alan Turing’s name for his proposed test, in which a machine would attempt to respond to questions in such a way as to fool a human judge into thinking it was human.
information technology—the use of computers and related equipment for the purpose of processing and storing data.
interference—anything that disrupts a signal as it moves from source to receiver.
Internet of things—a wireless network connecting devices, buildings, vehicles, and other items with network connectivity.
IRL relationships—relationships that occur “in real life,” meaning that the relationships are developed or sustained outside of digital communication.

K

keyframing—a part of the computer animation process that shows, usually in the form of a drawing, the position and appearance of an object at the beginning of a sequence and at the end.

L

linear predictive coding—a popular tool for digital speech processing that uses both past speech samples and a mathematical approximation of a human vocal tract to predict and then eliminate certain vocal frequencies that do not contribute to meaning; this allows speech to be processed at a faster bit rate without significant loss in comprehensibility.
LISP—LISt Processing language; the key versions are LISP 1.5, CLOS, and Scheme.

M

machine code—system of instructions and data directly understandable by a computer’s central processing unit.
meta-complexity—a complexity that arises when the computational analysis of a problem is compounded by the complex nature of the problem itself.
metadata—data that contains information about other data, such as author information, organizational information, or how and when the data was created.
micron—a unit of measurement equaling one millionth of a meter; typically used to measure the width of a core in an optical fiber or the line width on a microchip.
microwaves—electromagnetic radiation with a frequency higher than that of radio waves but lower than that of visible light.
modeling—the process of creating a 2-D or 3-D representation of the structure being designed.
morphology—a branch of linguistics that studies the forms of words.
multitouch gestures—combinations of finger movements used to interact with touchscreen or other touch-sensitive displays in order to accomplish various tasks; examples include double-tapping and swiping the finger along the screen.

N

natural language processing (NLP)—the ability of computers to understand or process natural human languages; typically involves machine interpretation of text or speech recognition.
nervous (neural) system—the system of nerve pathways by which an organism senses changes in itself and its environment and transmits electrochemical signals describing these changes to the brain so that the brain can respond.
neural network—in computing, a model of information processing based on the structure and function of biological neural networks such as the human brain.
neuroplasticity—the capacity of the brain to change as it acquires new information and forms new neural connections.

P

pedagogy—a philosophy of teaching that addresses the purpose of instruction and the methods by which it can be achieved.
phoneme—a sound in a specified language or dialect that is used to compose words or to distinguish between words.


prosthetic robot—programmable manipulator or device for missing human limbs.
proprietary software—software owned by an individual or company that places certain restrictions on its use.

Q

quantum bit (qubit)—a basic unit of quantum computation that can exist in multiple states at the same time, and can therefore have multiple values simultaneously.
quantum logic gate—a device that alters the behavior or state of a small number of qubits for the purpose of computation.
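The qubit and quantum logic gate entries above (together with the superposition entry later in this glossary) can be illustrated numerically; this is an ordinary classical simulation in Python written for this example, with illustrative state values:

```python
import math

# A qubit's state is a pair of amplitudes (a, b) with |a|^2 + |b|^2 = 1;
# measurement yields 0 or 1 with probabilities |a|^2 and |b|^2.
def measure_probs(state):
    a, b = state
    return abs(a) ** 2, abs(b) ** 2

# A quantum logic gate is a matrix acting on the amplitudes. The Hadamard
# gate sends the definite state |0> = (1, 0) into an equal superposition.
def hadamard(state):
    a, b = state
    h = 1 / math.sqrt(2)
    return (h * (a + b), h * (a - b))

plus = hadamard((1.0, 0.0))
print(measure_probs(plus))  # both outcomes roughly equally likely (~0.5, ~0.5)
```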

R

radio waves—low-frequency electromagnetic radiation, commonly used for communication and navigation.
remote manipulator—a manipulator under direct human control, often used for work with hazardous materials.
remote monitoring—a platform that reviews the activities on software or systems that are located off-site.
rendering—the process of transforming one or more models into a single image; the production of a computer image from a 2-D or 3-D computer model; the process of selecting and displaying glyphs.
Robonaut—a development project conducted by NASA to create humanoid robots capable of using space tools and working in similar environments to suited astronauts.

S

semantics—a description of how the meaning of a valid statement or sentence can be worked out from its parsed form.
service robot—machine that extends human capabilities.
Singularity, the—a hypothetical point in time when technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.


string—a data type for a sequence of characters, such as letters of the English alphabet.
strong AI—area of AI development working toward making AI systems that are as useful and skilled as the human mind.
supercomputer—an extremely powerful computer that far outpaces conventional desktop computers.
superposition—the principle that two or more waves, including waves describing quantum states, can be combined to give rise to a new wave state with unique properties; this allows a qubit to potentially be in two states at once.
surgical robot—a remote manipulator used for keyhole surgery.
syntax—a branch of linguistics that studies how words and phrases are arranged in sentences to create meaning.

T

telemetry—automated communication process that allows a machine to identify its position relative to external environmental cues.
temporal synchronization—the alignment of signals from multiple devices to a single time standard, so that, for example, two different devices that record the same event will show the event happening at the exact same time.
Three Laws of Robotics—coined by the science fiction author Isaac Asimov; one of the first serious considerations of the ethics and robopsychological aspects of robotics.
Turing complete—a programming language that can perform all possible computations.
Turing test—a test of a machine’s ability to exhibit intelligent behavior indistinguishable from a human.

U

Unimate—the first off-the-shelf industrial robot, c. 1961.
unmanned aerial vehicle (UAV)—an aircraft that does not have a pilot onboard but typically operates through remote control, automated flight systems, or preprogrammed computer instructions.



V

virtual reality—the use of technology to create a simulated world into which a user may be immersed through visual and auditory input.

W

weak AI—also known as narrow AI; refers to non-sentient computer systems that operate within a predetermined range of skills and usually focus on a singular task or small set of tasks. Most AI in use today is weak AI.


HISTORICAL SNAPSHOTS

1880–1881
• The plush Del Monte Hotel in Monterey, California, opened
• The country claimed 93,000 miles of railroad
• Halftone photographic illustrations appeared in newspapers for the first time
• Midwest farmers burned their corn for fuel as prices were too low to warrant shipping
• President James A. Garfield was assassinated
• The Diamond Match Company was created
• Marquette University was founded in Milwaukee
• Barnum & Bailey’s Circus was created through the merger of two companies
• Chicago meatpacker Gustavus F. Swift perfected the refrigeration car
• Josephine Cockrane of Illinois invented the first mechanical dishwasher
• A U.S. Constitutional amendment to grant full suffrage to women was introduced in Congress this and every year until its passage in 1920
• Thanks to high tariffs, the U.S. Treasury had a surplus of $145 million
• The U.S. had 2,400 magazines and daily newspapers, and 7,500 weekly newspapers
• The typewriter and the telephone were both novelties at the 1876 Centennial in Philadelphia; in 1880, 50,000 telephones existed nationwide and at the turn of the century, that number tripled
• George Eastman’s famous slogan “You Push the Button, We Do the Rest” helped make the Kodak camera a part of many American homes

1885
• The Canadian Pacific Railroad reached the Pacific Ocean
• Baseball set players’ salaries at $1,000-$2,000 for the 1885 season
• The first photograph of a meteor was taken
• Dr. William W. Grant of Davenport, Iowa, performed the first appendectomy
• Bachelor Grover Cleveland entered the White House as president
• Mark Twain’s Adventures of Huckleberry Finn was published
• The Washington Monument was dedicated
• The U.S. Post Office began offering special delivery for first-class mail
• The Salvation Army was officially organized in the U.S.
• Texas was the last Confederate state readmitted to the Union


• Louis Pasteur successfully tested an anti-rabies vaccine on a boy bitten by an infected dog
• Leo Daft opened America’s first commercially operated electric streetcar in Baltimore
• In the Wyoming Territory, 28 Chinese laborers were killed and hundreds more chased out of town by striking coal miners
• The first gasoline pump was delivered to a gasoline dealer in Ft. Wayne, Indiana

1890–1891
• Massive immigration transformed the nation, though it left the rural South largely unaffected
• Irish women immigrants to America, in demand as servants, outnumbered men
• Literary Digest began publication
• Restrictive anti-black “Jim Crow” laws were enacted throughout the South
• American Express Traveler’s Cheque was copyrighted
• Ceresota flour was introduced by the Northwest Consolidated Milling Company
• George A. Hormel & Co. introduced the packaged food Spam
• The International Brotherhood of Electrical Workers was organized
• Bicycle designer Charles Duryea and brother James designed a gasoline engine capable of powering a road vehicle
• Edouard Michelin obtained a patent for a “removable” bicycle tire that could be repaired quickly in the event of puncture
• Rice University and Stanford were chartered
• Bacteriologist Anna Williams obtained her M.D. from the Women’s Medical College of New York and worked in the diagnostic laboratory of the city’s Health Department, the first such lab in America
• Chicago’s Provident Hospital became the first interracial hospital in America
• Irene Coit became the first woman admitted to Yale University
• The electric self-starter for automobiles was patented
• Important books included Tess of the d’Urbervilles by Thomas Hardy; The Light That Failed by Rudyard Kipling; The Picture of Dorian Gray by Oscar Wilde, and Tales of Soldiers and Civilians by Ambrose Bierce

1895

• Mintonette, later called volleyball, was created by William G. Morgan in Holyoke, Massachusetts
• Oscar Wilde’s last play, The Importance of Being Earnest, was first shown at St. James’s Theatre in London


• The first professional American football game was played in Latrobe, Pennsylvania, between the Latrobe YMCA and the Jeannette Athletic Club
• Rudyard Kipling published the story “Mowgli Leaves the Jungle Forever” in Cosmopolitan illustrated magazine
• George B. Selden was granted the first U.S. patent for an automobile
• Wilhelm Röntgen discovered a type of radiation later known as x-rays
• Oscar Hammerstein opened the Olympia Theatre, the first in New York City’s Times Square district
• Alfred Nobel signed his last will and testament, setting aside his estate to establish the Nobel Prize after his death
• The Anti-Saloon League of America was formed in Washington, D.C.
• The London School of Economics and Political Science was founded in England
• W. E. B. Du Bois became the first African-American to receive a Ph.D. from Harvard University

1900

• President William McKinley used the telephone to help his re-election campaign
• Hamburgers were introduced by Louis Lassen in New Haven, Connecticut
• The cost of telephone service fell as more companies offered a 10-party line
• Cigarette smoking was popular and widely advertised
• Excavation began on the New York subway system
• The U.S. College Entrance Examination Board was formed to screen college applicants using a Scholastic Aptitude Test
• Puerto Rico was declared a U.S. territory
• A tidal wave in Galveston, Texas, killed 4,000 people
• The U.S. Navy bought its first submarine

1905–1906

• The Industrial Workers of the World (IWW) attacked the American Federation of Labor for accepting the capitalist system
• A New York law limiting hours of work in the baking industry to 60 per week was ruled unconstitutional by the Supreme Court
• Congress passed the enabling act that set Oklahoma on the path to statehood
• The Planters Nut and Chocolate Company was created
• Samuel Hopkins Adams’ The Great American Fraud exposed the fraudulent claims of many patent medicines




• Anti-liquor campaigners received powerful support from the Woman’s Christian Temperance Union, made famous under the leadership of Frances E. Willard
• Former President Grover Cleveland wrote in The Ladies’ Home Journal that women of sense did not wish to vote: “The relative positions to be assumed by men and women in the working out of our civilizations were assigned long ago by a higher intelligence than ours.”
• President Theodore Roosevelt admonished well-born white women who used birth control for participating in willful sterilization, a practice he condemned as “race suicide”

1910–1911

• Women’s Wear Daily began publication in New York
• 70 percent of bread was baked at home, down from 80 percent in 1890
• Father’s Day and the Boy Scouts of America made their first appearances
• The concept of the “weekend” as a time of rest gained popularity
• Actress Blanche Sweet was one of D.W. Griffith’s regulars in the one- and two-reelers that dominated the movie industry
• The Triangle Shirtwaist factory fire, deadly because the single exit door was locked to prevent theft, brought demands for better working conditions
• California women gained suffrage by constitutional amendment
• The use of fingerprinting in crime detection became widespread
• Marie Curie won an unprecedented second Nobel Prize but was refused admission to the French Academy of Sciences
• The divorce rate climbed to one in 12 marriages, from one in 85 in 1905

1915–1916

• An attempt by Congress to exclude illiterate immigrants, a bill promoted by the unions to protect jobs, had been vetoed in 1913 by President William Howard Taft, who reasoned that illiteracy was no test of character
• The Woman’s Peace Party was founded, with social worker Jane Addams, the founder of Hull House in Chicago, as its first president
• A divorce law requiring only six months of residence was passed in Nevada
• A Chicago law restricted liquor sales on Sunday
• The American Tobacco Company selected salesmen by psychological tests
• Railway workers gained the right to an eight-hour day, preventing a nationwide strike
• Margaret Sanger opened the first birth control clinic in the country, distributing information in English, Italian, and Yiddish


• The Mercury dime and Liberty fifty-cent piece went into circulation
• High school dropout Norman Rockwell published his first illustration in The Saturday Evening Post
• South Carolina raised the minimum working age of children from 12 to 14
• Lewis Terman of Stanford University introduced a test for measuring intelligence, popularizing the term “IQ” for intelligence quotient

1919–1920



• Boston police struck against pay scales of $0.21 to $0.23 per hour for 83- to 98-hour weeks
• The dial telephone was introduced in Norfolk, Virginia
• U.S. ice cream sales reached 150 million gallons, up from 30 million in 1909
• The New York Daily News became the first tabloid (small, picture-oriented) newspaper
• Boston Red Sox pitcher and outfielder Babe Ruth hit 29 home runs for the year, and the New York Yankees purchased his contract for $125,000
• More than four million American workers struck for the right to belong to unions
• The Bureau of Labor Statistics reported that 1.4 million women had joined the American work force since 1911
• Following the 1918 strike by the Union Streetcar Conductors protesting the employment of female conductors, the War Labor Board ruled in favor of the continued employment of women
• Southern leaders of the National Association of Colored Women protested the conditions of domestic service workers, including the expectation of white male employers of the right to take sexual liberties with their servants

1925–1926

• James Buchanan “Buck” Duke donated $47 million to Trinity College at Durham, North Carolina, and the college changed its name to Duke
• College football surpassed boxing as a national pastime
• With prohibition the law of the land, party-goers hid liquor in shoe heels, flasks form-fitted to women’s thighs, and perfume bottles
• The Charleston, a dance that originated in Charleston, South Carolina, was carried north and incorporated into the all-black show Shuffle Along
• The U.S. Supreme Court declared unconstitutional an Oregon law that required all grammar school-aged children to attend school
• The Methodist Episcopal General Conference lifted its ban on theatre attendance and dancing


• Walt Disney began creating cartoons, featuring “Alice’s Wonderland”
• Al Capone took control of Chicago bootlegging
• Chesterfield cigarettes were marketed to women for the first time
• The first ham in a can was introduced by Hormel
• Cars appeared in such colors as “Florentine Cream” and “Versailles Violet”
• Earl Wise’s potato chips were so successful that he moved his business from a remodeled garage to a concrete plant
• To fight depression in the automobile industry, Henry Ford introduced the eight-hour day and five-day work week
• With prohibition under way, the Supreme Court upheld a law limiting the medical prescription of whiskey to one pint every 10 days
• The illegal liquor trade netted $3.5 billion a year
• The movies became America’s favorite entertainment, with more than 14,500 movie houses showing 400 movies a year
• True Story Magazine reached a circulation of two million with stories such as “The Diamond Bracelet She Thought Her Husband Didn’t Know About”
• 40 percent of all first-generation immigrants owned their own homes, while 29 percent of all second-generation immigrants were homeowners
• Sinclair Lewis refused to accept the Pulitzer Prize because it “makes the writer safe, polite, obedient, and sterile”
• Martha Graham debuted in New York as choreographer and dancer in Three Gopi Maidens
• The Jazz Singer, the first talking film, made its debut
• Women’s skirts, the shortest of the decade, were just below the knee, with flounces, pleats, and circular gores that extended from the hip
• Ethel Lackie of the Illinois Athletic Club broke the world’s record for the 40-yard freestyle swim with a time of 21.4 seconds

1930–1931

• The car boom collapsed with the onset of the Depression, and one million auto workers were laid off
• Trousers became acceptable attire for women who played golf and rode horses
• Radio set sales increased to 13.5 million
• Boeing hired eight nurses to act as flight attendants
• Fortune Magazine was launched by Henry R. Luce at $1.00 per issue
• The University of Southern California polo team refused to play against UCLA until its one female member was replaced by a male


• Laurette Schimmoler of Ohio became the first woman airport manager, earning a salary of $510 a year
• Alka-Seltzer was introduced by Miles Laboratories
• Clairol hair products were introduced by U.S. chemists
• For the first time, emigration exceeded immigration
• More than 75 percent of all cities banned the employment of wives
• The National Forty-Hour Work Week League formed, calling for an eight-hour workday in an effort to produce more jobs
• To generate income, Nevada legalized both gambling and the six-month divorce
• Chicago gangster Al Capone was convicted of evading $231,000 in federal taxes
• New York’s Waldorf-Astoria Hotel opened
• Silent film extra Clark Gable appeared in the movie A Free Soul, gaining instant stardom, while Universal Studios recruited actress Bette Davis

1935–1936

• The Social Security Act passed Congress
• The Emergency Relief Appropriation Act gave $5 billion to create jobs
• Fort Knox became the United States repository of gold bullion
• Nylon was developed by Du Pont
• New York State law allowed women to serve as jurors
• An eight-hour work day became law in Illinois
• A Fortune poll indicated that 67 percent favored birth control
• Trailer sales peaked; tourist camps for vacationing motorists gained popularity
• Ford unveiled the V-8 engine
• Recent advances in photography, like the 35 mm camera and easy-to-use exposure meters, fueled a photography boom
• Life magazine began publication, with a claim that one in 10 Americans had a tattoo
• The National Park Service created numerous federal parks and fish and game preserves, adding a total of 600,000 acres to state preserves
• Mercedes-Benz created the first diesel-fueled passenger car
• A revolt against progressive education was led by Robert M. Hutchins, president of the University of Chicago
• Molly Dewson of the National Consumers’ League led a fight to gain the appointment of more female postmasters


1940

• RKO released Walt Disney’s second full-length animated film, Pinocchio
• Truth or Consequences debuted on NBC Radio
• Booker T. Washington became the first African-American to be depicted on a U.S. postage stamp
• The first McDonald’s restaurant opened in San Bernardino, California
• President Franklin D. Roosevelt asked Congress for approximately $900 million to construct 50,000 airplanes per year
• The Auschwitz-Birkenau concentration and death camp opened in Poland
• WWI General John J. Pershing, in a nationwide radio broadcast, urged aid to Britain in order to defend America, while national hero Charles Lindbergh led an isolationist rally at Soldier Field in Chicago
• Nazi Germany rained bombs on London for 57 consecutive nights
• The Selective Training and Service Act of 1940 created the first peacetime draft in U.S. history
• The U.S. imposed a total embargo on all scrap metal shipments to Japan
• Franklin D. Roosevelt defeated Republican challenger Wendell Willkie to become the first and only third-term president
• Agatha Christie’s mystery novel And Then There Were None was published

1945

• President Franklin Delano Roosevelt died in office, and Harry Truman became president
• WWII ended
• Penicillin was introduced commercially
• Strikes idled 4.6 million workers, the worst stoppage since 1919
• The Dow Jones Industrial Average peaked at a post-1929 high of 212.50
• U.S. college enrollments reached an all-time high of more than 2 million
• Ektachrome color film was introduced by the Kodak Company
• Hunt Foods established “price at time of shipment” contracts with customers
• U.S. births soared to 3.4 million, up from 2.9 million the previous year
• New York State forbade discrimination by employers, employment agencies, and labor unions on the basis of race, the first time in American history that a legislative body enacted a bill outlawing discrimination based on race, creed, or color
• The Boy Scouts collected 10 million pounds of rubber and more than 370 million pounds of scrap metal during the war, while Chicago children collected 18,000 tons of newspapers in just five months


1950–1951

• The Korean War began
• Congress increased personal and corporate income taxes
• President Truman ordered the Atomic Energy Commission to develop the hydrogen bomb
• Boston Red Sox star Ted Williams became baseball’s highest-paid player with a $125,000 contract
• Senator Joseph McCarthy announced that he had the names of 205 known Communists working in the State Department
• The FBI issued its first list of the Ten Most Wanted criminals
• The first kidney transplant was performed on a 49-year-old woman in Chicago
• Charles M. Schulz’s comic strip Peanuts debuted in eight newspapers
• M&M candy was stamped with an “M” to assure customers of the real thing
• The first Xerox copy machine was introduced
• The 22nd Amendment to the Constitution, limiting the president to two terms, was adopted
• UNIVAC, the first commercially produced general-purpose electronic computer, was dedicated in Philadelphia
• CBS introduced color television in a program hosted by Ed Sullivan and Arthur Godfrey
• Jet news magazine was launched
• Harvard Law School admitted women for the first time
• H&R Block, in Kansas City, began offering tax preparation services when the IRS stopped preparing people’s taxes
• Margaret Sanger urged the development of an oral contraceptive
• For the first time in history, women outnumbered men in the U.S.
• Julius and Ethel Rosenberg were sentenced to death for espionage against the U.S.
• President Truman dispatched an air force plane when Sioux City Memorial Park in Iowa refused to bury John Rice, a Native American who had died in combat; his remains were interred in Arlington National Cemetery
• Charles F. Blair flew solo over the North Pole
• Entertainer Milton Berle signed a 30-year, million-dollar-plus contract with NBC

1954–1955

• The Supreme Court declared racial segregation in public schools illegal
• Open-heart surgery was introduced by Minneapolis physician C. Walton Lillehei
• RCA introduced the first color television set
• The $13 million, 900-room Fontainebleau Hotel opened at Miami Beach


• Sports Illustrated magazine was introduced
• Dr. Jonas E. Salk, U.S. developer of the anti-polio serum, started inoculating school children in Pittsburgh, Pennsylvania
• Marian Anderson, the first black soloist of the Metropolitan Opera, appeared as Ulrica in Un Ballo in Maschera
• Blacks in Montgomery, Alabama, boycotted segregated city bus lines after Rosa Parks was arrested for refusing to give up her seat
• National Review and the Village Voice began publication
• HEW Secretary Oveta Culp Hobby opposed the free distribution of the Salk vaccine to poor children as “socialized medicine by the back door”
• Smog and poisoned air became a public concern
• Jacqueline Cochran became the first woman to fly faster than the speed of sound
• Racial segregation on interstate buses and trains was ordered to end
• The AFL and CIO merged, with George Meany as president
• The Dow Jones Industrial Average hit a high of 488 and a low of 391
• The Ford Foundation gave $500 million to colleges and universities nationwide

1960

• Four students from North Carolina Agricultural and Technical State University in Greensboro began a sit-in at a segregated Woolworth’s lunch counter, triggering similar nonviolent protests throughout the southern U.S.
• Joanne Woodward received the first star on the Hollywood Walk of Fame
• The U.S. announced that 3,500 American soldiers would be sent to Vietnam
• The U.S. launched the first weather satellite, TIROS-1
• Ben-Hur won the Oscar for Best Picture
• A Soviet missile shot down an American spy plane; pilot Francis Gary Powers was captured and released 21 months later in a spy swap
• President Dwight D. Eisenhower signed the Civil Rights Act of 1960 into law
• The U.S. FDA approved birth control as an additional indication for Searle’s drug Enovid, making it the world’s first approved oral contraceptive pill
• Harper Lee published her critically acclaimed novel To Kill a Mockingbird
• Presidential candidates Richard M. Nixon and John F. Kennedy participated in the first televised presidential debate
• President Eisenhower authorized the use of $1 million toward the resettlement of Cuban refugees, who were arriving in Florida at the rate of 1,000 a week


• The U.S. Supreme Court declared in Boynton v. Virginia that segregation on public transit was illegal

1965

• “Flower Power” was coined by Allen Ginsberg at a Berkeley antiwar rally
• The U.S. Immigration Bill abolished national origin quotas
• The Voting Rights Act, which eliminated literacy tests and provided federal oversight of elections, stimulated a dramatic increase in voting by African-Americans
• The U.S. Supreme Court struck down a Connecticut statute forbidding the use of contraceptives and eliminated state and local film censorship
• After extended hearings on cigarette smoking, Congress required the package warning: “Caution: Cigarette smoking may be hazardous to your health”

1970

• Black Sabbath’s debut album, regarded as the first heavy metal album, was released
• The Nuclear Non-Proliferation Treaty went into effect after ratification by 56 nations
• Earth Day was proclaimed by San Francisco Mayor Joseph Alioto
• Paul McCartney announced the disbanding of the Beatles as their twelfth album, Let It Be, was released
• Four students at Kent State University in Ohio were killed and nine wounded by Ohio National Guardsmen during a protest against the U.S. incursion into Cambodia
• The U.S. promoted its first female generals: Anna Mae Hays and Elizabeth P. Hoisington
• The Women’s Strike for Equality marched down Fifth Avenue in New York City
• The first New York City Marathon took place
• Garry Trudeau’s comic strip Doonesbury debuted in dozens of U.S. newspapers
• The North Tower of the World Trade Center became the tallest building in the world at 1,368 feet
• Alvin Toffler published his book Future Shock

1974–1975

• The pocket calculator was marketed
• The universal product code was designed for the supermarket industry
• 3M developed Post-it stock to stick paper to paper
• The first desktop microcomputer became available
• The Equal Credit Opportunity Act forbade credit discrimination based on sex or marital status
• Minnesota became the first state to require businesses, restaurants, and institutions to establish no-smoking areas


• Top films were The Towering Inferno, Earthquake, and The Exorcist
• McDonald’s opened its first drive-through restaurants
• AT&T, the world’s largest private employer, banned discrimination against homosexuals
• Harvard changed its five-to-two male-to-female admissions policy to equal admissions
• The Atomic Energy Commission was dissolved
• The Supreme Court ruled that the mentally ill cannot be hospitalized against their will unless they are dangerous to themselves or to others
• Chrysler and other auto companies offered rebates to counter record-low sales
• The Rolling Stones tour grossed $13 million, and singer Stevie Wonder signed a record contract for $13 million
• A Massachusetts physician was convicted of manslaughter for aborting a fetus
• Rape laws in nine states now required less corroborative evidence for conviction and restricted trial questions regarding the victim’s past sex life

1980

• Yellow ribbons became a symbol of American concern for the hostages in Iran
• The divorce rate had grown from one in three marriages in 1970 to one in two
• The World Health Organization announced that smallpox had been eradicated
• A 10-year study correlated fatal heart disease with the dietary balance of saturated and unsaturated fats
• The prime rate hit 21 percent, and gold was $880 per ounce
• A Veterans Administration study showed Vietnam vets suffered more emotional, social, educational, and job-related problems than other veterans
• Top albums of the year included Pink Floyd’s The Wall, Blondie’s Eat to the Beat, Michael Jackson’s Off the Wall, and Billy Joel’s Glass Houses
• Researchers at the University of California, San Diego, reported that “passive smoking” can lead to lung cancer
• The “Stop Handguns Before They Stop You” Committee reported: “Last year handguns killed 48 people in Japan, 8 in Great Britain, 34 in Switzerland, 52 in Canada, 58 in Israel, 21 in Sweden, 42 in West Germany, 10,720 in U.S. God Bless America”

1985

• The U.S. Army ruled that male officers were forbidden to carry umbrellas
• The highly addictive, inexpensive cocaine derivative “crack” became popular
• Parents and schools fought over whether AIDS-afflicted children should attend public schools
• The Nobel Peace Prize went to International Physicians for the Prevention of Nuclear War, founded by cardiologists at Harvard and in Moscow


• The Supreme Court upheld affirmative-action hiring quotas
• Rock Hudson became one of the first public figures to acknowledge his battle with AIDS, raising public awareness of the disease

1990

• The Food and Drug Administration approved a low-calorie fat substitute
• The Hubble Space Telescope was launched into orbit
• Dieting became a $33 billion industry
• John J. Audubon’s book Birds of America sold for $3.96 million at auction
• Television premieres included The Simpsons, Law & Order, Twin Peaks, and Seinfeld
• Women constituted 11 percent of U.S. military troops, up from three percent in 1973
• An EPA report claimed that 3,800 people died annually from second-hand smoke
• Dances with Wolves was named Best Picture at the Academy Awards; Pretty Woman, Total Recall, Goodfellas, and Home Alone were also released
• The stock market hit a high of 2,999.75
• President Bush and Premier Gorbachev called for Iraqi withdrawal from Kuwait following the invasion

1995–1996

• The Supreme Court ruled that only a constitutional amendment can enforce term limits on Congress
• The Dow Jones Industrial Average peaked at 5,216
• After 130 years, Mississippi lawmakers ratified the 13th Amendment abolishing slavery
• About 55 percent of women provided half or more of household income
• New York became the 38th state to reinstate capital punishment
• The 25th anniversary of Earth Day was celebrated
• The U.S. banned the manufacture of freon due to its effect on the ozone layer
• Sheik Omar Abdel-Rahman and nine followers were handed long prison sentences for plotting to blow up New York-area landmarks
• France detonated its sixth and most powerful nuclear bomb
• Congress voted to rewrite the 61-year-old Communications Act, freeing the television, telephone, and home computer industries to cross into each other’s fields
• World chess champion Garry Kasparov beat IBM supercomputer Deep Blue, winning a six-game match in Philadelphia
• Dr. Jack Kevorkian was acquitted of assisted suicide for helping two suffering patients kill themselves


• Liggett became the first tobacco company to acknowledge that cigarettes are addictive and cause cancer
• The first of the Nixon White House tapes concerning Watergate were released
• The Senate passed an immigration bill to tighten border controls, make it tougher for illegal immigrants to get U.S. jobs, and curtail legal immigrants’ access to social services

2000

• Millennium celebrations were held throughout the world despite fears of major computer failures due to the “Y2K” bug, fears that proved largely unwarranted
• President Bill Clinton proposed a $2 billion program to bring Internet access to low-income households
• The Supreme Court gave police broad authority to stop and question people who run from a police officer
• The Millennium Summit among world leaders was held at the United Nations
• President Bill Clinton created the Giant Sequoia National Monument to protect 328,000 acres of California trees from timber harvesting
• Judge Thomas Penfield Jackson ruled that Microsoft violated the Sherman Antitrust Act by tying its Internet browser to its operating system
• George W. Bush was declared the winner of the presidential race in a highly controversial election against Al Gore
• The female-oriented television cable channel Oxygen made its debut
• Carlos Santana won eight Grammy Awards, including Album of the Year for Supernatural

2005

• George W. Bush was inaugurated in Washington, DC, for his second term as the forty-third president of the United States
• Demonstrators marched through Baghdad denouncing the U.S. occupation of Iraq, two years after the fall of Saddam Hussein, and rallied in the square where his statue had been toppled in 2003
• The superjumbo jet aircraft Airbus A380 made its first flight from Toulouse
• The Provisional IRA formally ordered an end to the armed campaign it had pursued since 1969 and directed all its units to dump their arms
• The largest UN World Summit in history was held in New York City
• Scientists announced that they had created mice with small amounts of human brain cells in an effort to make realistic models of neurological disorders


2010

• The Tea Party movement hosted its first convention in Nashville, Tennessee
• President Obama established the National Commission on Fiscal Responsibility and Reform
• The U.S. Navy officially announced that it would end its ban on women in submarines
• The District of Columbia’s same-sex marriage law went into effect
• At the 82nd Academy Awards, The Hurt Locker won six Oscars, including the first Best Director award for a woman, Kathryn Bigelow
• President Obama signed the Patient Protection and Affordable Care Act into law, aiming to insure 95 percent of Americans
• An explosion at the Deepwater Horizon oil rig killed 11 workers and sank the rig, initiating a massive offshore oil spill in the Gulf of Mexico, considered the largest environmental disaster in U.S. history
• The Dodd-Frank Wall Street Reform and Consumer Protection Act was signed into law by President Obama
• Former U.S. Solicitor General Elena Kagan was sworn in as a justice of the Supreme Court
• The last U.S. combat troops left Iraq
• The San Francisco Board of Supervisors banned McDonald’s Happy Meal toys, citing obesity concerns
• WikiLeaks founder Julian Assange began releasing confidential U.S. diplomatic documents
• General Motors introduced the first Chevrolet Volt plug-in hybrid electric vehicle
• President Obama signed the Don’t Ask, Don’t Tell repeal into law

2015

• NASA’s Messenger spacecraft concluded its four-year orbital mission over Mercury
• Dzhokhar Tsarnaev was sentenced to death for the 2013 Boston Marathon bombing
• Cuba was officially removed from the US State Sponsors of Terrorism list
• Caitlyn Jenner, formerly known as Olympian Bruce Jenner, became the first transgender person to appear on the cover of Vanity Fair magazine
• Rachel Dolezal resigned as president of the NAACP chapter in Spokane, Washington, amid allegations that she claimed to be black but was actually white
• In a 6-3 decision, the Supreme Court upheld subsidies for the Patient Protection and Affordable Care Act (also known as Obamacare) nationwide


• The Supreme Court ruled that the Constitution guarantees a right to same-sex marriage
• The South Carolina State House removed the Confederate battle flag from its grounds after weeks of protest and placed it in a museum
• Birdman won four Oscars, including Best Picture and Best Director
• President Obama announced the Clean Power Plan, which included the first-ever Environmental Protection Agency standards on carbon pollution from U.S. power plants
• Kim Davis, a clerk for Rowan County, Kentucky, was found in contempt of court and jailed for five days for refusing to issue marriage licenses to same-sex couples
• President Obama ordered up to 50 U.S. special operations ground troops deployed to Syria to fight Islamic State militants
• Defense Secretary Ashton Carter announced that all combat roles in the military must be opened to women

2019

• SpaceIL launches the Beresheet probe, the world’s first privately financed mission to the Moon
• Fifty people are killed and fifty others injured in terrorist attacks, streamed live on Facebook, on two mosques in Christchurch, New Zealand, the deadliest shooting in the country’s modern history; Facebook disabled 1.5 million videos of the event
• At the 91st Academy Awards, Green Book won Best Picture and Best Supporting Actor for Mahershala Ali’s portrayal of Don Shirley; Bohemian Rhapsody led the ceremony with four awards, including Best Actor for Rami Malek’s portrayal of Freddie Mercury; Roma and Black Panther also received three awards apiece, with the former winning Best Director for Alfonso Cuarón and becoming the first Mexican submission to win Best Foreign Language Film; Olivia Colman was awarded Best Actress for portraying Anne, Queen of Great Britain, in The Favourite
• The Overstory by Richard Powers wins the Pulitzer Prize for Fiction
• A Texas Longhorn from Alabama sets a Guinness world record with horns nearly 11 feet wide, longer than the Statue of Liberty’s face
• Venezuela enters a constitutional crisis as Juan Guaidó and the National Assembly declare incumbent President Nicolás Maduro “illegitimate”; Maduro severs diplomatic ties with the United States and Colombia and expels the German ambassador
• President Donald Trump confirms that the U.S. will leave the Intermediate-Range Nuclear Forces Treaty of 1987; Russia follows with suspension of its obligations to the treaty





• Pope Francis is the first pontiff to visit the Arabian Peninsula and to change the church’s stance on the death penalty, arguing for its abolition
• An unmanned demonstration flight of the new crew-capable version of the SpaceX Dragon spacecraft, intended to carry American astronauts into space, achieves successful autonomous docking with the International Space Station
• Ethiopian Airlines Flight 302, a Boeing 737 MAX 8 bound for Nairobi, crashes shortly after takeoff, killing all 157 people aboard; Boeing 737 MAX 8 and MAX 9 models are subsequently grounded worldwide
• The final territory of the Islamic State of Iraq and the Levant (ISIL), located in Syria, is liberated
• Scientists from the Event Horizon Telescope project announce the first-ever image of a black hole, located in the center of the M87 galaxy
• NepaliSat-1, Nepal’s first research satellite, is launched into space
• A series of bomb attacks at eight locations in Sri Lanka leaves at least 253 people dead and over 500 injured; this is the first major terrorist attack in the country since the Sri Lankan Civil War ended in 2009
• Comedian Volodymyr Zelensky is elected President of Ukraine in a runoff election; Zelensky portrayed a fictional Ukrainian president in the television series Servant of the People
• The number of deaths from the Kivu Ebola outbreak exceeds 1,000, making it the second-deadliest Ebola outbreak in history, surpassed only by the West African Ebola virus epidemic of 2013–2016
• Taiwan’s parliament becomes the first in Asia to legalize same-sex marriage; Botswana decriminalizes homosexuality; the Supreme Court of Ecuador rules in favor of same-sex marriage
• British Prime Minister Theresa May announces her resignation in the wake of the failure of her Brexit deal
• South Korean newspaper Chosun Ilbo reports that North Korea executed nuclear envoy Kim Hyok-chol and four other diplomats in March after the failed Hanoi summit with the United States, and that Kim Jong-un’s top aide was sentenced to hard labor
• The 2019 FIFA Women’s World Cup, held in France, is won by the United States
• Several U.S. states pass highly controversial fetal heartbeat bills, which ban abortions as early as six weeks, before many women are even aware that they are pregnant
• President Donald Trump declares a national state of emergency to obtain funding for his border wall; the Supreme Court rules that he can use $2.5 billion in military funding to proceed with construction


• The Supreme Court blocks a citizenship question from being added to the 2020 census; the question could have caused fewer people to respond and ultimately misrepresented minority populations
• Rep. Alexandria Ocasio-Cortez releases a blueprint for a Green New Deal to combat climate change that calls for a massive cut in carbon emissions
• Notable deaths in 2019 include liberal Supreme Court Justice John Paul Stevens; African American artist Mavis Pusey; film directors John Singleton and Franco Zeffirelli; boxers Pernell Whitaker and Eusebio Pedroza; fashion industry giants Gloria Vanderbilt and Karl Lagerfeld; scientist Murray Gell-Mann; and architect I. M. Pei
• July 20 marks the 50th anniversary of the Apollo 11 moon landing
• Australian Kerry Robertson becomes the first person to use a new assisted dying program to end her life
• Saudi Arabia announces new rules for women, including independent travel without a male guardian's permission
• Walmart announces that it will stop selling handguns and some ammunition in the wake of the El Paso Walmart shooting
• Joker, directed by Todd Phillips and starring Joaquin Phoenix, wins the Golden Lion at the Venice Film Festival
• Poet John Milton's annotated copy of Shakespeare's First Folio (1623) is found in a Philadelphia library, in what could be the modern world's most important literary discovery
• Teen climate activist Greta Thunberg sails from Sweden to New York on an emissions-free yacht and delivers an emotional speech to the United Nations about climate change
• House Speaker Nancy Pelosi announces a formal impeachment inquiry into President Donald Trump
• The World Health Organization announces that 800,000 children in DR Congo will be vaccinated as the world's largest measles epidemic claims thousands of lives
• The 2019 Nobel Peace Prize is awarded to Ethiopian Prime Minister Abiy Ahmed
• Amazon's Jeff Bezos and MacKenzie Bezos agree to a record-breaking $35 billion divorce settlement

2020

• Qasem Soleimani, one of Iran's most senior military figures, was killed in a U.S. drone strike
• Iran pulled out of the 2015 nuclear deal, saying it will no longer limit its uranium enrichment
• Golden Globes: 1917 won Best Drama and Once Upon a Time in Hollywood Best Comedy/Musical; Renée Zellweger won Best Actress (Drama) for Judy and Joaquin Phoenix Best Actor (Drama) for Joker; Awkwafina won Best Actress (Comedy/Musical) for The Farewell and Taron Egerton Best Actor (Comedy/Musical) for Rocketman
• The Duke and Duchess of Sussex (Prince Harry and Meghan Markle) announced their decision to "step back" as senior members of the Royal Family, a move dubbed "Sussexit" or "Megxit" in the international media
• The Australian bushfires death toll reached 27; millions of hectares of land burned, with devastating ecological effects
• In the process known as Brexit, the United Kingdom and Gibraltar formally withdrew from the European Union, beginning an 11-month transition period
• President Donald Trump was acquitted on two articles of impeachment in February after a highly partisan Senate trial
• The World Health Organization labeled the coronavirus outbreak a pandemic in March as confirmed cases outside China, where the epidemic broke out, tripled in a week; parts of China and Italy, with the bulk of the worldwide death toll of 3,996, remained on lockdown
• Film producer Harvey Weinstein was convicted of a criminal sex act and rape and sentenced to 23 years in prison after over 100 women made public allegations against him
• A helicopter crash killed all nine people aboard, including basketball star Kobe Bryant, who was 41, and his 13-year-old daughter
• A rare circumbinary planet, TOI 1338 b, was discovered
• The International Criminal Court authorized the Afghanistan war crimes inquiry to proceed, allowing U.S. citizens to be investigated for the first time
• In March, share prices fell sharply around the world in response to economic concerns and the impact of COVID-19; the Dow Jones Industrial Average plunged more than 2,000 points, its biggest ever fall in intraday trading, and oil prices dropped by as much as 30% in early trading, the biggest fall since 1991, after Saudi Arabia launched a price war


• Notable deaths in 2020 include Egyptian president Hosni Mubarak, ousted during the Arab Spring; American actor Kirk Douglas; Prozac Nation author Elizabeth Wurtzel; and British computer scientist Peter T. Kirstein
• In June, global COVID-19 confirmed cases exceeded 10 million, with a worldwide death toll of more than 500,000
• The death of George Floyd at the hands of police sparked nationwide protests, riots, and looting in the United States; cities imposed curfews, and Los Angeles declared a state of emergency
• The Supreme Court struck down a Louisiana law requiring abortion clinics to have admitting privileges at nearby hospitals
• Princeton University announced it will remove Woodrow Wilson's name from campus buildings because of his racist thinking
• U.S. President Donald Trump suspended funding of the World Health Organization
• The European Space Agency/JAXA space probe BepiColombo set out for Venus; its final destination is Mercury by 2025
• After criticism for the spread of coronavirus misinformation, Facebook removed "pseudoscience" and "conspiracy theory" as options for targeted ads
• The Pentagon formally released three videos of "unidentified aerial phenomena" encountered by U.S. Navy pilots
• Radiocarbon and DNA analysis of fossils found in Bulgaria indicated they belong to Homo sapiens rather than Neanderthals, meaning that modern humans may have arrived in Europe thousands of years earlier than previously thought
• Costa Rica became the first Central American country to legalize same-sex marriage
• Private company SpaceX launched NASA astronauts to the International Space Station, the first crewed spacecraft to take off from U.S. soil since the retirement of the Space Shuttle in 2011
• Russian president Vladimir Putin declared a state of emergency after 20,000 tons of oil leaked into the Ambarnaya River near the Siberian city of Norilsk
• Iran shot down Ukrainian airliner PS752, killing all 176 people on board, during a series of escalations that nearly brought Iran and the United States to war
• The 2020 Summer Olympics, scheduled to take place in Tokyo, were postponed until 2021 due to the coronavirus; other major sporting events worldwide were cancelled or postponed
• South Korean film Parasite unexpectedly won the Oscar for Best Picture
• A store of ammonium nitrate exploded in Beirut, Lebanon, leaving 190 dead, 6,500 injured, and an estimated 300,000 people homeless; the Lebanese government resigned amid public anger
• A locust infestation continued to threaten food supplies in East Africa and other regions, the worst outbreak in 70 years in Kenya and the worst in 25 years in Ethiopia, Somalia, and India
• The United States passed the Hong Kong Autonomy Act, imposing sanctions on individuals and entities that materially contribute to the Chinese central government's suppression of the Hong Kong democracy movement
• Joe Biden was officially selected as the 2020 Democratic presidential candidate, running against Republican incumbent Donald Trump; a surge in mail-in ballots for the November 2020 US presidential election was expected, raising concerns over voter fraud and U.S. Post Office resources
• Senator Kamala Harris was selected as the Democratic vice presidential candidate, becoming the third woman to run for vice president on a major-party ticket and the first Black woman and first person of Indian descent to do so
• Rumors circulated that North Korean leader Kim Jong-un was gravely ill or dead, but state media later released photos of Kim at a ribbon-cutting ceremony
• A bitcoin scam allegedly run by Florida teen Graham Clark took over the Twitter accounts of Joe Biden, Bill Gates, Elon Musk, and Kanye West, among others
• Flooding of the Brahmaputra River killed 189 people and left 4 million homeless in India and Nepal
• Russian voters backed a constitutional amendment that will allow president Vladimir Putin to seek two additional six-year terms when his current term ends in 2024
• Asian giant hornets, known as "murder hornets," were seen for the first time in the United States, in Washington state; the hornets can wipe out entire bee colonies within hours
• Russian president Vladimir Putin announced that Russia had approved the world's first COVID-19 vaccine
• Israel and the UAE agreed to normalize relations, marking the third Israel-Arab peace deal
• Amazon CEO Jeff Bezos became the first person in history to have a net worth exceeding $200 billion, according to Forbes
• Japanese Prime Minister Shinzo Abe, the longest-serving prime minister in Japan's history, announced his resignation due to ill health
• As of August, worldwide cases of coronavirus reached 21.5 million, with 844,000 deaths; in the United States, 6 million cases were reported, with 183,000 deaths


BIBLIOGRAPHY “A Brief History of WiFi: Then and Now.” EERO. The Download. 27 Sept. 2016, blog.eero.com/abrief-history-of-wifi-then-and-now/. Allison, Peter Ray. “What Does a Bomb Disposal Robot Actually Do?” BBC. Future. 15 July 2016, www.bbc.com/future/article/20160714-what-does-a-bomb-disposal-robot-actually-do. Anderson, Janna, and Lee Rainie. “Artificial Intelligence and the Future of Humans.” Pew Research Center. 10 Dec. 2018, www.pewresearch.org/internet/2018/12/10/solutions-toaddress-ais-anticipated-negative-impacts/. Anderson, Mark Robert. “After 75 Years, Isaac Asimov’s Three Laws of Robotics Need Updating.” The Conversation. 17 Mar. 2017, theconversation.com/after-75-years-isaac-asimovs-threelaws-of-robotics-need-updating-74501. Anderson, Mark Robert. “Twenty Years on from Deep Blue vs. Kasparov: How a Chess Match Started the Big Data Revolution.” The Conversation. 11May 2017, theconversation.com/ twenty-years-on-from-deep-blue-vs-kasparov-how-a-chess-match-started-the-big-datarevolution-76882. Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. “Machine Bias.” ProPublica. 23 May 2016, www.propublica.org/article/machine-bias-risk-assessments-in-criminalsentencing. “Animals Think, Therefore . . .” The Economist. 12 Dec. 2015, www.economist.com/news/ essays/21676961-inner-lives-animals-are-hard-study-there-evidence-they-may-be-lot-richerscience-once-thought. Anyoha, Rockwell. “The History of Artificial Intelligence.” SITN. Science in the News. 28 Aug 2017, sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/. Asimov, Isaac. “Reason.” Addsdonna. addsdonna.com/old-website/ADDS_DONNA/Science_ Fiction_files/2_Asimov_Reason.pdf. Asimov, Isaac. “Runaround.” Williams College. web.williams.edu/Mathematics/sjmiller/public_ html/105Sp10/handouts/Runaround.html. Atherton, Kelsey D. “A Brief History of the Demise of Battle Bots.” Popular Science. 
19 June 2013, www.popsci.com/technology/article/2013-06/elaborate-history-how-wedges-ruinedbattlebots/. “‘Audrey’—New Electronic Device Developed by Bell Labs.” Signals. vol. 7, 1953. Averbakh, Yuri. A History of Chess: From Chaturanga to the Present Day. Russell Enterprises, Inc., 2012.

Robotics_2.indd 461

10/2/2020 12:37:40 PM

462 | Bibliography “Babbage Machine.” New Ulm Review. Brown County, MN. 30 Sept 1896. Chronicling America: Historic American Newspapers. LOC, Library of Congress, chroniclingamerica.loc.gov/lccn/ sn89081128/1896-09-30/ed-1/seq-7/. “Babbage’s Analytical Engine, 1834–1871, Trial Model.” Science Museum Group. 2020, collection. sciencemuseumgroup.org.uk/objects/co62245/babbages-analytical-engine-1834-1871-trialmodel-analytical-engines. “Backgrounder on the Three Mile Island Accident.” USNRC. United States Nuclear Regulatory Commission. 21 June 2018. www.nrc.gov/reading-rm/doc-collections/fact-sheets/3mile-isle. html. Ball, Philip. “Quantum Physics May Be Even Spookier Than You Think.” Scientific American. 21 May 2018, www.scientificamerican.com/article/quantum-physics-may-be-even-spookierthan-you-think/. Ball, Philip. “Quantum Physics May Be Even Spookier Than You Think.” Scientific American. 21 May 2018, www.scientificamerican.com/article/quantum-physics-may-be-even-spookierthan-you-think/. Barlow, Alfred. The History and Principles of Weaving by Hand and by Power. London: Sampson Low, Marston, Searle & Rivington, 1884. Bartels, Meghan. “Curiosity Snaps Epic ‘Selfie’ on Mars as It Moves on to New Martian Sights.” Space. 29 Jan. 2019, www.space.com/43150-mars-curiosity-selfie-leaving-vera-rubin-ridge. html. Beard, Mary. How Do We Look: The Body, the Divine, and the Question of Civilization. Liveright Publishing, 2018. Beiser, Vince. “The Robot Assault on Fukushima.” Wired. 26 Apr. 2018, www.wired.com/story/ fukushima-robot-cleanup/. Beley, Gene. “The Quasar Industries’ Robot.” Modern Mechanix. Apr. 1978, blog.modernmechanix. com/the-quasar-industries-robot/. Bellis, Mary. “The History of the UNIVAC Computer.” Thought Co., Dotdash Publishing, 5 Mar. 2019, www.thoughtco.com/the-history-of-the-univac-computer-1992590. Bernardi, Gabriella. “AI Says We Can Teleport Qubits.” Cosmos. 25 June 2019, cosmosmagazine. 
com/technology/thanks-to-ai-we-know-we-can-teleport-qubits-in-the-real-world/. Beyer, Kurt. Grace Hopper and the Invention of the Information Age. MIT P, 2012. Biever, Celeste. “Celebrations as Higgs Boson Is Finally Discovered.” NewScientist. 4 July 2012, www.newscientist.com/article/dn22014-celebrations-as-higgs-boson-is-finally-discovered/. Binfield, Kevin. Writings of the Luddites. Johns Hopkins UP, 2004.

Robotics_2.indd 462

10/2/2020 12:37:40 PM

Bibliography | 463 Boden, Margaret A. Mind as Machine: A History of Cognitive Science. Clarendon P, 2006. Braman, Lisa. “Cooking Through the Ages: A Timeline of Oven Inventions.” Smithsonian. 18 Nov. 2011, www.smithsonianmag.com/arts-culture/cooking-through-the-ages-a-timeline-ofoven-inventions-380050/. Brouwers, Josho. “Talos, the Bronze Guardian.” Ancient World Magazine. 1 Oct. 2019, www. ancientworldmagazine.com/articles/talos-bronze-guardian/. Buchanan, Matt. “Object of Interest: The PackBot.” The New Yorker. Condé Nast. 23 Apr. 2013, www.newyorker.com/tech/annals-of-technology/object-of-interest-the-packbot. Campbell-Kelly, Martin. Computer: A History of the Information Machine. Routledge, 2014. Byford, Sam. “Colossus: How the First Programmable Electronic Computer Saved Countless Lives.” The Verge, Vox Media, 12 Mar. 2012, www.theverge.com/2012/3/12/2864068/colossusfirst-programmable-electronic-computer. “Calculator Is Fast as Light.” Laurel Outlook, 1 May 1946. Chronicling America: Historic American Newspapers. LOC., Library of Congress. chroniclingamerica.loc.gov/lccn/sn86075258/194605-01/ed-1/seq-9. “Callistratus.” Theoi Project. Classical Texts Library. 2017, www.theoi.com/Text/Callistratus. html. Cangelosi, Angelo, and Matthew Schlesinger. Developmental Robotics: From Babies to Robots. MIT P, 2015. Čapek, Karel. “R.U.R. (Rossum’s Universal Robots).” 1920. Project Gutenberg. 22 Mar. 2019, www.gutenberg.org/files/59112/59112-h/59112-h.htm. Carey, Bridget. “My Week with Aibo: What It’s Like to Live with Sony’s Robot Dog.” CNET. 28 Nov. 2018, www.cnet.com/news/my-week-with-aibo-what-its-like-to-live-with-sonys-robotdog/. Carper, Steve. Robots in American Popular Culture. McFarland & Company Inc., 2019. Carter, Jimmy. “Nobel Lecture.” 10 Dec. 2002, The Nobel Prize, www.nobelprize.org/prizes/ peace/2002/carter/lecture/. Cartwright, Jon. “Collapse: Has Quantum Theory’s Greatest Mystery Been Solved?” NewScientist. 
13 July 2016, www.newscientist.com/article/mg23130820-200-collapse-has-quantum-theorysgreatest-mystery-been-solved/. Cartwright, Mark. “Prometheus.” Ancient History Encyclopedia. 20 Apr. 2013, www.ancient.eu/ Prometheus/. Cassel, David. “Remembering Shakey, the First Intelligent Robot.” New Stack. 5 Mar. 2017, thenewstack.io/remembering-shakey-first-intelligent-robot/.

Robotics_2.indd 463

10/2/2020 12:37:40 PM

464 | Bibliography Catalan, Cristobal. “Metropolis at 90: You’ll Never See a Movie Like This Again.” FEE, Foundation for Economic Education, 4 Nov. 2017, fee.org/articles/metropolis-at-90-you-ll-never-see-amovie-like-this-again/. “Chernobyl Accident 1986.” World Nuclear Association. Apr. 2020, www.world-nuclear.org/ information-library/safety-and-security/safety-of-plants/chernobyl-accident.aspx. Church, Zach. “Professor Emeritus Jay Forrester, Digital Computing and System Dynamics Pioneer, Dies at 98.” MIT News. 19 Nov. 2016, news.mit.edu/2016/professor-emeritus-jayforrester-digital-computing-system-dynamics-pioneer-dies-1119. “Colossus.” Crypto Museum. 24 Feb. 2018, www.cryptomuseum.com/crypto/colossus/index.htm. “Colossus—the Computer Built to Break Hitler’s Codes.” The Telegraph. 9 Dec. 2016, www. telegraph.co.uk/technology/connecting-britain/colossus-bletchley-computer-broke-hitlercodes/. Connor, Roger. “The Predator, a Drone That Transformed Military Combat.” Smithsonian. 9 Mar. 2018, airandspace.si.edu/stories/editorial/predator-drone-transformed-military-combat. Corke, Peter. Robotics, Vision and Control: Fundamental Algorithms in MATLAB. Springer, 2011. Cortes, Jan. “‘Star Wars’ Droids and the Science Behind Why They’re Adorable.” Medical Daily. 18 Dec. 2019, www.medicaldaily.com/star-wars-droids-science-behind-why-adorable-447264. Coyne, Jerry. “Yes, There Is a War between Science and Religion.” The Conversation. 21 Dec. 2018, theconversation.com/yes-there-is-a-war-between-science-and-religion-108002. Da Costa, Pedro Nicolaci. “A Technology Gap between the Rich and Poor Is Deepening US Inequality.” Business Insider. 4 May 4 2017. www.businessinsider.com/technology-gapdeepening-us-inequality-2017-5. Dara, Rozita. “The Dark Side of Alexa, Siri and Other Personal Digital Assistants.” The Conversation. 15 Dec. 2019, /theconversation.com/the-dark-side-of-alexa-siri-and-otherpersonal-digital-assistants-126277. Dave, Paresh. 
“Hitchhiking Robot That Made It across Canada Maimed on U.S. Road Trip.” The Los Angeles Times. 3 Aug. 2015, www.latimes.com/business/technology/la-fi-tn-hitchbotdestroyed-20150803-story.html. Davidson, Paul. “Automation Could Kill 73 Million U.S. Jobs by 2030.” USA Today. 28 Nov. 2017, www.usatoday.com/story/money/2017/11/29/automation-could-kill-73-million-u-sjobs-2030/899878001/. Dinello, Daniel. Technophobia!: Science Fiction Visions of Posthuman Technology. U of Texas P, 2005. Dirks, Jerald. The Abrahamic Faiths: Judaism, Christianity, and Islam: Similarities and Contrasts. Amana Publications, 2004.

Robotics_2.indd 464

10/2/2020 12:37:40 PM

Bibliography | 465 “Drone Warfare.” The Bureau of Investigative Journalism. 2020, www.thebureauinvestigates. com/projects/drone-war. Dryer, Ben. “Explainer: How Hostile Is Space?” The Conversation.6 Aug. 2014, theconversation. com/explainer-how-hostile-is-space-27903. Du Sautoy, Marcus. “True AI Creativity Is Coming and Will Reveal the Minds of Machines.” NewScientist. 8 May 2019, www.newscientist.com/article/mg24232292-000-true-ai-creativityis-coming-and-will-reveal-the-minds-of-machines/. Dvorsky, George. “The Most Unfortunate Design Flaws in the Human Body.” Gizmodo. 7 Feb. 2014, io9.gizmodo.com/the-most-unfortunate-design-flaws-in-the-human-body-1518242787. Earnest, Les. “Stanford Cart.” Stanford University. Dec. 2012, web.stanford.edu/~learnest/sail/ oldcart.html. Edwards, Benj. “7 Robots Every Geeky 80s Kid Wanted.” PC. 23 July 2015, www.pcmag.com/ news/7-robots-every-geeky-80s-kid-wanted. Eisenberg, Anne. “What’s Next: Wired to the Brain of a Rat, a Robot Takes On the World.” The New York Times. 15 May 2003, www.nytimes.com/2003/05/15/technology/what-s-next-wiredto-the-brain-of-a-rat-a-robot-takes-on-the-world.html. “The ELIZA Effect.” 99 Percent Invisible. 10 Dec. 2019, 99percentinvisible.org/episode/the-elizaeffect/. Emerging Technology from the arXiv. “Do We Need Asimov’s Laws?” MIT Technology Review. 16 May 2014, www.technologyreview.com/2014/05/16/172841/do-we-need-asimovs-laws/. Eisenberg, Anne. “What’s Next: Wired to the Brain of a Rat, a Robot Takes On the World.” The New York Times. 15 May 2003, www.nytimes.com/2003/05/15/technology/what-s-next-wiredto-the-brain-of-a-rat-a-robot-takes-on-the-world.html. Engber, Daniel. “The Neurologist Who Hacked His Brain—And Almost Lost His Mind.” Wired. Condé Nast. 26 Jan. 2016, www.wired.com/2016/01/phil-kennedy-mind-control-computer/. Engelberger, Joseph F. Robotics in Service. The MIT P, 1989. Essinger, James. Jacquard’s Web: How a Hand-loom Led to the Birth of the Information Age. Oxford UP, 2004. 
Estes, Adam Clark. “The Terrible Truth about Alexa.” Gizmodo. 27 Apr. 2019, gizmodo.com/theterrible-truth-about-alexa-183405404#. Everett, H.R. “A Brief Early History of Unmanned Systems.” Mechanix Illustrated. 2016. mechanixillustrated.technicacuriosa.com/2017/02/19/a-brief-early-history-of-unmannedsystems/. Everett, H.R. Unmanned Systems of World War I and II. The MIT P, 2015.

Robotics_2.indd 465

10/2/2020 12:37:40 PM

466 | Bibliography Evon, Dan. “Was a Mars Rover’s Final Message to NASA ‘My Battery Is Low and It’s Getting Dark?’” Snopes. 18 Feb. 2019, www.snopes.com/fact-check/opportunity-rovers-final-words/. Ewalt, David M. “When Omnibot Was Hot.” Forbes. 1 Nov. 2011, www.forbes.com/sites/ davidewalt/2011/11/01/when-omnibot-was-hot/#45fb2f0e1147. Fava-Verde, Jean-François. Silk and Innovation: The Jacquard Loom in the Age of the Industrial Revolution. Histancia, Ltd, 2011. Friedel, Frederic. “Reconstructing Turing’s ‘Paper Machine.’” Chessbase. 23 Sept. 2017, en.chessbase.com/post/reconstructing-turing-s-paper-machine. Gabbatt, Adam. “Hitchbot’s Decapitators Avoided Capture by the Hitchhiking Android’s Cameras.” The Guardian. 3 Aug. 2015, www.theguardian.com/technology/2015/aug/03/ hitchbot-hitchhiking-robot-destroyed-philadelphia. Gehring, Wes D. Film Clowns of the Depression. McFarland & Company, Inc. Publishers, 2007. Gibbs, Samuel. “Elon Musk Leads 116 Experts Calling for Outright Ban of Killer Robots.” The Guardian. 20 Aug. 2017, www.theguardian.com/technology/2017/aug/20/elon-musk-killerrobots-experts-outright-ban-lethal-autonomous-weapons-war. Gill, Victoria. “Nasa’s Voyager 2 Probe ‘Leaves the Solar System.’” BBC News. 10 Dec. 2018, www.bbc.com/news/science-environment-46502820. Goodwin, Antuan. “Honda Debuts Four Very Different Concepts at CES.” CNET. Road Show. 9 Jan. 2018, www.cnet.com/roadshow/news/honda-3e-robotics-concepts/. Goss, Jennifer L. “Henry Ford and the Auto Assembly Line.” Thought Co., Dotdash Publishing, 23 Jan. 2020, www.thoughtco.com/henry-ford-and-the-assembly-line-1779201. Gralla, Preston. “Cortana, Say Goodbye.” Computer World. 9 Mar. 2020, www.computerworld. com/article/3531417/cortana-say-goodbye.html. Hall, R. Cargill. Lunar Impact: The NASA History of Project Ranger. Dover Publications, 2010. Hamilton, Jon. “Myth Busting: The Truth about Animals and Tools.” NPR. 23 Dec 2011. 
https:// www.npr.org/2011/12/23/143833929/myth-busting-the-truth-about-animals-and-tools. Harland, David M. NASA’s Moon Program: Paving the Way for Apollo 11. Springer, 2009. Healy, Kieran. “America Is a Violent Country.” The Washington Post. 8 Nov. 2018, www. washingtonpost.com/news/monkey-cage/wp/2017/10/03/america-is-a-violent-country/. Hern, Alex. “Apple Contractors ‘Regularly Hear Confidential Details’ on Siri Recordings.” The Guardian. 26 July 2019, www.theguardian.com/technology/2019/jul/26/apple-contractorsregularly-hear-confidential-details-on-siri-recordings. Hernandez, Daniela. “Before the iPad, There Was the Honeywell Kitchen Computer.” Wired. 22 Nov. 2012, www.wired.com/2012/11/kitchen-computer/.

Robotics_2.indd 466

10/2/2020 12:37:40 PM

Bibliography | 467 “History of the Cylinder Phonograph.” LOC. Library of Congress. 2020, www.loc.gov/collections/ edison-company-motion-pictures-and-sound-recordings/articles-and-essays/history-ofedison-sound-recordings/history-of-the-cylinder-phonograph/#. Hodges, Andrew. Alan Turing: The Enigma. Princeton UP, 1983. Holland, Owen. “The First Biologically Inspired Robots.” Robotica, vol. 21, no. 4, 2003, pp. 351– 63, Cambridge Core, doi: 10.1017/S0263574703004971. Homer, The Iliad. Translated by William Cowper. Gutenberg Project. 5 Aug. 2005, www.gutenberg. org/files/16452/16452-h/16452-h.htm. Homer, The Odyssey. Translated by Samuel Butler. Gutenberg Project. Apr. 1999, www.gutenberg. org/files/1727/1727-h/1727-h.htm. “How High Is Up?” Evening Star. Washington DC, 8 Sept. 1946. Chronicling America: Historic American Newspapers. LOC, Library of Congress. chroniclingamerica.loc.gov/lccn/ sn83045462/1946-09-08/ed-1/seq-83/. “How Is Automation Perceived by the Public?” World Economic Forum. 11 Feb. 2020, www. weforum.org/agenda/2020/02/public-opinion-4ir-automation-ai. “How Language Developed: Comprehension Learning Precedes Vocal Production.” Science Daily. 27 May 2019, www.sciencedaily.com/releases/2019/05/190527111733.htm. “How Robots Change the World.” Oxford Economic. June 2019, cdn2.hubspot.net/hubfs/2240363/ Report%20-%20How%20Robots%20Change%20the%20World.pdf?utm. Howell, Elizabeth. “Sojourner: Roving on Mars for the First Time.” Space. 24 Sept. 2012, www. space.com/17745-mars-pathfinder-sojourner-rover.html. Hutchins, John W. Early Years in Machine Translation: Memoirs and Biographies of Pioneers. John Benjamins Publishing Company, 2000. “The Impact of Robotic Process Automation on the Automotive Industry.” Medium. 12 Mar. 2018, medium.com/@cigen_rpa/the-impact-of-robotic-process-automation-on-the-automotiveindustry-6a0eabaf9ee1. “Isaac Asimov (1920–1992)” Jewish Virtual Library. 2020, www.jewishvirtuallibrary.org/isaacasimov. Ivanhoe, Philip J., and Bryan W. 
Van Norden. Readings in Classical Chinese Philosophy. Hackett Publishing, 2001. “John F. Kennedy Moon Speech—Rice Stadium.” 12 Sept. 1962. NASA. 2020, er.jsc.nasa.gov/seh/ ricetalk.htm. Jordan, John M. “The Czech Play That Gave Us the Word ‘Robot.’” The MIT Press Reader. 29 July 2019, thereader.mitpress.mit.edu/origin-word-robot-rur./ Kalan, Elliott. “The Original Futurama: The Legacy of the 1939 World’s Fair.” Popular Mechanics.

Robotics_2.indd 467

10/2/2020 12:37:40 PM

468 | Bibliography Hearst Magazine Media. 11 Mar. 2010, www.popularmechanics.com/technology/design/ a5322/4345790/. “Karel Capek.” History-Computer. history-computer.com/People/KarelCapekBio.html. Kehe, Jason. “How FX Wizards Brought Interstellar’s Strange Bots to Life.” Wired. 20 Nov. 2014, www.wired.com/2014/11/interstellar-droids/. Kerrigan, Saoirse. “The History Behind the 15 Kitchen Appliances That Changed the Way We Live.” Interesting Engineering. 13 Apr. 2018, interestingengineering.com/the-history-behindthe-15-kitchen-appliances-that-changed-the-way-we-live. King, Barbara J. “For How Long Have We Been Human?” NPR. 13 Sept 2012, www.npr.org/ sections/13.7/2012/09/11/160934187/for-how-long-have-we-been-human. King, Martin Luther, Jr. “Remaining Awake Through a Great Revolution.” Stanford. King Institute. kinginstitute.stanford.edu/king-papers/publications/knock-midnight-inspirationgreat-sermons-reverend-martin-luther-king-jr-10. Kleinfield, N.R. “Developer Calls It a Robot Butler; Scoffers Say Screws Are Loose.” The New York Times. 5 Dec. 1977, www.nytimes.com/1977/12/05/archives/developer-calls-it-a-robotbutler-scoffers-say-screws-are-loose.html. Kurzweil, Ray. The Singularity Is Near. Penguin Books, 2005. Lafranco, Anthony R., Andres E. Castellanos, Jaydev P. Desai, and William C. Meyers. “Robotic Surgery: A Current Perspective.” Annals of Surgery. Jan. 2004, vol. 239, no. 1, pp. 14–21, doi: 10.1097/01.sla.0000103020.19595.7d. Lampe, David. “Robot Sentries.” Popular Science, vol. 227, no. 2 , Aug. 1985, books.google.com/ books?id=ewAAAAAAMBAJ&pg=PA20&lpg=PA20&dq=david+lampe,#v=onepage&q=david%20 lampe%2C&f=false. Laskow, Sarah. “A Machine That Made Stockings Helped Kick Off the Industrial Revolution.” Atlas Obscura. 19 Sept. 2017, atlasobscura.com/articles/machine-silk-stockings-industrialrevolution-queen-elizabeth. Leary, Warren E. “Robot Completes Volcano Exploration.” The New York Times. 3 Aug. 
1994, www.nytimes.com/1994/08/03/us/robot-completes-volcano-exploration.html. Lee, Dami. “Double’s New Telepresence Robot Now Drives You around Like You’re a Sim.” The Verge. 27 Aug. 2019, www.theverge.com/2019/8/27/20835149/double-3-telepresence-robotmixed-reality. Lenzo, Kevin, Paul Placeway, Kristie Seymore, and Matthew A. Siegler. “Speech Recognition: Past, Present, and Future.” Carnegie Mellon University, www.cs.cmu.edu/afs/cs/user/ msiegler/www/ASR/futureofcmu-final.html. Levine, Stuart R. “Diversity Confirmed to Boost Innovation and Financial Results.” Forbes. 15 Jan. 2020, www.forbes.com/sites/forbesinsights/2020/01/15/diversity-confirmed-to-boost-

Robotics_2.indd 468

10/2/2020 12:37:41 PM

Bibliography | 469 innovation-and-financial-results/#11e56cbcc4a6. Levy, Steven. “The Brief History of the ENIAC Computer.” Smithsonian. Nov. 2013, www. smithsonianmag.com/history/the-brief-history-of-the-eniac-computer-3889120/. Lewis-Kraus, Gideon. “Check In with the Velociraptor at the World’s First Robot Hotel” Wired. 2 Mar. 2016, www.wired.com/2016/03/robot-henn-na-hotel-japan/. Libby, Matthew. “The Brain Across the Table: Garry Kasparov vs. Deep Blue, 1997.” Medium. 9 June 2019, medium.com/@matthewlibby_75648/the-brain-across-the-table-garry-kasparovvs-deep-blue-1997-7904f77cebf7. “Lighthill Report: Artificial Intelligence: A Paper Symposium.” Chilton Computing. 1973, pdfs. semanticscholar.org/b586/d050caa00a827fd2b318742dc80a304a3675.pdf. Lincoln, Abraham. “The Gettysburg Address.” Abraham Lincoln Online. 2020. www. abrahamlincolnonline.org/lincoln/speeches/gettysburg.htm. “Logic Theorist.” History-Computer. history-computer.com/ModernComputer/Software/ LogicTheorist.html. Loiseau, Jean-Christophe. “Rosenblatt’s Perceptron, the First Modern Neural Network.” Medium. 11Mar. 2019, towardsdatascience.com/rosenblatts-perceptron-the-very-first-neural-network37a3ec09038a. Lovering, Daniel. “Radioactive Robot: The Machines That Cleaned Up Three Mile Island.” Scientific American. 27 Mar. 2009, www.scientificamerican.com/article/three-mile-islandrobots/. Madrigal, Alexis C. “Meet the Cute, Wellies-Wearing Wikipedia-Reading Robot That’s Going to Hitchhike across Canada.” The Atlantic. 12 June 2014, www.theatlantic.com/technology/ archive/2014/06/meet-the-cute-wellies-wearing-robot-thats-going-to-hitchhike-acrosscanada/372677/. Madrigal, Alexis C. “The Man Who First Said ‘Cyborg,’ 50 Years Later.” The Atlantic. 30 Sept. 2010, www.theatlantic.com/technology/archive/2010/09/the-man-who-first-said-cyborg-50years-later/63821/. Maly, Tim. “The Emancipated Cyborg.” The Atlantic. 6 Oct. 2010, www.theatlantic.com/ technology/archive/2010/10/the-emancipated-cyborg/63995/. 
Mann, Adam. “The Mars Rovers.” Space. 28 Feb. 2019, www.space.com/mars-rovers.html.
Markushin, Yury. “5 Chess Myths That Most People Believe.” The Chess World. 19 Oct. 2015, thechessworld.com/articles/general-information/5-chess-myths-that-most-people-believe/.
“Mars Pathfinder.” NASA. NASA Science. Mars Exploration Program. 2019, mars.nasa.gov/mars-exploration/missions/pathfinder/.
“Mary Wollstonecraft Shelley.” Poetry Foundation. 2020, www.poetryfoundation.org/poets/mary-wollstonecraft-shelley.
“Mary Wollstonecraft.” Stanford University. Center for the Study of Language and Information. 2016, plato.stanford.edu/entries/wollstonecraft/.
Mathas, Carolyn. “The Basics of Quantum Computing—A Tutorial.” EDN. 13 Aug. 2019, www.edn.com/the-basics-of-quantum-computing-a-tutorial/.
Maxwell, Jim. “Amateur Radio: 100 Years of Discovery.” ARRL. American Radio Relay League. Jan. 2000, www.arrl.org/files/file/About%20ARRL/Ham_Radio_100_Years.pdf.
McCulloch, Warren S., and Walter Pitts. “A Logical Calculus of the Ideas Immanent in Nervous Activity.” Bulletin of Mathematical Biology, vol. 5, 1943, pp. 115-33, Springer Nature, doi.org/10.1007/BF02478259.
McNamee, David. “Are Robots the Future of Surgery, or a Pricey Marketing Gimmick?” Medical News Today. 1 Aug. 2014, www.medicalnewstoday.com/articles/280518.
McWhorter, John. “Why ‘Mom’ and ‘Dad’ Sound So Similar in So Many Languages.” The Atlantic. 12 Oct. 2015, www.theatlantic.com/international/archive/2015/10/words-mom-dad-similar-languages/409810/.
“MER Tools of the Trade.” Exploratorium. Museum of Science. 2020, www.exploratorium.edu/mars/mertools.php.
Metz, Rachel. “Rise of the Robot Security Guards.” MIT Technology Review. 13 Nov. 2014, www.technologyreview.com/2014/11/13/170454/rise-of-the-robot-security-guards/.
Meyer, Stephen. Manhood on the Line: Working-Class Masculinities in the American Heartland. U of Illinois, 2016.
Minsky, Marvin. “Telepresence.” Omni Magazine. June 1980, web.media.mit.edu/~minsky/papers/Telepresence.html.
Moskowitz, Marc L. Go Nation: Chinese Masculinities and the Game of Weiqi in China. U of California P, 2013.
Moskvitch, Katia. “The Machines That Learned to Listen.” BBC Future. BBC. 15 Feb. 2017, www.bbc.com/future/article/20170214-the-machines-that-learned-to-listen.
Muoio, Danielle. “Google’s AI Is So Advanced It Beat a Human at a Game No Computer Has Ever Mastered.” Business Insider. 28 Jan. 2016, www.businessinsider.com/googles-artificial-intelligence-beats-human-at-go-2016-1.
Muoio, Danielle. “Why Go Is So Much Harder for AI to Beat Than Chess.” Business Insider. 10 Mar. 2016, www.businessinsider.com/why-google-ai-game-go-is-harder-than-chess-2016-3.
Nakanishi, Toyoki. “For US Autoworkers, Robots Are the Job Killers, Not Trade.” Nikkei Asian Review. 19 June 2017, asia.nikkei.com/Economy/For-US-autoworkers-robots-are-the-job-killers-not-trade.
Needham, Joseph, and Colin A. Ronan. The Shorter Science and Civilization in China. Cambridge UP, 1978.
“New Navy Device Learns by Doing: Psychologist Shows Embryo of Computer Designed to Read and Grow Wiser.” New York Times. 8 July 1958, www.nytimes.com/1958/07/08/archives/new-navy-device-learns-by-doing-psychologist-shows-embryo-of.html.
Newman, Jared. “The 1950s Toy Robot Sensation That Time Forgot.” Fast Company. 2 Dec. 2016, www.fastcompany.com/3066169/the-toy-robot-sensation-that-time-forgot.
Nguyen, Tuan C. “History of Self Driving Cars.” ThoughtCo. Dot Dash. 30 June 2019, www.thoughtco.com/history-of-self-driving-cars-4117191.
Nichols, John. The Gentleman’s Magazine and Historical Chronicle, vol. 103, no. 2, 1833, p. 212, babel.hathitrust.org/cgi/pt?id=mdp.39015027525602&view=1up&seq=234&q1=silk%20stockings.
Nikiforuk, Andrew. “The Big Shift Last Time: From Horse Dung to Car Smog.” The Tyee. 6 Mar. 2013, thetyee.ca/News/2013/03/06/Horse-Dung-Big-Shift/.
Nova, Annie, and John W. Schoen. “Automation Threatening 25% of Jobs in the US, Especially the ‘Boring and Repetitive’ Ones: Brookings Study.” CNBC. 25 Jan. 2019, www.cnbc.com/2019/01/25/these-workers-face-the-highest-risk-of-losing-their-jobs-to-automation.html.
Novak, Matt. “Recapping ‘The Jetsons’: Episode 08—Rosey’s Boyfriend.” Smithsonian Magazine. 13 Nov. 2012, www.smithsonianmag.com/history/recapping-the-jetsons-episode-08-roseys-boyfriend-120640948/.
Novet, Jordan, and Adam Isaak. “Why Apple’s Siri Isn’t as Smart as Amazon Alexa and Google Assistant.” CNBC. 5 June 2019, www.cnbc.com/2019/06/04/why-siri-is-not-as-smart-as-alexa-google-assistant.html.
Obringer, Lee Ann, and Jonathan Strickland. “How ASIMO Works.” How Stuff Works, science.howstuffworks.com/asimo.htm.
O’Connell, Sanjida. “What the Tortoise Taught Us.” The Guardian. 6 Dec. 2000, www.theguardian.com/science/2000/dec/07/robots.
O’Hanlon, Michael E. “Is US Defense Spending Too High, Too Low, or Just Right?” Brookings Institute. 15 Oct. 2019, www.brookings.edu/policy2020/votervital/is-us-defense-spending-too-high-too-low-or-just-right/.
“Omnibot Grows Up.” Popular Science. Apr. 1986, books.google.com/books?id=96DnlrVhzDsC&lpg=PA84-IA3&dq=omnibot%20grows%20up&as_pt=MAGAZINES&pg=PA84-IA3#v=onepage&q&f=false.
Oppy, Graham, and David Dowe. “The Turing Test.” The Stanford Encyclopedia of Philosophy, Spring 2019 Edition, edited by Edward N. Zalta, plato.stanford.edu/archives/spr2019/entries/turing-test/.
Oreck, Alden. “Modern Jewish History: The Golem.” Jewish Virtual Library. 2020, www.jewishvirtuallibrary.org/the-golem.
Orlin, Jon. “It’s 2012 Already So Where Are All The Jetsons Flying Cars.” TechCrunch. 1 Jan. 2012, techcrunch.com/2012/01/01/its-2012-already-so-where-are-all-the-jetsons-flying-cars/.
Padir, Taskin. “Making Robots That Can Work with Their Hands.” The Conversation. 24 Apr. 2017, theconversation.com/making-robots-that-can-work-with-their-hands-73831.
Pardes, Arielle. “The WIRED Guide to the Internet of Things.” Wired. Condé Nast. 10 Feb. 2020, www.wired.com/story/wired-guide-internet-of-things/.
“PARRY Encounters the Doctor.” IETF. Network Working Group. 21 Jan. 1973, tools.ietf.org/rfc/rfc439.txt.
“The Past, Present and Future of Bomb Disposal Robots.” Government Europa. 20 June 2018, www.governmenteuropa.eu/bomb-disposal-robots/88618/.
Perlman, Merrill. “The History of Using ‘Quantum’ to Mean ‘Really Big’.” Columbia Journalism Review. 4 Aug. 2014, archives.cjr.org/language_corner/language_corner_080514.php.
Piccinini, Gualtiero. “The First Computational Theory of Mind and Brain: A Close Look at McCulloch and Pitts’s ‘Logical Calculus of Ideas Immanent in Nervous Activity.’” Synthese, vol. 141, no. 2, 2004, Springer Nature, doi:10.1023/B:SYNT.0000043018.52445.3e.
Piore, Adam. “To Study the Brain, a Doctor Puts Himself Under the Knife.” MIT Technology Review. 9 Nov. 2015, www.technologyreview.com/2015/11/09/247535/to-study-the-brain-a-doctor-puts-himself-under-the-knife/.
“Planetary Voyage.” NASA. JPL. Jet Propulsion Laboratory, voyager.jpl.nasa.gov/mission/science/planetary-voyage/.
Price, Tabitha, et al. “Vervets Revisited: A Quantitative Analysis of Alarm Call Structure and Context Specificity.” Scientific Reports, vol. 5, 19 Aug. 2015, Springer Nature, doi.org/10.1038/srep13220.
“Prometheus.” Theoi Project. Greek Mythology. 2017, www.theoi.com/Titan/TitanPrometheus.html.
Prososki, Lisa. “In Search of Myths and Heroes: Jason & the Argonauts.” PBS. 2005, www.pbs.org/mythsandheroes/myths_four_jason.html.
Reynolds, Emily. “The Agony of Sophia, the World’s First Robot Citizen Condemned to a Lifeless Career in Marketing.” Wired. Condé Nast. 1 June 2018, www.wired.co.uk/article/sophia-robot-citizen-womens-rights-detriot-become-human-hanson-robotics.
Ridden, Paul. “Honda’s 3E Robotics Concepts Just Want to Help People.” New Atlas. 10 Jan. 2018, newatlas.com/3e-robotics-concepts-honda-ces/52911/.
“RIP Asimo: A Look Back at the Life of Honda’s Famed Humanoid Robot.” Designboom. 2018, www.designboom.com/technology/honda-ceases-production-asimo-robot-history-look-back-life-06-29-2018/.
“Road Traffic Injuries.” WHO. World Health Organization. 7 Feb. 2020, www.who.int/news-room/fact-sheets/detail/road-traffic-injuries.
“Robot Doctor Gets Thumbs-Up from Patients.” NBC News. 11 May 2004, www.nbcnews.com/id/4946229/ns/health-health_care/t/robot-doctor-gets-thumbs-up-patients/#.XxC44C3Mw6U.
“Robots ‘to Replace up to 20 Million Factory Jobs’ by 2030.” BBC News. 26 June 2019, www.bbc.com/news/business-48760799.
Rosen, Jacob, Blake Hannaford, and Richard Satava, editors. Surgical Robotics: Systems Applications and Visions. Springer, 2011.
Rosen, Rebecca J. “Unimate: The Story of George Devol and the First Robotic Arm.” The Atlantic. 16 Aug. 2011, www.theatlantic.com/technology/archive/2011/08/unimate-the-story-of-george-devol-and-the-first-robotic-arm/243716/.
Rosheim, Mark. Leonardo’s Lost Robots. Springer, 2006.
Ross, Rachel. “Who Invented the Refrigerator?” Live Science. 8 Feb. 2017, www.livescience.com/57797-refrigerator-history.html.
Rowe, Chris. “The Top 10 Design Flaws in the Human Body.” Pittsburgh Post-Gazette. 27 Jan. 2019, www.post-gazette.com/opinion/Op-Ed/2019/01/27/The-top-10-design-flaws-in-the-human-body/stories/201901270040.
Saxena, V. K. The Amazing Growth and Journey of UAVs & Ballistic Missiles Defence Capabilities. Vij Books India, 2013.
Saygin, Ayse Pinar, Ilyas Cicekli, and Varol Akman. “Turing Test: 50 Years Later.” Minds and Machines, vol. 10, 2000, pp. 463-518, doi.org/10.1023/A:1011288000451.
Schaub, Alexander. Robust Perception from Optical Sensors for Reactive Behaviors in Autonomous Robotic Vehicles. Springer Vieweg, 2017.
Schmitt, Kristen A. “Robotics Can Get Girls into STEM, but Some Still Need Convincing.” Smithsonian. 16 Aug. 2016, www.smithsonianmag.com/science-nature/robotics-good-girls-some-still-need-convincing-180960131/.
Schuchmann, Sebastian. “History of the Second AI Winter.” Medium. 12 May 2019, towardsdatascience.com/history-of-the-second-ai-winter-406f18789d45.
Schuchmann, Sebastian. “Probability of an Approaching AI Winter.” Medium. 17 Aug. 2019, towardsdatascience.com/probability-of-an-approaching-ai-winter-c2d818fb338a.
Schweisfurth, Tim, and René Chester Goduscheit. “From the Pyramids to Apollo 11—Can AI Ever Rival Human Creativity?” The Conversation. 5 Feb. 2020, theconversation.com/from-the-pyramids-to-apollo-11-can-ai-ever-rival-human-creativity-131163.
“The Security and Privacy Issues That Come with the Internet of Things.” Business Insider. 6 Jan. 2020, www.businessinsider.com/iot-security-privacy.
Seyfarth, Robert M., Dorothy L. Cheney, and Peter Marler. “Vervet Monkey Alarm Calls: Semantic Communication in a Free-Ranging Primate.” Animal Behaviour, vol. 28, no. 4, Nov. 1980, pp. 1070-1094, Science Direct, www.sciencedirect.com/science/article/abs/pii/S0003347280800972.
Shelley, Mary Wollstonecraft. Frankenstein, or, The Modern Prometheus. Boston Library Society, 1941.
“SHRDLU.” Stanford University, hci.stanford.edu/~winograd/shrdlu/.
Simon, Matt. “The Tricky Ethics of Knightscope’s Crime-Fighting Robots.” Wired. 21 Dec. 2017, www.wired.com/story/the-tricky-ethics-of-knightscopes-crime-fighting-robots/.
Simonite, Tom. “The WIRED Guide to Quantum Computing.” Wired. Condé Nast. 24 Aug. 2018, www.wired.com/story/wired-guide-to-quantum-computing/.
Singer, P.W. “Drones Don’t Die—A History of Military Robotics.” HistoryNet, www.historynet.com/drones-dont-die-a-history-of-military-robotics.htm.
Smith, Aaron, and Janna Anderson. “AI, Robotics, and the Future of Jobs.” Pew Research. 6 Aug. 2014, www.pewresearch.org/internet/2014/08/06/future-of-jobs/.
Smith, Aaron, and Monica Anderson. “Americans’ Attitudes Towards Driverless Vehicles.” Pew Research Center. 4 Oct. 2017, www.pewresearch.org/internet/2017/10/04/americans-attitudes-toward-driverless-vehicles/.
Sofge, Erik. “Tale of the Teletank: The Brief Rise and Long Fall of Russia’s Military Robots.” Popular Science. 7 Mar. 2014, www.popsci.com/blog-network/zero-moment/tale-teletank-brief-rise-and-long-fall-russia%E2%80%99s-military-robots/.
Sone, Yuji. Japanese Robot Culture: Performance, Imagination, and Modernity. Palgrave Macmillan, 2017.
“Sony Launches Four-Legged Entertainment Robot.” Sony. 11 May 1999, www.sony.net/SonyInfo/News/Press_Archive/199905/99-046/.
Spice, Byron. “PIONEER Robot Is Dedicated at Chernobyl Sarcophagus.” SCS. Carnegie Mellon University. 2 June 1999, www.scs.cmu.edu/news/pioneer-robot-dedicated-chernobyl-sarcophagus.
Spicer, Dag. “The ECHO IV Home Computer: 50 Years Later.” CHM. Computer History Museum. 31 May 2016, computerhistory.org/blog/the-echo-iv-home-computer-50-years-later/.
Stearns, Peter N. The Industrial Revolution in World History. Routledge, 2013.
Stefferud, Einar. “The Logic Theory Machine: A Model Heuristic Program.” Rand Corporation. 1963, history-computer.com/Library/Logic%20Theorist%20memorandum.pdf.
Stenquist, Paul. “In Self-Driving Cars, a Potential Lifeline for the Disabled.” New York Times. 7 Nov. 2014, www.nytimes.com/2014/11/09/automobiles/in-self-driving-cars-a-potential-lifeline-for-the-disabled.html.
Sterling, Amy. “Millions of Jobs Have Been Lost to Automation. Economists Weigh In on What to Do about It.” Forbes. 15 June 2019, www.forbes.com/sites/amysterling/2019/06/15/automated-future/#712620d1779d.
Stone, Brad. Gearheads: The Turbulent Rise of Robotic Sports. Simon & Schuster, 2003.
Stone, Zara. “Everything You Need to Know about Sophia, the World’s First Robot Citizen.” Forbes. 7 Nov. 2017, www.forbes.com/sites/zarastone/2017/11/07/everything-you-need-to-know-about-sophia-the-worlds-first-robot-citizen/#1aec7a2846fa.
“Survey X: Artificial Intelligence and the Future of Humans.” Elon University. 2020, www.elon.edu/e-web/imagining/surveys/2018_survey/AI_and_the_Future_of_Humans.xhtml.
Swade, Doron. The Difference Engine: Charles Babbage and the Quest to Build the First Computer. Penguin Books, 2002.
“Talos.” Theoi. Theoi Project. 2017, www.theoi.com/Gigante/GiganteTalos.html.
Taylor, Alan. “The 1939 New York World’s Fair.” The Atlantic. 1 Nov. 2013, www.theatlantic.com/photo/2013/11/the-1939-new-york-worlds-fair/100620/.
Torchinsky, Jason. Robot, Take the Wheel: The Road to Autonomous Cars and the Lost Art of Driving. Apollo Publishers, 2019.
“The Triple Revolution.” Marxists. International Socialist Review, vol. 24, no. 3, Summer 1964, pp. 85-89, www.marxists.org/history/etol/newspape/isr/vol25/no03/adhoc.html.
Turing, Alan. “Computing Machinery and Intelligence.” Mind, vol. 59, no. 236, 1950, pp. 433-60, JSTOR, www.jstor.org/stable/2251299.
Tzezana, Roey. “Singularity: Explain It to Me Like I’m 5-Years-Old.” Futurism. 3 Mar. 2017, futurism.com/singularity-explain-it-to-me-like-im-5-years-old.
“Unlocking the Secrets of Animal Locomotion.” Berkeley. 2002, www.berkeley.edu/news/media/releases/2002/09/rfull/robots.html.
“U.S. School Spending Per Pupil Increased for Fifth Consecutive Year, U.S. Census Bureau Reports.” Census. 21 May 2019, www.census.gov/newsroom/press-releases/2019/school-spending.html.
Uttal, William R. Neural Theories of Mind: Why the Mind-Brain Problem May Never Be Solved. Taylor & Francis, 2005.
Valencia, Sebastian. “The Lisp Approach to AI (Part 1).” Medium. 28 Feb. 2017, medium.com/ai-society/the-lisp-approach-to-ai-part-1-a48c7385a913.
Vincent, James. “Former Go Champion Beaten by DeepMind Retires after Declaring AI Invincible.” The Verge. 27 Nov. 2019, www.theverge.com/2019/11/27/20985260/ai-go-alphago-lee-se-dol-retired-deepmind-defeat.
Vinciguerra, Thomas. “Robby the Robot: From ‘Forbidden Planet’ to Auction Block.” New York Times. 10 Nov. 2017, www.nytimes.com/2017/11/10/arts/design/robby-the-robot-auction-bonhams.html.
Vinge, Vernor. “Technological Singularity.” Whole Earth Review. 1993, frc.ri.cmu.edu/~hpm/book98/com.ch1/vinge.singularity.html.
Walker, John. “The Oh-My-God Particle.” Fourmilab. 4 Jan. 1994, www.fourmilab.ch/documents/OhMyGodParticle/.
Walmsley, J. Mind and Machine. Palgrave Macmillan, 2012.
Walsh, Toby. “Robots Can Outwit Us on the Virtual Battlefield, so Let’s Not Put Them in Charge of the Real Thing.” The Conversation. 31 Oct. 2019, theconversation.com/robots-can-outwit-us-on-the-virtual-battlefield-so-lets-not-put-them-in-charge-of-the-real-thing-126104.
Walter, W. Grey. “An Imitation of Life.” Scientific American. 1950, Computer Science, Texas A&M Univ., robotics.cs.tamu.edu/dshell/cs643/papers/walter50imitation.pdf.
Warraich, Haider. “A ‘Robot’ Doctor Told a Patient He Was Dying. It Might Not Be the Last Time.” Vox. 13 Mar. 2019, www.vox.com/first-person/2019/3/13/18262481/robot-doctor-remote-telepresence-care-terminal-patient.
Wasserman, Harvey. “People Died at Three Mile Island.” HuffPost. 26 Mar. 2009, www.huffpost.com/entry/people-died-at-three-mile_b_179588.
Watanabe, Go, editor. Robotic Surgery. Springer, 2014, pp. 25-27.
Weaver, Frank John. “What Exactly Does It Mean to Give a Robot Citizenship?” Slate. 6 Nov. 2017, slate.com/technology/2017/11/what-rights-does-a-robot-get-with-citizenship.html.
Webb, Richard. “Quantum Physics.” New Scientist. 2020, www.newscientist.com/term/quantum-physics/.
Webster-Wood, Victoria. “Biohybrid Robots Built from Living Tissue Start to Take Shape.” The Conversation. 9 Aug. 2016, theconversation.com/biohybrid-robots-built-from-living-tissue-start-to-take-shape-62759.
Weisberger, Mindy. “The ‘Perfect’ Human Body Is Not What You Think.” Live Science. Purch. 22 June 2018, www.livescience.com/62895-building-the-perfect-body.html.
West, Darrell M. “Brookings Survey Finds 52 Percent Believe Robots Will Perform Most Human Activities in 30 Years.” Brookings. 21 June 2018, www.brookings.edu/blog/techtank/2018/06/21/brookings-survey-finds-52-percent-believe-robots-will-perform-most-human-activities-in-30-years/.
“What Are Little Girls Made Of?” Star Trek. www.startrek.com/database_article/what-are-little-girls-made-of.
“What Does It Mean to Be Human?” Smithsonian. Human Origins. 13 Aug. 2020, humanorigins.si.edu/evidence/behavior/stone-tools.
“What Is Distracted Driving?” NHTSA. U.S. Department of Transportation. 2019, www.nhtsa.gov/risky-driving/distracted-driving.
“Whirlwind Computer.” History-Computer. 2020, history-computer.com/ModernComputer/Electronic/Whirlwind.html.
White, Douglas J. “Heuristic Programming.” IMA Journal of Mathematics Applied in Business & Industry, vol. 2, no. 2, 1989, pp. 173-88, Oxford Academic, doi.org/10.1093/imaman/2.2.173.
Wiggers, Kyle. “Sweeping Changes: How iRobot Evolved from Military Robots to Autonomous Vacuums.” VentureBeat. 18 June 2019, venturebeat.com/2019/06/18/sweeping-changes-how-irobot-evolved-from-military-robots-to-autonomous-vacuums/.
“William Godwin.” Stanford University. Center for the Study of Language and Information. 2016, plato.stanford.edu/entries/godwin/.
Williams, Matt. “What Are the Parts of an Atom?” Universe Today. 15 Dec. 2015, www.universetoday.com/82128/parts-of-an-atom/.
Zeller, Frauke, and David Harris Smith. “The Hitchbot’s Guide to Travelling across a Continent.” The Conversation. 26 Sept. 2014, theconversation.com/the-hitchbots-guide-to-travelling-across-a-continent-31920.
Zimmer, Carl. “Meet the Animats.” National Geographic. 2 Aug. 2013, www.nationalgeographic.com/science/phenomena/2013/08/02/meet-the-animats/.
ABOUT THE AUTHOR

Micah L. Issitt is an independent scholar, historian, journalist, editor, and author. He is the editor of H.W. Wilson’s Reference Shelf, a series on contemporary issues. Recent titles include Alternative Facts, Post-Truth & the Information War; The South China Seas Conflict; Artificial Intelligence; and Immigration. He has written the following Opinions Throughout History volumes from Grey House Publishing: National Security vs. Civil & Privacy Rights; Immigration; Gender: Roles & Rights; Drug Use & Abuse; and The Environment. Issitt has also written extensively for several Salem Press series, including Careers In, Defining Documents in World History, and Defining Documents in American History. His other books include Hidden Religion from ABC-CLIO and Hippies from Greenwood Press, and he has written hundreds of articles for a variety of encyclopedias and reference works. Issitt lives and works in Saint Louis, Missouri.
INDEX

1939 World’s Fair 152, 163
3E program 303-304
Abrahamic faiths 3-4, 7, 14, 31, 122
Abrahamson, Peter 225
abstract strategy 266, 273
Adami, Chris 339
Adams, James 155
aerial combat 235
age of automation 20
age of humanity 396
age of machines 396
AI autumn 136
AI industry 141, 145
AI programs 135-136
AI research 83-84, 102, 131-137, 140, 146, 237-238, 317, 345, 392
AI technologies 140
AI winters 131, 133, 136-138, 146
AIBO 281, 288-294, 296, 360, 364-365
Alexa 125, 141, 291, 308, 311-318, 338, 352
algorithms 20, 55-56, 99-100, 102, 142, 144, 174-175, 215, 265, 270-271, 277, 299, 303, 308, 310, 323, 337, 388-389
Alibaba 145
Alpha Proton X-ray Spectrometer 255
AlphaGo 141, 274-277, 280
AlphaStar 141, 237
Alquist 37-41
Amazon Mechanical Turk 268
Amiga 270
analytical engine 23, 25-29
anarchism 34
Andersen, Hans Christian 286
Anderson, Mark Robert 51, 272
Android Amusement Corporation Robots 283
Angle, Colin 286
animats 339-340, 344
anthropomorphization 114-115
Apollo mission 249
Apollonius Rhodius 166
Apple Inc. 100
Arctec Systems 285-286
Arduino system 223
Ares Vallis 254
Argonauts 166, 177
Argyreos 5, 167
Ariel 286
Ashton, Kevin 352
Asimov, Isaac 47-48, 51-52, 58, 218, 220, 286
atomic energy 193, 382
Atomic Robot Man 219, 226
atomic weapons 382
automated knitting 17
automated programming 282, 354
automated weaponry 157, 239
automated weaving 19-21
automatic speech recognition (ASR) 308
automobile industry 149, 370
autonomous movement 298
autonomous vehicles 156, 158
Babbage, Charles 22-23, 25, 29
Bargar, William 202
Barnes, G. M. 63
Bassett, Charles 246
BattleBots 225
Beiser, Vince 184-185
Beley, Gene 283
Bell Laboratories 308
ben Bezales, Rabbi Judah Loew 167-168
Bernardi, Gabriella 389
Big Bang 397
binary signals 386
biobots 340-342
bioengineering 401
biological computer 338, 402
biorobotics 331-333, 335, 337-341, 343
Bohr, Niels 383
bomb disposal robots 193, 195, 198
Bombe machine 54, 61
Bosnian War 235
Branch, Julie 321
British Science Research Council 134
Broadhurst, Sid 61
Brooks, Rodney 286
Burden Neurological Institute 77
Bush, George H.W. 69
BVV2 pre-processor 156
C-3PO 122-126
C-Leg 336
Callas, John 259
Čapek, Karel 36, 38, 48
Carey, Bridget 288-289
Carnegie Mellon Robotics Institute 169
Caro-Kann Defence 272
Carroll, Lewis 78
Carter, Jimmy 239, 251
Cartwright, Jon 384
Caves of Steel, The 48
Center for Technology Innovation 374
Central Intelligence Agency (CIA) 235
Chaffee, Roger 246
Chandler, Bill 61
Chaplin, Charlie 88, 98
chatbot technology 115, 312
chemical weapons 37
Chernobyl 183-185, 187, 198
chess programs 270
Chessmaster 2000 270
Chinese AI initiatives 140
Chinese mythology 8
cholecystectomy 202, 205
Chollet, Francois 142
Christianity 3, 14, 167
“City of Tomorrow” 153, 155
cloud computing 311, 392
Clynes, Manfred Edward 333
cochlear implants 333, 336-337
Colby, Kenneth 106
Cold War 67, 71
Colossus Mark II 60-61
combat technology 235
Commodore 64 270
communism 87
computational evolution 20
computational systems 71, 73, 81, 83, 137, 402
Computed Tomography (CT) 201
computer age 22, 65
Computer Aided Manufacturing (CAM) 90
computer engineering 66, 71, 401
computer programming 22, 24, 50, 57, 75, 99
computer science 15, 54, 57, 65, 69, 71, 86, 183, 286, 308, 381-382
Computer-Aided Design (CAD) 90
Computing Machinery and Intelligence 55, 58
Confucius 8
consciousness 32, 43, 48, 51, 74, 80, 116, 283, 311, 325, 328, 335, 397, 402
conservatism 31
consumer market 13, 158, 230, 288, 311
Coombs, Allen 61
Corbitt, Helen 350
corporate espionage 17, 21
Cortana 141, 285, 307, 311, 314-316, 318
Cortes, Jan 124-125
Cosmos 138, 381, 389, 392
Covid-19 145, 212, 214
craftsmanship 4, 8-9
Crocodile Schneider Torpille Terrestre 230
Cromwell, Oliver 17
Cruden, Robert 88
cryptographers 54, 61
Cullen, William 347
cybercrimes 353
cybernation 92, 94-96
cybernetic augmentation 337
cybernetics 51, 79, 128, 305, 331-333, 338, 343
cyborg 333-337, 340, 344
da Vinci, Leonardo 1, 10-11
da Vinci surgery system 205
Daly, Chris 270
Dante II 195-196
Darwin, Charles 76
data ownership 353
Deep Blue 265, 271-273, 276, 279
Deeper Blue 272
DeepMind 141, 237, 265, 274-276, 280
Defense Advanced Research Projects Agency (DARPA) 157, 309
Denning Mobile Robotics laboratory 169
Devol, George 90, 98
Dickmanns, Ernst 156
digital age 21-22, 59, 65, 135, 235
digital assistant 125, 315
digital data 67, 210, 316, 353, 382
Dijkstra, Edsger Wybe 102
domestic helper robots 294
Dota agent 143
DragonDictate 311
driverless cars 149, 151, 158-159, 162
droids 119, 124-126, 128, 169, 200, 257, 284-285
drone technology 71, 195, 232
E3 designs 305
ECHO IV 348-350, 368
Eckert, John Presper, Jr. 69
Edison, Thomas 308-309
EEG (electroencephalogram) technology 77
Einstein, Albert 347, 391
Eisenhower, Dwight D. 70
electric typewriter 349
electrical engineering 63, 66
electronic engineering 54
Electronic Numerical Integrator and Computer (ENIAC) 62-72
ELIZA effect 114-117, 260, 263, 312, 328
emotion-driven decisions 51
Employment Act of 1946 95
encryption 61
endoscopic surgery 205
energy transmission 391
Engelberger, Joseph F. 90
engineering 1, 9-10, 12-13, 54, 60, 62-63, 66, 68, 70-71, 77, 80, 84-85, 92, 120, 124, 141, 155-156, 166-167, 180, 194, 200-201, 217-218, 221-226, 237, 268, 298, 301, 340-342, 349-350, 368, 395, 401
enigma codes 54
entertainment robot 288, 296
Eurisko 237
Evans, Oliver 347
evolutionary biology 76, 401
Evon, Dan 258
existential crisis 151
Exo III 335
exploration rovers 255
factory mechanization 92
Fadell, Tony 352
Fasulo, Mike 290
Fensom, Harry 61
Flores, Dan 160
Flowers, Tommy 61
Forbidden Planet 120-121, 129, 244, 285
Ford Motor Company 88
Forrester, Jay Wright 66
fossil fuel industry 151
Foundation Trilogy 48
Four Laws of Robotics 50-51
Fraeman, Abigail 259
Frankenstein principle 50
Frankenstein, Victor 35
Franklin, Benjamin 268
Freeman, Theodore 246
futurism 152, 354, 398, 407
futurists 140, 151-152, 367, 396, 399, 402, 409
Gallup poll 55
Gartner’s Hype Cycle 139
General Atomics 157, 234
General Electric Appliance Park 70
General Motors 90, 155, 160, 370
genetic innovation 399
German war codes 54
Ghosn, Carlos 160
Gillula, Jeremy 314
glass tubes 61
global economy 22, 145
global positioning system (GPS) 234
global warfare 232
Go (or Weiqi) 265, 273-279
Goduscheit, René Chester 395, 403, 407
Godwin, William 34, 45
Golden Records 251-252
Gongshuzi 8, 12
Google Assistant 315, 318
Google car 161
Google Voice Search application 311
Graham-Paige Motors Company 152
Gralla, Preston 314-315
graphic chess system 270
graphic user interfaces (GUI) 270
Grecian philosophy 4
Greek mythology 1, 35, 45
Greiner, Helen 286
Grissom, Virgil 246
Gulf War 238
Gunkel, Patrick 209
hacker 102, 353
Halo 311
HanDol 276
Hanson Robotics 323-325
Hanson, David 323, 328
Harder, D.S. 370
Hassabis, Demis 275
Hawking, Stephen 402
health aides 294
Heisenberg, Werner 383
Helsinki Institute of Physics 389
Hephaestus 4-5, 12-13, 33, 48, 166-167, 298
Her 311-312
heuristic programming 83-84, 86
high-tech weapons 230
Hindret, Jean 17
hitchBOT 319-322, 330
Hitler, Adolf 61
Holland, Owen 79
homeostatic system 333
Homer 1, 4-5, 12-14, 246
Honda Corporation 298, 305
Hopper, Grace Murray 68-69
Hosoi, Anette 208
Hui, Fan 274-275
human behavior 76, 320, 360
human brain 37, 78, 123, 201, 274, 276-277, 279, 337, 399
human consciousness 74
human error 149-151, 161
human intelligence tasks (HIT) 268
human language 21, 99, 106
human mind 53, 100, 106, 132, 139, 149, 267, 276-277, 335, 382, 389, 398-400, 402
human psychology 106, 126-127, 402
human speech 100, 308-309, 311, 338
human vs. machine 19
human-robot interaction 53, 322
humanoid robot 298, 302, 305-306, 323
hybridization 340
hybrots 338-339
I, Robot 48-49, 220
IBM 69, 265, 270-273, 309-310, 387
ImageNet competition 141
imagination 14, 70, 127, 167, 220, 272, 276, 279, 283, 291, 356, 406
Industrial Revolution 15, 19, 26, 29, 32, 35-36, 43, 95, 97, 406
information processing language (IPL) 82
Institute of Electrical and Electronics Engineers (IEEE) 134
Institute of Photonics and Nanotechnologies 389
intellectual activity 28, 277
intelligence, artificial 1, 2, 9, 12, 16, 31, 35, 47, 50-56, 73-86, 99-107, 116, 120, 123, 127, 131-146, 157, 166, 168, 176, 197, 200, 226, 236-238, 260, 272, 275-277, 281, 289, 303, 305, 311-313, 324, 329, 339, 345, 367, 379, 382, 388, 390-407
intelligence, human 45, 53, 76, 81, 116, 138, 139, 145, 268, 272, 275-279, 312-313, 324, 343, 389, 396, 398, 400-407
intelligence, machine 53, 56, 57, 71, 73, 84, 115, 134, 136, 137, 140, 298, 307, 312-313, 354, 367
International Covenant on Civil and Political Rights 325
Internet 51, 115, 125, 137, 150, 164, 226, 243, 257-258, 285, 289-290, 294, 308, 312, 320, 346, 348, 350, 352, 362, 368, 374, 379-380, 403
Internet of Things 294, 350, 352, 368
Interstellar 126-128, 250
Intuitive Surgical 203, 205-207
iRobot Corporation 286
Iron Curtain 183
Jacquard loom 20-21, 29
Jacquard, Joseph-Marie 20
Jet Propulsion Laboratory 253, 258, 264
Jewish mythology 31, 167
Johnson, Lyndon B. 87, 92
Johnson, Patrick 125
Jordan, I. 138
Judaism 3, 14, 167
karakuri ningyō automata 10
Kasparov, Garry Kimovich 271
Kennedy, John F. 244, 264
Kennedy, Philip 337
Khryseos 5, 167
killer robots 238, 241
King, Martin Luther, Jr. 92
Klatu the “Robot Butler” 283
Kline, Nathan 333
Knightscope 171-175, 177
Komincz, Grzegorz “MaNa” 237
Korby, Roger 335
Kurzweil, Ray 398
Kwoh, Yik San 201
Lampe, David 169
Lang, Fritz 88
laparoscope 202
Latimer, Bruce 122
laws of robotics 47, 49-53, 56-58, 120
Lee, William 16, 28
Lenat, Doug 237
Lighthill Report 133, 138, 146
Lighthill, James 135
Lincoln, Abraham 75, 85
linguistics 99, 101
LISP language 102
Little Mermaid 286
Logic Theorist 83-85
Lorenz system 61
Lost in Space 121, 218
Lovelace, Ada 22-24
Ludd, Ned 18
Luddite 19-20
lunar Orbiter missions 249
Machina speculatrix 78-79
486 | Index machine citizenship 328 Madrigal, Alexis 336 Maelzel, Johann Nepomuk 268 magnetism 155 Mahan, Steve 161 Maly, Tim 333 manual laborer 37 Margolis, Jacob 259 Mark I Colossus 61 Mark II computers 69 Mars rovers 124, 243, 256, 264 Marshall, Barry 338 Martian atmosphere 255 Martian landscape 124, 196, 243, 256 Massachusetts Institute of Technology (MIT) 66, 73, 286 mathematical physics 69 Matsuzaki, Kenji 188, 192 Mauchly, John 69-70 Max Planck 391 McCarthy, John 83, 102, 134 McCulloch, Warren S. 75 McFarland, David 81 McKinsey & Company 137 McKinsey Global Institute 372 McNamee, David 204-205 mechanical aids 336 mechanical dogs 5, 10, 167 mechanical engineering 13, 77, 92, 395 mechanization 15, 48, 88, 92, 335 medical droid 200 medical technology 200 Melhuish, Chris 80 metalworking 4, 9, 13, 33, 166-167 metaphysical ideas 43 Metropolis 88, 98, 282 Metz, Rachel 171-172 microcomputing 352 microprocessors 186, 336-337 Microsoft 60, 172, 308, 311, 314-316, 386


Milbanke, Anne 23
military development 229-230
military robots 241, 287, 296
Miller, Marvin 121
mind-brain problem (MBP) 74
Minsky, Marvin 83, 102, 208, 213, 284
mobility system 196, 287
Mohr, Catherine 206
Moravec's paradox 138
Mother Earth 48
MQ-1 Predator 157
MS-DOS computers 270
MuseNet 144
Musk, Elon 236, 241
mythical automatons 298
mythology 1, 4, 8-10, 13, 19, 31, 35, 44-45, 48, 167, 335
Nadella, Satya 315
Napoleon Bonaparte 268
Nathan, Senthil 202
National Aeronautics and Space Administration (NASA) 246
natural selection 266, 395, 398
NaturallySpeaking system 311
Neiman Marcus 350
Nest system 352
neural mechanisms 75
neural nets 76
neuroscience 74
New York Sci Fi Convention of 1950 219
Newell, Allen 82
Newell, Homer 246
Newman, Jared 219-220
Nikkei Asian Review 370, 380
Nolan, Christopher 126
Novag Robot Adversary 270
nuclear energy 180, 183
nuclear engineering 180
O'Connell, Sanjida 79


Obama, Barack 303
Olympic Games 266
Omni 208, 215
Omnibot 2000 285
OpenAI 143-144
Opportunity and Spirit rovers 256, 261
Ordnance Ballistic Research Laboratory 62
organ transplantation 332
Osterman, Paul 378
pacemaker 333
PackBot robots 287
Padir, Taskin 299-300
Paleolithic era 405
PARRY program 106-114, 115
Payne, Wendy 160
Pearl, Robert 206
Pebble in the Sky 48
Perkins, Jacob 347
phonograph 252, 308-309, 318
photographic technology 249
physics 69, 207, 381-384, 386, 389, 392-393, 397
Pinchot, Justin 220
Pitts, Walter H. 75
planetary-encircling dust event (PEDE) 259
popular culture 28, 35, 43, 49, 70, 88, 193, 222, 226-227, 343, 395
Porotti, Riccardo 390
Potter, Steve 338
Prati, Enrico 389
Predator drone 234
pretechnological age 19
Principia Mathematica 82-83
privacy 137, 290, 307, 313-314, 316-317, 368
programmable computers 60-61
programming language 102
Prometheus 33-35, 44-45
protohumans 405
psychons 75
psychotherapy 106


public opinion 132, 137, 159
public safety 49
PUMA 560 91, 201-202, 205
punch cards 20-21, 23, 282
QT-1 50
quanta 391
quantum algorithms 388
quantum computing 381-383, 385-389, 391-393
quantum mechanics 382-383, 391-392
quantum physics 381-384, 386, 392-393
quarks 383
Quasar Industries 283-284, 295
qubits 381, 386-390, 392-393
Queen Nefertiti 323
Quintana, Ernest 211
QWERTY system 103
R2-D2 123-125, 220, 285
R.U.R. (Rossum's Universal Robots) 31, 36, 38, 40, 44
radiation poisoning 181, 183, 246
radioactive waste 179, 183, 191, 197
Ramsey, Erik 337
Rand Corporation 69, 82, 85
rationalism 48
Ray, Johnny 337
RedZone Robotics, Inc. 183
Remington Rand, Inc. 70
Restelli, Marcello 390
retrofuturism 282
Reynolds, Emily 328
Robby the Robot 120-121, 129, 244
Roberts, Alice 123
Robocop 333, 335
robopsychology 50
robot competitions 225
robot dexterity 301
Robot Ethics Charter 52
Robot Lilliput 219
Robot Wars 217, 223-226, 238


Robotapedia 220
robotic aesthetic 218
robotic code 53
robotic minds 50
robotic pets 281, 288
robotic security industry 171
robotic space exploration 257
robotic sports 223-224, 227
robotic surgery 201-207, 214-215
robotic toys 226, 281, 295
robotic vacuum 52, 288, 294
robotic weapons 231
roboticists 181, 188, 202, 218, 223, 246, 283, 286
Robotics Institute 169, 183, 195
robotics kits 217-218, 223
Robots and Empire 49
Robots: The Movie 303
Romkey, John 352
Roomba 281, 286-288, 295, 340, 362
Rosenblatt, Frank 134
Rossum 36-42
RQ-1 Predator 234
rudimentary program 20
Russell, Bertrand 82-83
Sagan, Carl 251-252
Samuel, Arthur 133
Sargon 270
Schrödinger's cat 384-385
Schrödinger, Erwin 383-384
Schweisfurth, Tim 403
science fiction 343
Se-dol, Lee 274-275, 277-278
search engine design 312
Sebastian Schuchmann 137
security robots 170, 173-174
security systems 165, 177, 345, 353
See, Elliot 246
selenium 49-50
self-driving car 12, 160-163


servomechanisms 66-67, 88, 90
Servomechanisms Laboratory 90
servomotors 90
sex robots 312-313
Shakey 168-169, 177
Shannon, Claude 83, 270
Shaw, John Clifford 82
Shelley, Mary 33-34, 48
Shelley, Percy Bysshe 33
Shepherd, Kyle 321
Shoebox 309-310
SHRDLU 102-106, 117
sign language 303
silk textile industry 16
Simon, Herbert Alexander 82
Singer, P.W. 234
Siri 100, 125, 141, 285, 308, 311-315, 317-318, 338, 405
Sirk, Douglas 221
smart home technology 282, 345-346, 367
smart thermostats 352
smartphones 295, 351-353
smartwatch 60
Smith, David Harris 320-321, 330
Smithsonian 161, 296
Sneider, Helen Belle 152
Snopes 243, 258, 263
software and hardware 55, 60, 223
Sojourner rover 254-255
Sophia 319, 323-330
space exploration 80, 127, 155, 197, 243-244, 246, 257, 262-263, 286, 334, 343
Space Race 141, 155-156, 163, 246
SPD-13 49-50
speech recognition 308-311, 318, 323
Spencer, Percy Lebaron 348
Stanford University 106, 155-156
Star Trek 180, 334, 344, 402
Star Wars 119, 122-126, 128, 169, 180, 200, 205, 283, 285, 335


steam generators 180
Stenquist, Paul 160
Sterling, Amy 377
Stertz, Brad 160
Stevenson, Adlai 70
strategy game 267, 273-274
subatomic realm 382
subspecies 401, 407
supercomputer 271
surgical robot 201
Surveyor spacecraft 249
Sutherland, Jim 349, 352
Talos 2, 166-167, 177, 298
Tamaschelli, Dario 390
Tangora system 310
TARS and CASE 126-127
techno-utopia 282
technological revolution 36, 43
technological skepticism 151
technology gap 410
telecommute 212
telepresence robotics 211
teletanks 231-232
Tesla 158
textile industry 16-17, 20, 88
theory of relativity 384
There's Always Tomorrow 221
thermionic tubes 61
Thorpe, Marc 223
Three Laws of Robotics 47, 49, 51-53, 56, 58
Three Mile Island 180-181, 183, 187, 190, 198, 210
Total Hip Arthroplasty (THA) 202
toy robot 217, 220-221, 227, 289
transhumanism 401
triple revolution 87, 92, 94, 96, 98
Truth, Sojourner 254
Turing Test 55-58, 105-107, 116-117
Turing, Alan 47, 53-54, 58-59, 61, 75, 270
Turochamp 270


UNIMATE 90-92, 98, 370
Unimation Inc. 90
Universal Automatic Computer (UNIVAC) 68-71
Universal Automation 90
Urmson, Chris 161
US Army 62, 64, 66
US Census Bureau 70
US Department of Labor 96
US Naval Reserve 69
US Navy 67-68, 234, 286
US spacecraft 246
vacuum tubes 61-63
Valkyrie 300
VaMoRs system 157
Vaucanson, Jacques de 20
Vincent, James 274-275
Vinge, Vernor 396-397
virtual assistants 136, 285, 307-308, 311-313, 316-317, 338
Vitanov, Nikolay 389
voice assistants 141, 291, 308
von Kempelen, Baron Wolfgang 268
Voyager 1 250-251
Voyager 2 250-251, 263
Walsh, Toby 236-237
Walt Disney Company 323
Walter, Nicholas 79
Walter, William Grey 76-77, 79
War on Terror 157, 195, 235
Warraich, Haider 212
Warren, Bunny 79
Weaver, John Frank 325
Webster-Wood, Victoria 340
wedge bot 224-225
Weizenbaum, Joseph 106
Welsby, Jason 80
West, Darrell M. 374


490 | Index Wheelbarrow 194 Whirlwind computer 65, 68, 72 White, Ed 246 Whitehead, Alfred 82 Whittaker, William “Red” 181 Whitworth, J. 26 Wickersham Land Torpedo 230 Wickersham, Elmer 230 Wiikwemikong First Nation 321 Wilkinson, Stuart 81 Windows phone 315-316 Winograd, Terry 104 Winter War 231-232 Winterberg, Susan 377 wireless communication 195, 351 wireless fidelity (Wi-Fi) 351 Wollstonecraft, Mary 33, 45


World Trade Center 287 World War I 37, 230, 241, 346 World War II 61-62, 69, 71, 218, 230, 232, 234 World War II codes 61 worm holes 397 Worshipful Company of Framework Knitters 17 x-ray 181, 255 Yan Shi the Artificer 8 Yonhap 275-276 Young Rossum 37, 42 Zeller, Frauke 320 Zeroeth Law 50 Zhou Dynasty 8, 273

