Computational Modelling of Robot Personhood and Relationality (SpringerBriefs in Computer Science) 3031441583, 9783031441585

This SpringerBrief is a computational study of significant concerns and their role in forming long-term relationships between persons, whether human or android.


Table of contents :
Preface
Acknowledgements
Contents
Part I Androids: Persons and Relationships
1 Introduction
1.1 Motivation
1.2 The Pioneers
1.3 Androids
1.4 Motivational Background
References
2 Significant Concerns
2.1 Introduction
2.2 Values
2.3 Components of Value
2.4 Needs
2.5 Spiritual Concerns
2.6 Summary
References
3 Personhood and Relationality
3.1 Introduction
3.2 Personhood
3.3 Relationality
3.4 Social Objects
3.5 Spiritual Relationships
3.6 Formation of the Relationship
3.7 Summary
References
Part II The Affinity System
4 The Computational Model
4.1 Introduction
4.2 Environment
4.3 Agents
4.3.1 Message Passing
4.3.2 Value Memory
4.3.3 Connections
4.4 User Interface
4.5 Event Loop
4.6 Summary
Reference
5 Modelling Concerns
5.1 Introduction
5.2 Representing Quantities
5.3 Comparing Uninstantiated Quantities
5.4 States
5.5 Values
5.6 Attitudes
5.7 Social Objects
5.8 Summary
References
6 The Economy
6.1 The Economy
6.2 User Interface
7 Narratives
7.1 Introduction
7.2 Operations
7.3 Narratives
7.3.1 Asking for Care
7.3.2 Motility
7.3.3 Make Friends
7.3.4 Reply to Request
7.3.5 Maintain Friendships
7.3.6 Give Friends Care
7.3.7 Give Others Care
7.3.8 Find a Leader
7.3.9 Find a Follower
7.3.10 Reply to Leader
7.4 Missions
7.5 User Interface
7.6 Concurrency
7.7 Discussion
Reference
8 Analysis of Value Systems
8.1 Introduction
8.2 The Data Set
8.3 Non-negative Matrix Factorisation
8.4 Model Selection
8.5 Results
8.5.1 Prediction
8.5.2 Clustering
8.6 Summary
References
9 Conclusion
9.1 General Summary
9.2 Example Application
9.3 Beyond the Scope
9.3.1 Consciousness
9.3.2 Personality and Norms
9.3.3 Generative AI
References
Index


SpringerBriefs in Computer Science William F. Clocksin

Computational Modelling of Robot Personhood and Relationality

SpringerBriefs in Computer Science

SpringerBriefs present concise summaries of cutting-edge research and practical applications across a wide spectrum of fields. Featuring compact volumes of 50 to 125 pages, the series covers a range of content from professional to academic. Typical topics might include:

• A timely report of state-of-the-art analytical techniques
• A bridge between new research results, as published in journal articles, and a contextual literature review
• A snapshot of a hot or emerging topic
• An in-depth case study or clinical example
• A presentation of core concepts that students must understand in order to make independent contributions

Briefs allow authors to present their ideas and readers to absorb them with minimal time investment. Briefs will be published as part of Springer's eBook collection, with millions of users worldwide. In addition, Briefs will be available for individual print and electronic purchase. Briefs are characterized by fast, global electronic dissemination, standard publishing contracts, easy-to-use manuscript preparation and formatting guidelines, and expedited production schedules. We aim for publication 8–12 weeks after acceptance. Both solicited and unsolicited manuscripts are considered for publication in this series.

Indexing: This series is indexed in Scopus, Ei-Compendex, and zbMATH.

William F. Clocksin

Computational Modelling of Robot Personhood and Relationality

William F. Clocksin University of Hertfordshire Hatfield, Hertfordshire, UK

ISSN 2191-5768 ISSN 2191-5776 (electronic) SpringerBriefs in Computer Science ISBN 978-3-031-44158-5 ISBN 978-3-031-44159-2 (eBook) https://doi.org/10.1007/978-3-031-44159-2 © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland Paper in this product is recyclable.

Preface

This book emerged from the conviction that the study of Artificial Intelligence (AI) must involve more than the algorithms for problem-solving and machine learning that have been investigated for over fifty years, and that AI research needs to be attentive to the foundations that underpin and enable human intelligence. Human intelligence is based upon a foundation of human needs and desires as practised in relationships between persons and other entities, real and imagined. Certain concerns of human relationships are so important and significant to humans that they transcend ordinary human activity and discourse, and they result in meanings that can be difficult to represent and interpret in terms of information processing. If sufficient progress is made in understanding this foundation, it may be possible to formulate some information processing principles underlying human personhood and relationality, and to simulate some aspects of social intelligence in the form of android intelligence. This book freely draws concepts and notation from philosophy, psychology, sociology, theology, mathematics, and computer science. It was not written with the intention of giving a deep and thorough treatment of any one topic. I have deliberately set aside discussion of the important moral and ethical issues that exude inexorably from the topic of artificial intelligence, because such issues deserve a more comprehensive treatment than is possible here. I hope that sufficient parts of this book will be accessible to most readers, and I tender my apologies to those for whom I have not explained things well enough.

Páfos, Cyprus
August 2023

William F. Clocksin


Acknowledgements

I would like to thank Fraser Watts, Marius Dorobantu, Sara Savage, Rowan Williams, Mark Vernon, Karl MacDorman, John Barresi, Katherine A. Johnson, Léon Turner, Shalom Schwartz, Harris Wiseman, and Jane Bailey, for valuable and enjoyable conversations during the preparation of this book. I continue to be inspired by the influential memories of my mentors Roger Needham and Karen Spärck-Jones, who are no longer with us. The infelicities of prose, and the errors of thought and judgement, are solely my responsibility. I thank the Templeton World Charity Foundation for their support of this work on the project Understanding Spiritual Intelligence, TWCF-0542, awarded to the International Society for Science and Religion. The opinions expressed in this publication are those of the author and do not necessarily reflect the views of the Templeton World Charity Foundation.



Part I

Androids: Persons and Relationships

The chapters of Part I aim to arrive at a specific understanding of artificial intelligence in how it relates to and is formed by relationships between persons who participate in social groups. Persons include the hypothetical case of androids. An android is defined as a human-like robot that humans would accept as equal to humans in how they perform and behave in society. An android as defined in this book is not considered to be imitating a human, nor is its purpose to deceive humans into believing that it is a human. Instead, the appropriately programmed android self-identifies as a non-human with its own integrity as a person. Therefore, a computational understanding of personhood and how persons—whether human or android—participate in relationships is essential to this perspective on artificial intelligence. Two assumptions form the background of Part I. First, a circularity is inevitable in the definition of person and relationship, as persons are defined by the relationships in which they participate, and relationships are defined by the persons that participate in them. Second, culture and identity within a sociocultural context are both recognised as the source of all social activity. Personhood and relationality are manifested within a cultural context, and cannot be separated from it. Yet, while needs and values are defined by the cultural context, we will also refer to needs and values as relating to the internal states of individual entities who are motivated to perform personhood and relationality within the cultural context.

Chapter 1

Introduction

Abstract The introductory chapter defines the motivation for writing this book. It introduces the general topic of androids as understood in this book, and places it in the context of AI research.

1.1 Motivation

This book is motivated by the goal of working toward computational models of intelligence in the form of android intelligence. An android is a human-like robot that people would accept as equal to humans in how it performs and behaves in society. We ask what it would take to consider androids as persons who can engage in long-term relationships with humans and other androids. We explore these questions by implementing a basic agent-based simulation of a population of entities that are motivated to develop relationships with each other and to care for each other. In the course of this book we look at areas of psychology that consider human motivation and personality, and we introduce the key concepts of significant concerns, relationality, and personhood. We conclude that further research in artificial intelligence will need to focus on developing computer models of how people engage in relationships, how people explain their experience in terms of stories, and how people reason about the things in life that are most significant and meaningful to them: Their affinities, beliefs, values, needs and desires. These topics have long been studied in psychology, sociology, philosophy and theology, but studies have not always yielded theories specific enough to be defined in computational form. Therefore, one aim of research in android intelligence is to develop a computational model of reasoning about relationships and stories about relationships, and connecting these with the needs and desires that facilitate the android's engagement with society.
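What is meant by an agent-based simulation of relationship-forming entities can be made concrete with a toy sketch. The following Python fragment is purely illustrative and is not the Affinity system described in Part II; the care attribute, the decay rate, and the rule for granting care are simplifying assumptions invented for this example.

```python
import random

class Agent:
    """A toy agent with a care level and a set of friends."""
    def __init__(self, name):
        self.name = name
        self.care = 1.0          # assumed resource representing received care
        self.friends = set()

    def needs_care(self):
        return self.care < 0.5

    def give_care(self, other, amount=0.2):
        # Giving care costs the giver a little and helps the receiver;
        # the interaction also records a friendship on both sides.
        self.care -= amount / 2
        other.care += amount
        other.friends.add(self.name)
        self.friends.add(other.name)

def step(population):
    """One simulation step: needy agents ask another agent for care."""
    for agent in population:
        agent.care -= random.uniform(0.0, 0.2)   # care decays over time
        if agent.needs_care():
            helper = random.choice([a for a in population if a is not agent])
            helper.give_care(agent)

population = [Agent(f"agent{i}") for i in range(10)]
for _ in range(50):
    step(population)

for agent in population:
    print(agent.name, f"care={agent.care:.2f}", "friends:", sorted(agent.friends))
```

Even this toy loop exhibits the pattern developed in later chapters: agents whose internal needs motivate interactions with others, and interactions that leave behind a record of relationships.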


1.2 The Pioneers

Artificial Intelligence (AI) is the study of computer models of intelligent behaviour. AI has been an active area of research since the 1950s, when electronic computers became sufficiently powerful to execute programs to solve simple problems requiring logical reasoning, pattern matching, and searching for solutions to puzzles. For nearly 70 years, researchers in AI have continued to explore the idea of intelligence as problem solving. This research has resulted in computer programs that can perform tasks such as translating human speech into written words, translating documents from one language to another, planning complicated travel routes, and playing the games of Chess and Go at world-beating level. Only relatively recently have some AI researchers taken seriously the idea that intelligence may be based on principles other than a capacity to solve abstract problems through the basic operations of symbolic reasoning, pattern matching, and search. The pioneers of AI research considered AI to be a subject entirely distinct from human intelligence. From the late 1990s, the term artificial general intelligence (AGI) has been used to describe the hypothetical ability of a computer to understand or learn any intellectual task that a human can. AGI research has become extremely diverse, ranging from brain research to techniques for learning how to solve problems that are specified more generally, for example, a program that can learn how to play any game by observing examples of gameplay. AGI research also considers philosophical and ethical issues such as whether fully intelligent and conscious machines could pose an existential threat to humanity.

The term artificial intelligence originated at a workshop in 1956, where pioneers of the subject formed an agenda of work that would take us to the present day. As problem-solving was to them the most conspicuous manifestation of human intelligence, the idea took hold that intelligence is based on a capability to solve problems expressed in a symbolic form. In addition, objects and events in the world would need to be represented in symbolic form within the computational model. This assumption became known as the physical symbol system hypothesis [23] and was applied to domains of AI including proving logical theorems, planning, learning, processing human languages, visual perception, and robotics. A paper published in 1961 by one of the pioneers, Marvin Minsky [22], is remarkable because while it set out the standard AI techniques of problem-solving, search, planning, learning and logical reasoning that defined the mainstream programme of work for the next 60 years, it also anticipated and introduced two key ideas that are at the root of the most up-to-date AI systems today: Bayesian reasoning and reinforcement learning.

For about 30 years, the physical symbol system hypothesis was the main driver for AI research, and it soon supplanted other research currents from the 1950s, including cybernetics and the earliest efforts to build artificial neural networks. Because of its engineering and technological focus, it also neglected the foundational role in human intelligence of social and emotional behaviours. While pioneering AI resulted in a number of successful problem-solving systems and the establishment of a new academic subject, from the 1980s onward researchers began to ask why progress in AI research based on solving abstract problems according to the physical symbol system hypothesis seemed to stall. As an alternative, Brooks [1] proposed the physical grounding hypothesis, which held that the AI system needs to be closely coupled, through perception and action, to the real world. The world is considered to be its own best model, so an internal symbolic representation is not always necessary, and the AI system must be embodied to sense and act in the world. Brooks developed the subsumption architecture, which organises behaviour into a hierarchy of computational layers spanning the range from perception to action. Robot perception involves simple sensors such as touch, ultrasonic and visual sensors; and robot action involves actuators such as motors for locomotion and grippers for grasping objects. Each layer in the hierarchy is coupled to the next through signals that can be compared to reflexes. The work of Brooks and colleagues demonstrated that many seemingly life-like behaviours are exhibited by simple robots which use layers of direct mappings from sensors to actuators without the need for any symbolic computation.
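To make the layered organisation concrete, the toy sketch below, in Python, shows one common simplification of the subsumption idea: each layer is a direct mapping from sensor readings to an action, and a fixed arbitration scheme decides which layer's action is expressed. The layer names, sensor readings, and priority ordering are assumptions made only for illustration; this is not Brooks's implementation, which used networks of augmented finite-state machines with suppression and inhibition links rather than a simple priority list.

```python
def avoid_layer(sensors):
    """Reflex layer: turn away from nearby obstacles."""
    if sensors["proximity"] < 0.2:
        return "turn-away"
    return None

def seek_light_layer(sensors):
    """Goal-directed layer: head toward a light source when one is visible."""
    if sensors["light"] > 0.7:
        return "move-to-light"
    return None

def wander_layer(sensors):
    """Default layer: wander when there is nothing better to do."""
    return "wander"

# Layers ordered by priority: an earlier layer that produces an action
# overrides the layers after it. No symbolic world model is kept anywhere.
LAYERS = [avoid_layer, seek_light_layer, wander_layer]

def control(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action
    return "idle"

print(control({"proximity": 0.1, "light": 0.9}))  # turn-away (reflex wins)
print(control({"proximity": 0.8, "light": 0.9}))  # move-to-light
print(control({"proximity": 0.8, "light": 0.1}))  # wander
```

The point of the sketch is the one made in the text: apparently purposeful behaviour can emerge from layered sensor-to-actuator mappings alone.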
The renewed development of artificial neural networks from the 1980s onward arose from new methods for implementing trainable classifiers that overcame the limitations that artificial neural networks were previously thought to have [26]. While this has led to the new field of Machine Learning (ML) and delivered remarkable results, its connection with intelligence seems to remain within the thought world of early AI. ML techniques have been used successfully to solve problems based on improving the classification and prediction power of functions through experience of input and output data. A significant contribution of ML research has been to show how the solution of complex problems may be implemented by trainable classifiers. Systems based on ML have brought much higher levels of performance to perception and problem-solving tasks such as optical character recognition, speech recognition, coordination of robot sensors and effectors, and game playing.

One of the first challenges to the AI pioneers was the development of Affective Computing [25], the idea that understanding and reasoning about emotions has an important part to play in human-computer interaction. Picard was one of the first to argue that AI research should take seriously the role and purpose of human emotions (or ‘affect’) when designing intelligent machines. Emotion is a diverse and complicated concept that is richly contested by psychologists and that spans many scientific categories. AI researchers tend to remain agnostic about the biological and psychological definitions of emotions, and consider emotions in terms of their social performance, for two different reasons. The first reason is to improve communication between humans and computer-controlled devices such as vending machines and vehicles. Machines that understand human emotions could potentially achieve a higher quality of interaction with people and improve the user's experience of interacting with computers. If AI research can arrive at a computationally effective model of how people use emotion to understand each other's mental states and intentions, devices could in principle be programmed both to perform a kind of emotional display to a user and to understand emotions that are displayed by users. The aim would be to improve the perceived usability of the device. Currently, working implementations of this are in early stages, and there is scope for further development. The second reason is to raise questions about whether computers have the potential to have emotional experiences themselves. This suggests a longer-term goal: To be able to model human emotion so well that an intelligent android could be programmed to ‘have’ emotions and use them in a way that people would accept as genuine.

The AI pioneers tended to think of emotions as undesirable impediments that can lead to error, or at best, optional extras to improve performance in certain situations such as reacting quickly to a threat. And yet, emotion must have a fundamental role to play in human cognition and problem-solving, because it is clear that people are able to express meaning not only in propositional form but also as the experience of felt meanings [27]. One challenge to the traditional AI approach proposes that emotions should take a foundational role in AI models as an underlying substrate in the cognitive architecture. This foundational role includes mediating the bonding process when relationships are formed through need, desire, and affinity, and being involved with connecting valuations to sensations within the body. We feel that emotions should be modelled in such a way that they are assumed to be not simply physiological changes, nor perceptions of those changes, nor simply social constructions, but episodes extended in time that unify physiological, cognitive, perceptual and social elements. This way of thinking about emotions will not be covered in this book, although it has influenced the direction that we take. It remains a topic for future study.

From the 1990s, some AI researchers have considered that for AI to make progress in achieving human levels of intelligence, study should focus on distinctive characteristics of human intelligence. These characteristics include not only the ability to experience and understand emotions, but also the ability to tell and understand stories, to engage in long-term relationships, and to construct meaning from engagement in social situations. These characteristics go beyond intelligence as abstract problem-solving, and are part of what it means to be a person. Therefore, a new computational framework for AI that moves toward android intelligence needs to consider not only problem-solving, but personhood.

1.3 Androids

This book is focused on android intelligence, which goes beyond the problem-solving emphasis of artificial intelligence. An android is defined here as a human-like robot that people would accept as equal to humans in how the android performs in society. An android is not considered to be imitating a human, nor is its purpose to deceive humans into believing that the android is a human. An android is not a duplicate of a human being, nor can it legitimately claim to be the same as a human. Instead, the android self-identifies as a non-human with its own integrity as a person. Some of the terms used in this definition need further explanation. We recognise that human and robot are distinct ontological categories [12]: Robots and humans have entirely different origins and are different kinds of things. However, the android is a non-human robot that can engage in human society because the android has been designed and programmed to resemble humans in how it engages in relationships with humans. Androids in this sense do not yet exist, but are considered as thought experiments that can help to explore new theories for AI.

But first, we should admit there is a great deal about robots that we will not cover in this book. We will not explore the philosophical arguments of whether an android can ‘pass for’ a human, nor whether an android is a simulation or a duplication of human subjectivity and capabilities. As Picard (p. 136) says,

Even if someday humans succeed in constructing conscious computers and machines with synthetic sensations, there is no assurance that such mechanisms can duplicate the experience of human subjective feelings. Our feelings arise in a living and complex biological organism, and this biology may be uniquely able to generate feelings as we know them. Biological processes may be simulated in a computer, and we may construct computational mechanisms that function like human feelings, but this is not the same as duplicating them.

We will not go into legal, moral, or political arguments about whether androids have rights and responsibilities analogous to human rights and responsibilities. Some of these arguments arise because of a technical definition of the term ‘person’, which includes non-human legal entities such as companies. And these arguments can arise because of the similarity to, and the relationships that humans may have with, robots. These are important arguments that raise interesting questions, but they will not be covered in this book. We will also set aside ontological speculation about whether the appropriately programmed android ‘really has’ consciousness or cognitive states. We will simply state that the android as considered here is not programmed to deliberately imitate a human, nor is its purpose to deceive humans into believing that the android is a human. Even so, an appropriately designed android may in certain circumstances pass as human or deceive humans into identifying it as human, but that is not the purpose discussed here. The existence of androids may bring up further philosophical questions about sincerity and authenticity, which are also not topics covered in this book.

We will use the term person in a special way. In normal discourse, we refer to humans as persons or people, and there is a vast academic literature on the definition and ontological status of persons and personhood, mainly but not exclusively referring to humans [20]. However, in technical discourse, the term person is used in specific ways. For example, there is the concept of a legal person, which refers to a human or non-human entity that is treated as a person for limited legal purposes. A non-human entity could be an organisation recognised by law such as a corporation. Typically, a legal person, human or otherwise, can bring lawsuits and be sued, own property, and enter into contracts. Another example comes from traditional Christian theology, where there is a concept of person as an individual substance having a rational nature. Not only humans, but other types of entities, can be persons. This notion of person allows the distinct roles of God to be understood as persons, and offers a basis for theological reflection on Trinitarian mysteries. More recent theology stresses a relational view of personhood by rooting the emergence of the human self in relationship. Other religions that are not monotheistic may have a panoply of deities that are recognisably persons and in some cases behave as humans do. For the purposes of this book, we use the term person to refer to an entity, not necessarily human, that can perform personhood, and under the right circumstances be identified or recognised as a person because it engages in relationships with other persons, human or non-human. This moves away from the concept of person as an ontological category, and towards the idea that personhood only makes sense as a relational notion and is only realisable and expressible in community.

Finally, there is a difficulty when we say that an android is accepted as ‘equal’ to humans. The android can never be identical to a human being, as androids and humans have entirely different origins and physical makeup, so ‘equal’ in this case cannot mean ‘identical’, nor can it mean ‘of the same ontological category’. We also do not use the term equal in the same way as equality under the law. Humans can be harsh judges of what constitutes equality in society: It is disgraceful that some human societies, both historical and in the present day, have considered even other humans to be unequal in society. Furthermore, it is recognised that humans vary widely in abilities, achievements, appearance and behaviour and are still considered human, so equality here does not imply equality in functionality and appearance. Instead, we say that an android is accepted as ‘equal’ to humans when it is generally recognised as a person, however non-human, and accepted as taking part in social relationships generally as a human would. This definition therefore looks to the relational and performative aspects of personhood that will be explored further in Chap. 3.

Androids as defined here do not yet exist, but further research in Artificial Intelligence (AI) may someday culminate in a functioning android. Some areas of AI research have investigated how robots may function fluently in human society as socialised entities. In the area of Android Science [18, 19], the main ideas are to do with the physical appearance of androids, and how people react affectively to robots that resemble people in their physical appearance. The area of Social Robotics [13] is another approach to social intelligence, where research focuses on the social skills needed by robots for comfortable human-robot interaction (HRI). Such social skills include respect for personal space and the physical dynamics involved with interaction. Broader questions about robot personhood have also been posed from a social robotics perspective [16]. There also seems to be a connection between social fluency and the human capability for anthropomorphism, in which we attribute human characteristics to non-human entities. In one study of how people relate to self-driving cars [29], participants using a driving simulator drove either a normal car, an autonomous vehicle able to control steering and speed, or a comparable autonomous vehicle augmented with additional anthropomorphic features: name, gender, and voice.
The results revealed that participants believed that the more anthropomorphic features the vehicle had, the more competently the vehicle would perform.


While the physical appearance and social skills of the robot can mediate rapport and emotional connection, the definition of android considered in this book is not about appearance, but about meaning in the relationships in which the android participates, and about how participation in meaningful relationships can be a mark of personhood. Examples of androids are provided by science fiction. In the popular Star Wars films, the humanoid robot C-3PO is considered an android by the definition used here. The robot C-3PO is obviously artificial in appearance, but it is fluent in spoken communication and employs its humanoid body in ways meaningful to other humanoids in social discourse. It engages in relationships with the humans around it to the extent that its mechanical appearance is considered inconsequential by the other characters portrayed in the films. Other fictional androids, such as Mr Data in the popular Star Trek: The Next Generation television series, resemble people to an extent both in appearance and in the quality of relationships, and would be considered an android by any definition. The future emergence of androids would pose challenges to human society, ranging from the existential to the legal, ethical and practical. While others have written extensively on these important topics, this book will confine itself to the information processing aspects of androids in simulated society, and will not address the effects of android intelligence on human society.

1.4 Motivational Background

The rationale of this book can be better understood by seeing how the ideas presented here have developed over a period of decades. I have been engaged in research on Artificial Intelligence since the mid-1970s, having published my first journal article in 1980 [2]. Following the trends of the time, I considered human intelligence as the capacity to solve problems in the perception and manipulation of physical objects, and assumed that intelligence is based on a capability to solve abstract problems expressed in a symbolic form. In addition, objects and events in the world would need to be represented in symbolic form within the computational model. This collection of ideas was known as the physical symbol system hypothesis [23]. Following this mainstream, I was inspired by an approach to problem solving based on reasoning using various types of formal logic such as first-order predicate calculus and modal logics. As a keen computer programmer, I was delighted to learn that a programming language based on predicate logic using Horn clauses [28] was under development at the time. I was tutoring graduate students in AI programming using LISP and POP-2, and another researcher in the department was tutoring students in programming in the new language called Prolog. While my students were learning the fundamentals of list processing in preparation for writing AI programs, the Prolog students were already writing interesting AI programs for solving puzzles and parsing a subset of English. Seeing the advantages of writing programs in what became known as a declarative style instead of an imperative style, I began learning and teaching programming in the Prolog language [4, 10], being convinced at the time that the AI systems of the future would be built upon mechanisms for logical problem solving. However, from the 1980s onward, some researchers began to ask why progress in AI research based on solving abstract problems according to the physical symbol system hypothesis seemed to stall. In particular, Drew McDermott [21] argued that

...the skimpy progress observed so far is no accident, and in fact it is going to be very difficult to do much better in the future. The reason is that the unspoken premise ...that a lot of reasoning can be analysed as deductive or approximately deductive, is erroneous. (p. 151)

I agreed that AI should be taking a new direction, and I eventually arrived at the conclusion that mainstream AI research tended to neglect the foundational role in human intelligence of social, emotional and narrative behaviours characteristic of all human activity. Many of these behaviours are unique to humans, so they may have something to do with the uniquely human capabilities involving intelligence. This way of conceptualising AI was at odds with the idea of intelligence as abstract problem-solving. I had also been inspired by the ecological perception approach of the psychologist J.J. Gibson [15]: that an organism's problems of perception and action are defined within its ecological niche, and that selection pressures over evolutionary timescales select for organisms that solve these problems. I became disenchanted with the physical symbol system hypothesis, and began looking at ways in which intelligence might be mediated not by formal problem solving using abstract representations, but by some type of computational process based on time-extended narrative forms that might be directly associated with emotional and social experience within the ecological niche.

Many AI researchers at the time reacted to their disenchantment with the logical reasoning approach by switching attention to the field of Machine Learning (ML). However, I considered ML to be another form of problem solving, this time using trainable classifiers that could approximate solutions from examples of input. ML techniques have many advantages, but having also been involved with ML research [11, 17, 24], I felt that the ML approach by itself would not be able to address the big questions of how intelligence emerges from social interaction in the environment.

I began by looking at the way humans organised and communicated knowledge and meaning not in abstract formal terms, but in the form of myths. Myths are a basic form of story-telling through which humans have made sense of the world and of relationships with each other. For millennia, stories in the form of myths have been acted, danced, sung, and written, passed down the generations. The word ‘myth’ has a bad reputation in science, because it is associated with the primitive, archaic, untrue, and irrational: The stories told by myths are so foreign to experience that, it is assumed, they cannot be true. But myths cannot be assessed purely on their literal surface content. Metaphor and myth both constitute legitimate ways of expressing the meaning and structure of the knowledge and concerns of all human beings. Most of human experience is uncertain, imprecise, contradictory, confusing, and paradoxical, and humans negotiate all this with more or less success depending upon their disposition and capabilities. The extraordinary stories in myths offer ways to give meaning to the puzzles of ordinary human experience. Surely it is more interesting to understand how intelligent systems might handle imprecise and contradictory aspects of experience without necessarily arriving at a truth-consistent conclusion, than it is to understand how logical conclusions can be drawn by straightforward deduction from a set of axioms. I suggested ways in which the representation of knowledge by machine might be interpreted according to a hermeneutic of myth and archetype [3].

This led to an interest in how a surprisingly large amount of human knowledge and behaviour seems to be constructed by engagement in society and story-telling within society, rather than being a result of empirical and logical reasoning. Following the ‘social constructionist’ path [14], I argued that an individual's intelligent behaviour is shaped by the meaning ascribed to experience, by its situation in the social matrix, and by practices of self and of relationship into which its life is recruited [5]. This perspective was at odds with the dominant structuralist (behaviour reflects the structure of the mind) and functionalist (behaviour is determined by mental components) perspectives that informed AI researchers at the time. Further work attempted to understand the close connection between emotion and memory [7], and attempted to put this together in the form of an overall conceptual framework for AI that gives priority to the social construction of an identity or ‘self’ by engagement with social relationships. This framework suggests that intelligence is constructed through a capacity for relationality (engagement in relationships) that in turn is mediated by a capacity for affect and experiencing felt meaning [6]. If AI is to take the form of androids that can interact convincingly with people in the ways that science fiction stories tell us, then the priority for how intelligent systems behave must be forming and maintaining relationships with people and other androids.

Further exploration of this area convinced me of a fact that social psychologists have studied for decades: That people engage not only with other people in social relationships, but also make sense of their experience by relating to imaginary entities that are like people but whose personhood is implied. Could the same cognitive mechanism that governs our relationships with real people also be employed in our relationships with imaginary entities such as characters in stories? There seems to be a common factor not only in relationality, but in personhood: Human cognitive abilities seem to be employed in engaging in relationships with persons, whether real or imaginary or non-human, and these abilities will have evolved to satisfy human needs and desires. The areas of interest informing my views on AI therefore widened from social psychology to what is called transpersonal or spiritual psychology, which is about the human need to process attitudes and goals that are highly valued, meaningful, and significant. Such attitudes and goals can be described as transcendent, because they transcend the everyday human needs of food and shelter.
They are about the human need to connect with or belong to something greater than oneself. These attitudes and goals can also be described as self-actualising, that is, expressing the human need for self-fulfilment. These ideas led to further work in attempting to understand how the mind of an android must function if it is to be accepted as equal to humans in how the android performs and behaves in society [8, 9]. Further exploration of these ideas, together with writing software to implement an agent-based simulation of entities that form and handle social relationships, led to the writing of this book.

References

1. Brooks, R.: Elephants don't play chess. Robot. Autonom. Syst. 6(1), 3–15 (1990)
2. Clocksin, W.F.: Perception of surface slant and edge labels from optical flow: a computational approach. Perception 9(1), 253–269 (1980)
3. Clocksin, W.F.: Knowledge representation and myth. In: Cornwell, J. (ed.) Nature's Imagination, pp. 190–199. Oxford University Press (1995)
4. Clocksin, W.F.: Clause and Effect: Prolog Programming for the Working Programmer. Springer, Berlin (1997)
5. Clocksin, W.F.: Artificial intelligence and human identity. In: Cornwell, J. (ed.) Consciousness and Human Identity, pp. 101–121. Oxford University Press (1998)
6. Clocksin, W.F.: Artificial intelligence and the future. Philos. Trans. Roy. Soc. A 361, 1721–1748 (2003)
7. Clocksin, W.F.: Memory and emotion in the cognitive architecture. In: Davis, D.N. (ed.) Visions of Mind: Architectures for Cognition and Affect, pp. 122–139. Idea Group Pub. (2005)
8. Clocksin, W.F.: A social and computational perspective on spiritual intelligence. In: Watts, F., Dorobantu, M. (eds.) Perspectives in Spiritual Intelligence (in press) (2023)
9. Clocksin, W.F.: Guidelines for computational modelling of friendship. Zygon (in press)
10. Clocksin, W.F., Mellish, C.S.: Programming in Prolog. Springer (1981)
11. Clocksin, W.F., Moore, A.W.: Experiments in adaptive state-space robotics. In: Cohn, A.G. (ed.) Proceedings of the Seventh Conference of the Society for the Study of Artificial Intelligence and Simulation of Behaviour, pp. 115–125. Pitman (1989)
12. Dautenhahn, K.: Socially intelligent agents in human primate culture. In: Payr, S., Trappl, R. (eds.) Agent Culture: Human-Agent Interaction in a Multicultural World, pp. 45–71. CRC Press (2004)
13. Dautenhahn, K.: Socially intelligent robots: dimensions of human-robot interaction. Philos. Trans. Roy. Soc. B 362, 679–704 (2007)
14. Gergen, K.: An Invitation to Social Construction, 3rd edn. SAGE Publications Ltd (2015)
15. Gibson, J.J.: The Senses Considered as Perceptual Systems. Houghton Mifflin (1966)
16. Jones, R.: Personhood and Social Robotics: A Psychological Consideration. Routledge (2016)
17. Ladicky, L., Sturgess, P., Russell, C., Sengupta, S., Bastanlar, Y., Clocksin, W.F., Torr, P.H.S.: Joint optimisation for object class segmentation and dense stereo reconstruction. Int. J. Comput. Vis. 100, 122–133 (2012)
18. MacDorman, K.F.: Introduction to the special issue on android science. Connect. Sci. 18(4), 313–318 (2006)
19. MacDorman, K.F., Ishiguro, H.: The uncanny advantage of using androids in social and cognitive science research. Interact. Stud. 7(3), 297–337 (2006)
20. Martin, J., Bickhard, M. (eds.): The Psychology of Personhood: Philosophical, Historical, Social-Developmental, and Narrative Perspectives. Cambridge University Press, Cambridge (2012)
21. McDermott, D.: A critique of pure reason. Comput. Intell. 3, 151–160 (1987)
22. Minsky, M.: Steps toward artificial intelligence. Proc. IRE 49(1), 8–30 (1961)


23. Newell, A., Simon, H.A.: Computer science as empirical inquiry: symbols and search. Commun. ACM 19(3), 113–126 (1976)
24. Nopsuwanchai, N., Biem, A., Clocksin, W.F.: Maximization of mutual information for offline Thai handwriting recognition. IEEE Trans. Pattern Anal. Mach. Intell. 28(8), 1347–1351 (2006)
25. Picard, R.: Affective Computing. MIT Press (1997)
26. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature 323, 533–536 (1986)
27. Smith, Q.: The Felt Meanings of the World. Purdue University Press (1986)
28. van Emden, M.H., Kowalski, R.A.: The semantics of predicate logic as a programming language. J. ACM 23(4), 733–742 (1976)
29. Waytz, A., Heafner, J., Epley, N.: The mind in the machine: anthropomorphism increases trust in an autonomous vehicle. J. Exp. Soc. Psychol. 52, 113–117 (2014)

Chapter 2

Significant Concerns

Abstract This chapter reviews the main concepts around significant concerns and how they are needed for android intelligence, using Maslow’s hierarchy of needs, Almquist, Senior and Bloch’s hierarchy of elements of value, and Schwartz’s Basic Values.

2.1 Introduction

This chapter introduces the idea of significant concerns as a set of values, attitudes, preferences, and affinities that are held to be particularly highly valued and meaningful to a person: The means through which a person may find deeply held identity, purpose, and transformation. Significant concerns often engage the emotions and senses in a way that simply holding an opinion or belief may or may not. For example, experiencing a significant concern may provoke deep feelings of awe and wonder in a way that deciding what to have for lunch probably does not, even if the lunch decision involves a rich array of preferences and values. A relationship with another person or entity may form around significant concerns, and as a result may engender feelings of, for example, self-sacrifice or permanent commitment, in a way that a relationship with a business partner may not. In some cases significant concerns need not be accompanied by emotions. For example, a meditative state may be highly significant and yet the aim of the activity could be to experience an absence of emotions and self. Significant concerns also include what Emmons [4] has called ultimate concerns. Conversely, concerns that we do not consider to be significant can also be accompanied by emotions. For example, a person may feel passionate about a traditional way to cook a favourite meal. Such feelings are important because the person experiencing them strongly holds values related to, for example, tradition and family ties. The differences between significant and non-significant concerns and the emotions that accompany them cannot be rigidly defined, but may depend upon individual differences and circumstances. Nevertheless, it is possible to provide a usable way forward based on the models of needs and values discussed in this chapter.



In what follows we will look at some previous models of needs and values and see how significant concerns fit into these models. The psychological literature on needs and values is vast, and we will touch only upon a few of the best known treatments. These topics will be well known to psychologists and sociologists, but this chapter may help researchers in artificial intelligence to appreciate why these topics may be relevant to the design of androids.

2.2 Values Significant concerns can be identified with specific values in several different ways. Basic human values are an increasingly popular concept in psychology and sociology, and they have been widely used to explain and predict various kinds of behaviour, attitudes, and choices [5]. Here we will propose that the intelligent android will need to have its own set of values to understand how to develop relationships with other members of society. The term ‘value’ is a difficult one to use here in the context of computer science and mathematics, because it has a meaning quite different from the one used in psychology and sociology. In computer science, a value is a data object. In a computer program, a value can be associated with a variable name. The object could be a number or some other data structure, and a variable name can used to identify the object or to ‘stand for’ the object. For example, depending on the programming language used, we might use any of the following statements to associate the numeric value 17 with the identifier a: a ← 17 a = 17 a := 17 In each of these statements, we have given the identifier (or variable) a the value 17. However, in this book we will use the term ‘value’ to refer to human values in the sense used in psychology and sociology, and we will use another word such as ‘quantity’ or ‘data object’ to refer to a value in the mathematics sense. For the sake of convenience and conventional usage, we will also use ‘value’ in the mathematics sense when the context is unambiguous, such as in the phrase x has a value in the range 0.0 ≤ x ≤ 1.0. Human values are abstract concepts that are considered important in a society or culture, such as honesty, belonging, security, tradition, and benevolence. Values are relevant to every part of human behaviour, and are involved in the choices we make such as who we befriend, where we live, what we think is important, the clubs to which we belong, and more. Values affect not only personal well being, but also our relationships with other people. Holding a value motivates people to pursue goals that are consistent with the value. There is a vast academic literature on values in psychology, sociology, philosophy, theology, and law. Lists of hundreds of words that describe core values are widely used in the popular self-help and wellness literature,
and core values are an essential ingredient in leadership and organisational studies [6]. We will consider a relatively narrow view of values to do with the individual's motivation for behaviour in society. Human values have been represented in a variety of ways, such as the VIA-IS model of 24 strengths [10]. We take as our starting point Schwartz's theory of Basic Values [12]. Schwartz has summarised six main features of values that are implicit in the psychology literature, which we paraphrase here from [12]:

1. Values are beliefs linked to feelings. People for whom independence is an important value become aroused if their independence is threatened, despair when they are helpless to protect it, and are happy when they can enjoy it.
2. Values refer to desirable goals that motivate action. People for whom social order, justice, and helpfulness are important values are motivated to pursue these goals.
3. Values transcend specific actions and situations. Obedience and honesty values, for example, may be relevant in the workplace or school, in business or politics, with friends or strangers. This feature distinguishes values from norms and attitudes that usually refer to specific actions, objects, or situations.
4. Values serve as standards or criteria that guide the selection or evaluation of actions, policies, people, and events. People decide what is good or bad, justified or illegitimate, worth doing or avoiding, based on possible consequences for their values.
5. Values are ordered by importance relative to one another. People's values form an ordered system of priorities that characterise them as individuals. Do they attribute more importance to achievement or justice, to novelty or tradition? This hierarchical feature also distinguishes values from norms and attitudes.
6. The relative importance of multiple values guides action. Any attitude or behaviour typically involves more than one value. For example, attending the meetings of an organisation might express and promote tradition and conformity values at the expense of hedonism and stimulation values. The tradeoff among relevant, competing values guides attitudes and behaviours. Values influence action when they are relevant in the context (hence likely to be activated) and important to those who hold the value.

Over a period of decades, involving surveys of over 25,000 people in 44 countries over a range of cultures, Schwartz and colleagues have investigated whether there are universal values that are considered important by all, or almost all, people. They suggest there are ten general types of universal value. Each of the ten values is defined in terms of the broad goal it expresses together with some related concepts (paraphrased from [12]):
• Self-direction: creativity; freedom; independence; curiosity; choosing your own goals
• Stimulation: daring activities; varied life; exciting life
• Hedonism: pleasure; enjoying life
• Achievement: success; capability; ambition; influence; intelligence; self-respect
• Power: authority; leadership; dominance; social power; wealth
• Security: cleanliness; family security; national security; stability of social order; reciprocation of favours; health; sense of belonging
• Conformity: self-discipline; obedience
• Tradition: accepting one's portion in life; humility; devoutness; respect for tradition; moderation
• Benevolence: helpfulness; honesty; forgiveness; loyalty; responsibility; friendship
• Universalism: broadmindedness; wisdom; social justice; equality; a world at peace; a world of beauty; unity with nature; protecting the environment; inner harmony.

An early version of the value theory [11] considered the possibility that spirituality might constitute a universal value. The defining goal of spiritual values is finding meaning through transcending everyday reality. However, Schwartz claimed that spirituality did not demonstrate a consistent meaning across cultures, and spirituality was dropped from the theory despite its potential importance in many societies. Another way to recognise the importance of spirituality is not to consider it as a specific value, but to use the term to describe a cluster or category of existing values or needs. The fact is that spirituality may be expressed or experienced in diverse ways. For some, spirituality is about valuing order and tradition; for others, spirituality is associated with freedom and universalism. For others, spirituality is about an encounter with mystery. For others, spirituality is associated with religious practices. This diversity may be why spirituality does not have a consistent meaning across populations, and yet individuals may consider it a significant concern. Another reason demonstrated by other studies may be that the relationship between spirituality and religiosity is complex and not well understood [14], and this may influence a cross-cultural diversity in the understanding of spirituality.

Schwartz's study asked respondents to rate how important a set of nouns and adjectives were to them as a guiding principle in their life, on a 9-point scale ranging from supreme importance to opposed to their values. People tended to rate most values as varying between mildly and very important. A relative importance for each value was defined as the average importance given by the population of respondents. Schwartz's study used the relative importance results to create a value structure, which depicts a structure of conflict or consistency between values. Some values may conflict with other values and may also be consistent with certain other values. For example, achievement values may conflict with benevolence values: Seeking success for oneself may conflict with actions that could improve the welfare of others who need help. Achievement and power values are usually compatible: Seeking success for oneself tends to strengthen and to be strengthened by actions aimed at enhancing one's own social position and authority over others. Another example: Pursuing novelty and change might conflict with preserving time-honoured
traditions. By contrast, pursuing tradition values is consistent with pursuing conformity values. Schwartz organised the values as a circular structure along two bipolar dimensions. The first dimension represents openness to change versus conservation, which contrasts independence and obedience. The second dimension is self-enhancement versus self-transcendence, which contrasts the interests of oneself with the welfare of others. In this book we use Schwartz's set of ten named Basic Values, but we do not use the value structure. There is no doubt that humans experience and report values along the dimensions covered in Schwartz's value structure analysis. However, for the purposes of the agent-based simulation in this book, we do not impose a structure, but we allow a structure to emerge from the simulation. Precisely which structure emerges will depend upon the initial conditions of the simulation, and how values are used in the model to influence the actions taken by the agents.

It is necessary also to make the philosophical point here that values are abstract concepts that exist within a society. When a person is asked whether they regard a value as important, they are recruiting the concept from the culture and understanding it in a way that makes sense to them. In this way, humans can be said to 'possess' or 'hold' an internal set of values. People may interpret the same value differently, and people may hold a particular value by an amount that differs from other people. For example, one person may highly value tradition and belonging, and be a fervent supporter of a particular sports team. Another person may set less store by those same values, and take no interest in supporting a sports team or belonging to any club. Such individual differences arise because values can be held by a greater or lesser amount by individuals. Therefore we need to distinguish between the abstract concept of a value, and the particular instance or state of a value that is possessed by a person to a greater or lesser degree.
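
To make this distinction concrete, the sketch below (in Python, purely as an illustration; the names and representation are our assumptions rather than the implementation described in Part II) treats the ten Basic Values as abstract labels shared by the culture, while each individual holds its own instance amounts for those labels.

from enum import Enum

class BasicValue(Enum):
    # Schwartz's ten named Basic Values: abstract concepts that circulate in the culture
    SELF_DIRECTION = 'self-direction'
    STIMULATION = 'stimulation'
    HEDONISM = 'hedonism'
    ACHIEVEMENT = 'achievement'
    POWER = 'power'
    SECURITY = 'security'
    CONFORMITY = 'conformity'
    TRADITION = 'tradition'
    BENEVOLENCE = 'benevolence'
    UNIVERSALISM = 'universalism'

# One person's held (instantiated) values: the same abstract labels,
# but possessed to a greater or lesser amount in the range 0.0..1.0.
held_values = {v: 0.5 for v in BasicValue}
held_values[BasicValue.TRADITION] = 0.9    # e.g. a fervent supporter of tradition and belonging
held_values[BasicValue.STIMULATION] = 0.2  # little interest in novelty

A second person would hold a different dictionary of amounts over the same abstract labels, which is all that the distinction between concept and instance requires at this stage.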

2.3 Components of Value Using the idea of a value instance, we augment the model of values in the following way. For the purposes of the computational model explored later, we define an instance of a value as having three components: importance, degree, and intensity. Each component can be represented by a numerical quantity indicating the amount of the component. First, there is the importance with which an individual holds a value. The importance of a value is what the person reports as a guiding principle in their life. A person can report that they consider honesty to be an important value, because it promotes openness in relationships and orderliness in society. However, the same person may engage in deceit and cheating when it suits them. This suggests the person has a lower degree of honesty, the second component, which is the amount by which the value governs decisions. The third component is the intensity with which a value is held, which governs the amount of action the individual takes relating to this value. We suggest that this interaction of degree, importance and intensity of
an individual’s set of values may help to account for the complexity, inconsistency, conflict, and paradox of human behaviour, and we will return to this point in Part II where this model is implemented in a multi-agent simulation.
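
As a minimal sketch of this three-component model (again in Python and again only illustrative: the class and field names are our assumptions, and quantities are taken to lie in the range 0.0 to 1.0 as elsewhere in this book), a value instance might be represented as follows.

class ValueInstance:
    """One person's instance of an abstract value such as honesty."""
    def __init__(self, importance, degree, intensity):
        self.importance = importance  # what the person reports as a guiding principle
        self.degree = degree          # the amount by which the value governs decisions
        self.intensity = intensity    # the amount of action taken relating to the value

# A person who reports honesty as very important, yet whose decisions and
# actions only partly follow it:
honesty = ValueInstance(importance=0.9, degree=0.4, intensity=0.3)

The gap between importance on the one hand and degree and intensity on the other is what allows the model to express the kind of inconsistency described above.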

2.4 Needs Values are associated with needs and desires of the person. A well-known framework for representing needs and desires is Maslow's hierarchy of needs [7, 8]. Maslow's original model consisted of five levels or layers of a hierarchy, with physiological (survival) needs at the bottom, and the more creative and intellectually oriented 'self-actualisation' needs at the top. Starting at the bottom, physiological needs are requirements for survival, such as food, shelter, sex and sleep. The next level up is about safety needs, which exist because people want to experience order, predictability and control in their lives, for example emotional security, financial security, freedom from fear. The next level up is about love and belongingness needs: There is a need for interpersonal relationships, affiliations, connectedness, and being part of a group. The fourth level up includes esteem needs, which are about self-worth, accomplishment and purpose. The top level includes self-actualisation needs, which refer to the realisation of a person's potential, self-fulfilment, personal growth, and what Maslow calls 'peak experiences'. Peak experiences are moments of significant meaning to an individual, and can be accompanied by feelings of awe, joy, and wonder.

Maslow later [9] expanded the model to include seven layers. The extra layers represent cognitive needs, aesthetic needs, and transcendence needs. Cognitive needs include knowledge and understanding, curiosity, and the need for meaning. Aesthetic needs are to do with the appreciation of beauty and form, and transcendence needs are about values that transcend the personal self. We associate several of Maslow's categories with significant concerns, in particular, Transcendence, Self-Actualisation, and Belonging and Love. The fact that these layers are not found together in Maslow's hierarchy probably indicates a theoretical order in which needs are satisfied rather than an ordering of significance.

Almquist, Senior and Bloch [1] (henceforth abbreviated ASB) have designed a model consisting of 30 elements of value that address four kinds of needs they call (in order from bottom to top of the hierarchy) Functional, Emotional, Life Changing, and Social Impact. Their model extends Maslow's insights by focusing on the behaviour of people around products and services. Functional needs are about elements such as saving time, reducing risk and cost. Emotional needs involve elements such as reducing anxiety, providing fun, entertainment, and wellness. Life Changing elements include self-actualisation, providing hope, and affiliation/belonging. The Social Impact level is about self-transcendence. From the perspective represented in this book, these products and services can be interpreted not in terms of business and marketing as ASB do, but as characteristics of social objects around which people form and maintain relationships through interaction.

The elements in the top two levels of the pyramid – Social Impact and Life Changing – may be described as significant or spiritual concerns because they transcend basic functional values and physical needs, and they provide the potential for personal and social transformation. Many of the elements in the Emotional level are essential to significant concerns, but the Emotional level focuses more on its foundational role for the person that includes mediating the bonding process when relationships are formed through need, desire, and affinity, and in connecting valuations to sensations within the body [2] to generate ‘felt meanings’ [13]. It is important to note that these hierarchical formats should not be considered rigid. The lower level needs tend to be more-or-less met before higher level needs, but it is not the case that people satisfy all their needs at one level before moving on to the next level. Also, the order in which needs are satisfied may be flexible based on circumstances or individual difference. Furthermore individuals may consider a particular need to differ in importance: One individual may be primarily motivated by financial gain, where another may be motivated by fame and esteem, and some individuals may consider both financial gain and fame as unimportant. For these reasons, our model will represent needs in a different way, not by using a hierarchy, but simply as a vector of quantities with no implied priority. This representation does not imply that needs and values do not act at different levels of priority. There is no doubt that humans prioritise needs and values, and manage to map needs and values onto particular judgements and behaviours. However, for the purposes of the computational model in Part II of this book, we are interested in how this prioritisation and mapping may take place, and how it may emerge from the social environment over a period of time. It is clear [5] that the representation of needs and values at multiple levels may help to explain some of their effects in human behaviour, however our approach is to represent needs and values as independent variables, and then see if and how dependencies between them arise as a result of the agent-based simulation. By initialising the variables to random quantities that represent individual differences, we may also see if needs and values have an effect upon how the simulated entities relate to each other, for example whether they offer care and affiliation to each other.
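
As a minimal sketch of this flat, non-hierarchical representation (the selection of names, and the choice of a uniform random initialisation, are our illustrative assumptions rather than details of the Affinity implementation), needs and values can be held as a single vector of independent quantities:

import random

CONCERN_NAMES = [
    # values and needs held at the same level, with no implied priority
    'benevolence', 'universalism', 'security', 'tradition', 'power',
    'safety', 'belonging', 'esteem', 'self-actualisation', 'transcendence',
]

def initialise_concerns():
    """Each agent receives its own random quantities in 0.0..1.0,
    representing individual differences."""
    return {name: random.random() for name in CONCERN_NAMES}

population = [initialise_concerns() for _ in range(100)]

Any prioritisation or dependency between these quantities is then something to be observed in the behaviour of the simulated agents, rather than something imposed by the data structure.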

2.5 Spiritual Concerns Significant concerns are related to the ideas described in the psychology literature as spiritual intelligence [3, 17], which is understood as an adaptive intelligence that enables people to develop their values, vision, and capacity for meaning. By this means, people are enabled to transcend everyday physical and material needs in order to realise a fullness of human potential. We take both significant concerns and spiritual intelligence as relating to the capability of a cognitive entity—whether biological or artificial—to reason and act according to its significant concerns and the significant concerns of those with whom it is in relationship.

Using the adjective ‘spiritual’ can be problematic because of diverse meanings that have emerged through history and culture. The term has a concrete meaning in connection with the history of religious and theological thought around topics such as worship, prayer, and mysticism. However, the popular use of the term ‘spiritual’ has become transformed into a synonym of terms such as incorporeal, immaterial, or supernatural; or has become related to concepts such as religiosity or lack of religiosity, the inner life, subjective well-being, the meaning of life, mythology, or paranormal beliefs. And, there are those who insist that anything associated with the term ‘spiritual’ has no place in a scientific study. Instead, this book will use the term ‘significant’ in a specific and bounded way that is similar to how ‘spiritual’ is used by some authors in the spiritual intelligence literature [3, 16]. The notion of spiritual intelligence used in this book is more limited than that of Vernon [15], who writes of spiritual intelligence as the human capacity for being aware of and feeling connected with a greater reality and ground of being. This conception of spiritual intelligence involves the wholeness of human capabilities for affect and for holding a felt experience without necessarily understanding it in rational terms. This wider sense of human experience is a valuable topic of study in transpersonal psychology, but is outside the scope of this book. Non-material concepts such as supernatural and imagined entities are commonly thought of as spiritual concepts. However, while they are not excluded from this model of intelligence, we can consider elements of the supernatural and the imagination as social objects and social others that relate to the upper levels of the value pyramid. It should also be pointed out that the elements of the top two levels are not exclusively spiritual. For example, although hope is traditionally defined as a theological virtue that can be related to a spiritual life, hope is not restricted to the spiritual and can also be meaningful, perhaps in a different way, in a secular and atheistic context. When a person hopes that peace and justice will prevail throughout the earth, that is a different kind of hope than when a child hopes their parents will buy them a horse. The former is a spiritual concern, even if it is rooted in very practical and ethical concerns, while the latter is about material possession that may result in a greater sense of personal well-being.

2.6 Summary We have briefly surveyed the most well-known models for representing needs and values. These models feature a hierarchical or geometrical structure to illustrate some kind of priority or ordering between elements, although there is some flexibility in the model to represent different circumstances or individual differences. All the elements of these models are important, however, the order in which needs are satisfied and values motivate action is not rigidly defined. We do not see people attempting to satisfy all their biological needs before moving on to safety needs followed by belonging needs. Imposing a hierarchy might also imply to some readers, perhaps not deliberately, but certainly incorrectly, that the higher needs are ‘optional extras’
that are not strictly needed for survival. The more one looks at this, the less useful a hierarchy seems. Similar comments can be made of the oppositional diagram of the Schwartz Basic Values. Whether categories of values can be arranged in oppositional form depends upon how the categories are defined and used. It is true that an oppositional structure is often observed, but this could be more to do with coherence of values within a society rather than an ontological imperative. For our purposes, we prefer not to impose a geometrical structure on these elements, but instead to use them in a simulation model and observe whether any structure emerges from the activity of agents in the simulation.
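
To indicate what observing emergent structure might look like in practice, the hedged sketch below correlates each value quantity with a behavioural outcome (here, the number of friendships an agent ends a run with) across the population; the function name, the use of NumPy, and the choice of outcome are our assumptions for illustration only.

import numpy as np

def value_outcome_structure(value_matrix, friend_counts):
    """value_matrix: one row per agent, one column per value quantity.
    friend_counts: the number of friendships each agent held at the end of a run.
    Returns the correlation of each value quantity with the outcome; a consistent
    pattern of positive and negative correlations would be one simple sign that
    structure has emerged from the agents' activity rather than being imposed."""
    return np.array([np.corrcoef(value_matrix[:, j], friend_counts)[0, 1]
                     for j in range(value_matrix.shape[1])])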

References

1. Almquist, E., Senior, J., Bloch, N.: The elements of value. Harvard Business Review, pp. 46–53 (2016)
2. Clocksin, W.F.: Memory and emotion in the cognitive architecture. In: Davis, D.N. (ed.) Visions of Mind: Architectures for Cognition and Affect, pp. 122–139. Idea Group Publishing (2005)
3. Emmons, R.: Is spirituality an intelligence? Motivation, cognition, and the psychology of ultimate concern. Int. J. Psychol. Religion 10(1), 3–26 (2000)
4. Emmons, R.: The Psychology of Ultimate Concerns: Motivation and Spirituality in Personality. Guilford Press, New York (2003)
5. Maio, G.: The Psychology of Human Values. Routledge (2016)
6. Malhotra, N., Shotts, K.: Leading with Values: Strategies for Making Ethical Decisions in Business and Life. Cambridge University Press (2022)
7. Maslow, A.H.: A theory of human motivation. Psychol. Rev. 50(4), 370–396 (1943)
8. Maslow, A.H.: Motivation and Personality. Harper and Row, New York (1954)
9. Maslow, A.H.: Religions, Values, and Peak Experiences. Ohio State University Press (1964)
10. Peterson, C., Seligman, M.E.P.: Character Strengths and Virtues. Oxford University Press (2004)
11. Schwartz, S.H.: Universals in the content and structure of values: theory and empirical tests in 20 countries. In: Zanna, M. (ed.) Advances in Experimental Social Psychology, vol. 25, pp. 1–65. Academic Press (1992)
12. Schwartz, S.H.: An overview of the Schwartz theory of basic values. Online Read. Psychol. Cult. 2(1) (2012)
13. Smith, Q.: The Felt Meanings of the World. Purdue University Press (1986)
14. Tanyi, R.: Towards clarification of the meaning of spirituality. J. Adv. Nurs. 39(5), 500–509 (2002)
15. Vernon, M.: Spiritual Intelligence in Seven Steps. Iff Books (2020)
16. Watts, F., Dorobantu, M.: Is there 'spiritual intelligence'? An evaluation of strong and weak proposals. Religions 14(265), 1–12 (2023)
17. Wiseman, H., Watts, F.: Spiritual intelligence: participating with heart, mind, and body. Zygon 57(3), 710–718 (2022)

Chapter 3

Personhood and Relationality

Abstract This chapter defines the main concepts around personhood and relationality that form the key foundations of android intelligence. Ideas about relationality are generalised to include relationships that go beyond the relationships between humans.

3.1 Introduction This chapter defines the main concepts around personhood and relationality that form the key foundations of android intelligence. A kind of circularity is inevitable in the definition of personhood and relationship, as persons are defined by the relationships in which they participate, and relationships are defined by the persons that participate in them. This circularity is not a flaw in reasoning, but it points to how each concept cannot be defined or understood in isolation. It is also important to remember that personhood and relationality are manifested within a social and cultural context, and cannot be separated from it. Yet, while needs and values are defined by the culture as suggested in the previous chapter, we will also refer to needs and values as relating to the internal states of individual entities that are motivated to perform personhood and relationality within the cultural context. There is a difference between abstract values that circulate in the culture, and 'concrete' or 'reified' or 'instantiated' values that have become a part of the agent's internal value system.

Personhood and relationality are closely interlinked. We set out an information-processing stance on personhood and relationality, in which persons in relationships within the context of a society exchange and store information about each other in order to survive and flourish. These information-bearing processes are extended in time, as any activity between persons takes place as a sequence of events over a duration of time. A computational model attempts to understand the information processing required between generalised persons that here are called agents. Key concepts of this information-processing view are the disposition of agents and the circumstances in which agents find themselves. The disposition of an agent is defined by its concerns: Its values, attitudes, affinities, identities, habits,
traits, beliefs and motivations. Deep questions are addressed in biology, psychology and philosophy to explain how human dispositions are formed. Some elements of the human disposition are formed through biological givens, others develop through experience, and others are recruited from human society. The circumstances are defined by the situation in which a human finds itself in the physical world, and how its life is sustained through aspects of the physical world and the other entities it encounters. The ecological idea of a 'niche' is useful here, and J.J. Gibson's ideas of 'ecological psychology' [10] have been enormously influential. In this connection, it is useful to remember that there is a mutuality between the disposition of an agent and its circumstances: The agent's disposition is affected by the circumstances it is in, and it is in those circumstances because its disposition propelled it to be. As Gibson wrote (ibid., p. 4),

...it is often neglected that the words animal and environment make an inseparable pair. Each term implies the other. No animal could exist without an environment surrounding it. Equally, although not so obvious, an environment implies an animal (or at least an organism) to be surrounded.

The model of disposition and circumstance explored in this book and the Affinity computer simulation handles only a tiny fraction of what a typical human might encounter. While the model has been informed and inspired by some parts of what is currently known about the human situation, it is by no means a model of human personhood and relationality. The model simplifies and abstracts away from details that are important for humans, and provides generalisations so it can offer an account of some information processing principles for personhood and relationality among a wider variety of agents such as non-human and non-living entities. We might expect evolution and development to have taken advantage of similar principles in human persons and society.

3.2 Personhood Personhood has been used as a concept to understand what it means to be a human person in a wide variety of areas including psychology, sociology, anthropology, philosophy, theology, medicine, and law [20]. Each area has different reasons for defining personhood, and different criteria apply in these areas. Criteria or ‘benchmarks’ of personhood cover a vast range including genetic criteria, cognitive criteria, acting as a moral agent, responsibility under law, having human characteristics, and so forth. Humans attribute personhood to a wide variety of other living beings such as great apes and dogs. The idea of personhood extends to nonliving things and natural events: Some societies think of the sun, winds, thunder, and even rocks as persons.

Most treatments of this phenomenon start with the idea of anthropomorphism [25]. Human beings frequently 'anthropomorphise', or attribute anthropomorphic features, motivations and behaviours to other animals, artifacts, and natural phenomena. Anthropomorphism seems to exist along a continuum [18]. Epley [9] distinguishes between strong and weak forms of anthropomorphism: In the strong form, the human believes that the nonhuman agent has humanlike traits and explicitly endorses those beliefs; the weaker form is described as an 'as if' type of metaphorical reasoning that is still based upon a belief. They conclude,

...the difference between weak and strong versions of anthropomorphism, we suggest, is simply a matter of degree regarding the strength and behavioral consequences of a belief, not a fundamental difference in kind (p. 867).

However, more recent work [1] examines why it is useful to characterise anthropomorphism as an aspect of relationality and not as a form of belief, and we return to this point later. What we need to consider here only starts with anthropomorphism, but must move to what is called personification. Humans not only attribute anthropomorphic features to others, but they also attribute personhood to others. As mentioned above, the term 'person' can cover a number of categories that have social, legal and moral consequences. In this book we set aside the concepts involving the term 'legal person', and we will not go into the arguments about what constitutes moral persons, as interesting and important as these concepts are. Our focus instead is on the question of how it is that humans are willing to consider a wide variety of entities as persons. There could be good reasons why we do this: To explain and predict the actions of others, and to connect socially with others. There seems to be a basic human cognitive capability to See Others As Persons (SOAP) [5], even when the 'others' are not humans. One idea is that SOAP is a capability grounded in Theory of Mind (ToM), the idea that humans (and to an extent some animals) understand other people by attributing mental states to them. This is also closely related to the idea that empathy is a way of understanding the emotions of other people: Both ToM and empathy are ways that humans can explain and predict the behaviour of others. While ToM is about inferring the mental life of others, empathy is about inferring the emotional life of others. The problem with ToM and empathy as a mechanism for SOAP is that humans are able to attribute minds and emotions to various kinds of nonhuman and nonliving entities that possess neither minds nor emotions. Instead, minds and emotions are somehow implied. One fascinating and comprehensive study [16] looks for an explanation for personification in terms of beliefs about an inner essence, feelings of kinship, or perceived effect upon society. That study moves the focus from an individualistic anthropological model of anthropomorphism to a model that is based on persons in the context of society. A recent study [1] makes a further shift to argue that anthropomorphism is
not grounded in specific belief systems but rather in interaction. In interaction, a nonhuman entity assumes a place that is usually given to a human interlocutor, which means that it is independent of the beliefs that people may have about the nature and features of the entities that are the subject of anthropomorphism. Without changing anything about that study, it is possible to extend the idea from anthropomorphism to personification. Personification, or to See Others As Persons, is therefore grounded in interaction, during which a non-human entity assumes a place that is usually given to a human when they interact in the context of a society. And, by interaction we include not only proximal verbal and/or behavioural contact, but interaction in the imagination. Our reason for investigating personification is not primarily to better understand how humans do that amazing thing, but to understand the information-processing requirements of personification in the form of a computational model. Such a model would not be intended as a model of human cognition, but it may help in understanding a process that is still a contested topic in cognitive psychology. However, our main purpose is to understand personification as a basic information-processing task that could be the foundation for android intelligence. We will use a rather limited definition of personhood in an information-processing context that will be generalised to non-human and intangible entities, and take for granted that humans can See Others As Persons, even when the others could be human or non-human, living and non-living, tangible and intangible, real and imaginary. However, Turner [23] addresses the more general questions ‘What is a human?’ and ‘What is a person?’ in the context of androids in a way that is consistent with this book. Our starting point is, as always, relationality. It could be said that the chief end of a person – whether human or otherwise – is to participate in relationships. We consider personhood not as an ontological or moral category, but as a performance that is shaped by cognition and contingent sociocultural contexts. There is a moral convention that humans are persons by default, even if they perform personhood in diverse and limited ways that depend upon mental and physical capabilities. One of the aims of artificial intelligence research is to design robots that are more like humans. Machines are not persons by default, but in principle an appropriately designed machine could perform personhood in a way accepted as sufficiently similar to human personhood. Performance may be an unfamiliar way to describe personhood. We prefer this term because it gets away from the term ‘behaviour’, which has come to mean many different things. A performance suggests it takes place for a specific reason, in this case, to perform personhood within social interaction. Using the term ‘performance’ comes with some risks. First, it can imply that the act of performing personhood may be insincere, not authentic, or deceptive: It is not a ‘real’ personhood, merely a performance. This is not how we use the term here. Second, it might imply that performance is a conscious, knowing, or intentional act. We do not use the term this way. A performance of personhood may have some conscious elements, but it will derive from pre-conscious cognition. The idea that persons consciously or unconsciously manage their demeanour in order to engage in social interactions has been studied since at least the 1950s [11]. 
While our use of 'performance' is not identical to Goffman's 'presentation', there are some common factors. Finally, performance is a concept that can be generalised to personhood in non-human entities, where 'behaviour' is more commonly associated with humans and other animals.

Following the idea of MacDorman and Cowley [19] that engaging in long-term relationships is the key benchmark of the human person, we may ask how non-human entities can be considered as persons. The distinct identity of the human as a person is given by the relationships in which it is engaged, whether for biological, social or political reasons. Through relationships over time, the human assembles a long-term existence that develops meaning and purpose. Barresi [3] provides details of this assembly in humans and suggests how robots might accomplish something similar. In this way, humans are defined by their relationships. While humanness may provide distinctive ways that human persons can engage in relationships, we wish to consider the possibility that non-humans can fluently engage in relationships with humans and other non-humans.

Throughout this book, an android is defined as a human-like robot that humans would accept as equal to humans in how the robot performs and behaves in society. An android as defined in this way is not considered to be imitating a human, nor is its purpose to deceive humans into believing that it is a human. Instead, the appropriately programmed android self-identifies as a non-human with its own integrity as a person. Although androids are currently a hypothetical concept, the definition of person can be generalised to include androids. Therefore, we consider the term person to include humans and androids. This idea of personhood includes humans, androids, and imagined persons. Humans are persons by default, and androids are appropriately programmed to perform a recognisable personhood. Imagined entities have personhood by implication, but they cannot perform personhood in a physical way. They are recognisable as persons through accounts in literary works, memory and imagination, and through performance by humans who interpret the implied personhood of imagined entities in art forms. In this way, persons can also perform as proxies of imagined entities. Humans are able to recognise personhood in imagined entities. This capability may be connected to the fact that human relationships are not always directly observable. As humans we assume that other humans are engaged in performance of personhood even when we cannot observe them doing so. The ability to make this kind of assumption may have the additional effect of being able to recognise personhood in non-human entities or in literary works.

An important aspect of personhood is that persons have one or more identities. An identity is a concept attributed to or identified by a person in society as a result of disposition and circumstances. A person may identify themselves as a friend, a caregiver, a mother, a righteous person, a criminal, a leader. Some identities are trivial and perhaps short-lived, such as being an avid follower of a celebrity: A human might style their appearance and behaviour in a way that reflects their affection and loyalty to the celebrity. Humans adopt a number of identities over a lifetime that relate to societal expectations, felt meanings, significant concerns, and personal decisions.

3.3 Relationality People participate in an immense variety of relationships with other persons and other entities or objects: Human and non-human, living and non-living, tangible and intangible, real and imagined. As far as human relationships are concerned, most of these relationships form around shared affinities or preferences with other humans, for example, a liking for a particular brand of a drink, or following a football team, or having the same hobby. Relationships may form for reasons of business or other shared activity. Humans may consider themselves to be in a relationship with abstract things, for example loyalty to a team (independent of its members), allegiance to a nation, devotion to a spiritual entity. Relationships may involve close emotional attachments to people, other animals, historical figures real or fictional, ideas, organisations, and imagined entities.

People name different types of relationships. We have friends, colleagues, partners, acquaintances, lovers, relatives, spouses, pets, and more. These types are not independent: A spouse is also a relative by law, a lover by mutual desire, a partner for practical reasons, and in some cases a colleague. There are different subtypes within these types that depend on factors such as proximity, social formality, physical intimacy, and the costs and benefits of forming and maintaining the relationship. The latter may be particularly relevant to business relationships. Parameters such as the purpose and strength of a relationship are not necessarily held in equal measure by all participants in a relationship, and a range of different social values and internal dispositions may be involved in forming and maintaining a relationship. All these aspects of relationship are included in what we term relationality.

Like anthropomorphism, relationality exists on a continuum. At one end of this continuum are the relationships between proximal living humans. These are the everyday relationships that we enjoy with friends, colleagues, family, and strangers. At the extreme other end are relationships between imaginary entities such as the deities of the Greek pantheon, whose activities are personified by humans and passed down the generations as stories. Between these two ends of the continuum, there are a number of different kinds of relationships. The precise location of a relationship on the continuum may be argued for legal, political or religious reasons, but that is not the point here. To give some examples, without committing to arguments about the precise location of these relationships on the continuum, we have the following:

• Relationships between humans that are not proximal, and which may be mediated by various forms of electronic communication or broadcast.
• Relationships between humans and former humans (the dead) and not yet fully developed humans (the unborn).
• Relationships between humans and living non-human animals that are believed to exhibit some elements of personhood, such as great apes and dogs.
• Relationships between humans and machines with which humans can interact, such as chatbots and robots.
• Relationships between humans and living non-human organisms, such as the account [8] of Julia's close relationship with a 200-foot-tall redwood tree in a California forest [13].
• Relationships between humans and non-living physical entities such as natural phenomena and rocks. Sometimes this is referred to as 'animism', but the same arguments about personification apply.
• Relationships between humans and imagined entities such as spiritual beings (angels, deities) implied as having an independent being.
• Relationships between humans and imagined entities such as characters in a novel, whose being depends upon the intentions of the author and the reader.
• Relationships between imagined entities such as spiritual beings implied as having an independent being and who interact with each other, such as the deities in a pantheon.
• Relationships that may form in the future between robots capable of personification.

As previously mentioned, our purpose is not to model relationality among humans, but to understand the information-processing requirements for relationality between any kinds of agents, which we term persons. Two key ideas are coupled in this model of relationships between persons: Performativity and recognisability. While personhood is considered a performance, moral convention dictates that humans are considered persons by default, regardless of their individual abilities. Persons are also able to recognise the performances of others and to decide what kind of relationship can be formed. The details of both performativity and recognisability will depend upon the culture in which these abilities are embedded. In a relationship between an android A and a human person B, both A and B can perform personhood, and both A and B can recognise the other as a person. This is possible because A and B both have capabilities for performativity and recognisability. While the human B is assumed to have innate abilities for performativity and recognisability, it is assumed that android A is appropriately programmed so that these abilities can develop. It is also possible that A and B can harbour moral values about the other that may affect both performativity and recognisability, but as MacDorman and Cowley (p. 380) point out, moral values do not concern what makes an entity a person, but what makes a person a better person from a particular moral standpoint. In our model, performativity and recognisability are influenced by the significant concerns held by the individuals.

There can be situations in which a relationship cannot be defined between persons, because recognisability is not symmetric. For example, human person B could form a deeply emotional and meaningful relationship with a character C from a printed book. C's performance of personhood subsists solely in the story that is told to B and B's imagination or internal model of C. The reader B can even test the personhood of character C by imagining how C might react in certain situations outside the book. In this example, there is a kind of relationship between a human person B and a fictitious entity C. While the personhood of C can be recognised by B, the recognition cannot be reciprocated. In this case, the human reader B can recognise a
person in B's reading of C's performance, and yet the relationship is defective because C is a fictional entity unable to recognise the reader B as a person. However, if B imagines a relationship with C, then B's imagination will also supply C's recognition of B. The human imagination seems to be able to supply all aspects of performativity and recognisability for imagined entities.

Affinity, the agent-based simulation software described in Part II, models three types of relationships. In all cases, whether or not an agent participates in one or more of the relationships depends upon four crucial factors. The first three of these factors we call the disposition of the agent (a data-structure sketch of these factors is given after the list of relationship types below):

• The value system of each agent, which is the set of states of basic values, attitudes, and social objects represented within each agent. The quantities associated with these states are initialised to random values when the simulation is initialised, and do not change during the course of a simulation run. However, there is nothing in the software design to prevent a future version from being able to change elements of the value system during a simulation run. This could be useful for representing ways in which an agent may learn from experience and change its value system accordingly.
• The internal economy of each agent (Chap. 6), which is a set of four elements that help to regulate the behaviour of the agent. The elements are analogous to hormones in the human endocrine system, although we stress that the economy is not intended to be a model of the human endocrine system. The elements of an agent's economy fluctuate in value depending upon events affecting the agent that take place during the simulation.
• The narratives of each agent (Chap. 7), which determine the ways in which agents perform sequences of actions extended in time.
• The current circumstances each agent is in during the simulation. Agents move in a three-dimensional world bounded by walls, a floor, and a ceiling. Agents may find themselves in close proximity to other agents, or may move to become closer or farther away from other agents. Circumstances may affect how relationships are formed and dissolved.

Whether an agent is disposed to engage in a relationship depends upon the first three factors. The types of relationships modelled by Affinity are as follows:

• Friendships. Two suitably disposed agents may form a friendship bond. Agents may form as many friendships as are allowed by their dispositions. The main feature of a friendship between two agents is compatibility between their value systems. Some agents do not develop friendships, because they have little need for security or because their values and social objects of interest are not compatible with those of other agents. Other agents may form large friendship groups. Friendships can also be broken, and possibly renewed, again based on the disposition of the agents. This is not intended as a simulation of human friendship with all its complexity and different layers. In Affinity, there is only one level of friendship, but it would be fair to point out that some of the friendship bonds formed are more flexible than others, again depending upon the value systems of the agents involved. Instead of
modelling explicit levels or layers of friendship, one purpose of the simulation is to explore whether such stratification will eventually emerge as the simulation proceeds.
• Caregiving/receiving. Depending on disposition, any agent may require care and/or may give care to another agent. Caregiving/receiving is always a temporary and relatively short-term relationship that forms for a particular reason (needing care) and is dissolved when the need is satisfied (sufficient care has been received). Agents may offer care to needy friends (their 'in group') if their benevolence value tends to high degree and intensity, and/or they may offer care to non-friends (their 'out group') if their universalism value tends to high degree and intensity.
• Leading/following. The leader-follower relationship has been widely studied in different contexts. One context is in the workplace [17], where a good working relationship between managers and workers is important so that goals of the employer or organisation may be achieved. Our treatment of leader-follower is not related to the workplace model, but to the world of issues or propositions about which leaders and followers may feel strongly. One such example involves political issues, where leaders may lead a political party and be elected to office. Followers of this leader may act as activists for the issue or as voters who will support the leader's bid for office. Depending on disposition, any agent may become a leader of one or more agents, or a follower of one or more agents. The leader/follower relationship is a bond similar to friendship, but leaders and followers form the bond through certain elements of the value system such as the power value, and through the importance component of value and attitude states. The type of value system compatibility needed for friendship does not apply to leader/follower relationships, but other types of compatibility and agreement on issues do apply.

The implementation details of these relationships are given in Part II.
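
The sketch below is purely illustrative: it is written in Python, and all of the class, field and element names are our own assumptions rather than the Affinity source code (for example, the four economy elements are given placeholder names because their actual names are introduced in Chap. 6). It shows one plausible way of holding the disposition factors described above and of testing the value-system compatibility on which friendship depends.

import random
from dataclasses import dataclass, field
from enum import Enum, auto

VALUE_NAMES = ('self-direction', 'stimulation', 'hedonism', 'achievement', 'power',
               'security', 'conformity', 'tradition', 'benevolence', 'universalism')

class RelationshipType(Enum):
    FRIENDSHIP = auto()
    CAREGIVING = auto()        # temporary; dissolved once sufficient care is received
    LEADER_FOLLOWER = auto()

@dataclass
class Agent:
    # Disposition: value system (fixed for a run), internal economy, narratives
    value_system: dict = field(default_factory=lambda: {n: random.random() for n in VALUE_NAMES})
    economy: dict = field(default_factory=lambda: {n: 0.5 for n in ('element_1', 'element_2', 'element_3', 'element_4')})
    narratives: list = field(default_factory=list)   # sequences of actions extended in time (Chap. 7)
    # Circumstances: position in the bounded three-dimensional world
    position: tuple = (0.0, 0.0, 0.0)
    friends: set = field(default_factory=set)

def value_compatibility(a, b):
    """Illustrative measure: 1.0 minus the mean absolute difference between value quantities."""
    diffs = [abs(a.value_system[n] - b.value_system[n]) for n in VALUE_NAMES]
    return 1.0 - sum(diffs) / len(diffs)

def may_form_friendship(a, b, threshold=0.7):
    # Friendship requires suitably disposed agents with compatible value systems;
    # a fuller model would also consult the economy, narratives and proximity.
    return value_compatibility(a, b) >= threshold

Whether this threshold test resembles the compatibility test actually used by Affinity is left to Part II; the point of the sketch is only that disposition is ordinary per-agent state, while relationships are computed from pairs of such states.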

3.4 Social Objects Social objects are the things around which persons form relationships. Social objects represent the topics about which entities may have opinions, affiliations, and preferences. Compatibility between social object states is one factor in forming relationships between entities. Examples include a shared activity such as a business venture, a shared affiliation such as supporting a football team, a shared preference such as a favourite colour, and a shared event such as a meal or ritual. One or more social objects may be involved in a relationship between two entities. Following [21], elementary social situations can be modelled as the social triad consisting of three components: The Ego (or Self), the social Object, and the Alter (or Other). In this model, the three components of the triad influence and are influenced by each other. Moscovici’s triad is an extension of Allport’s [2] classic definition of social psychology as ‘an attempt to understand and explain how the thought, feeling, and behaviour of individuals are influenced by the actual, imagined, or implied presence
of others', in that the social object included in the triad model Self-Object-Other becomes fundamental for social psychology. The nature of social objects continues to be a topic of current research [14]. While social object theory per se has yet to inform the AI research mainstream, one recent study [24] has implemented the equivalent of social objects in a dialogue system that elicits user preferences on a set of pre-defined topics. The system contains six topics about Japanese popular culture for the dialogue, and each topic has three attributes. Thirty facts about each topic were prepared. The topics and attributes (shown in brackets) were as follows: professional soccer players (appearance, personality, playing style), dogs (appearance, personality, size), Japanese popular music singers (appearance, voice, performance), Pokemon characters (appearance, personality, strength), fashion brands (price, design, name), and tourist destinations in Japan (history, landscape, atmosphere). This study showed how dialogue between a robot and human could be used to elicit the human's likes and dislikes on a range of topics. What is relevant for this book is their model of social objects that consists of a topic and multiple attributes. Even so, topics and attributes alone are not rich enough to model the wider reality of relationships.

Like significant concerns (Chap. 2), the states of social objects can also be modelled using the parameters importance, degree, and intensity. The importance of a social object is what the person reports as important to their activities. The degree is the amount held when making decisions, and like basic values, importance and degree may be quite different. The social object may be used with an intensity, which governs the amount of action taken regarding this social object.

We will also add the idea of an aspect to social objects. To illustrate this idea, let us use the topic 'favourite colour' as a social object. Many children have a favourite colour and engage in conversation with each other about their favourite colour. There is an active literature in psychology research on human colour preference [12, 22]. Suppose person A's favourite colour is red (red is an attribute), and yet A is willing to form a friendship with person B whose favourite colour is blue, and another friendship with C whose favourite colour is orange. This does not imply that favourite colour is not a social object, and it does not mean that a friendship between A and B is somehow defective. It means that A has a preference for its own favourite colour, but it is able to accept as friends others with a wider range of favourite colours. This means that the idea of a social object must be augmented to include two aspects, as follows. Using the example of favourite colour, agents can have a precept, which is a preference for their own favourite colour, and they can have an accept, which is a preference over the possible favourite colours of other agents. For example, an agent may have blue as a favourite colour, but it can be compatible (through its accept aspect) with entities having a wider range of favourite colours. Using normal distributions, or alternatively multi-modal distributions, it is possible to model a realistic range of precepts and accepts.

The obvious use of social objects is when modelling relationships between people.
For example, people can share an opinion about a celebrity or an ethical issue, they can share an affiliation with a football team or a political party, and they can share a preference for a colour or type of meal. It should be possible to model the
compatibility or agreement between people through the social objects they have or do not have in common. Compatibility is not a straightforward concept, as two people may have preferences that bring them together, and they form a friendship because they are willing to overlook certain social objects for which they are not in agreement. Similarly, two people may have strong preferences that bring them together, but one person may notice a certain characteristic about the other that prevents a friendship from forming. In common parlance, such characteristics are called ‘red flags’. And, as seen previously in the example about colour preference, two people can become friends despite not having the same favourite colour. Some social objects seem to be more important than others when deciding compatibility. Social objects can also be used when modelling relationships between humans and non-humans and between humans and imagined persons. One example is that people can form deep and meaningful relationships with characters in a literary work such as a book or a film. Using social objects, the relationship between a reader and a character in a book might be modelled in one of two ways. Using the triad notation Self, Object, Other, one model is the triad A, book, B. Here the book is an object that brings together A and B in relationship. However, this model assumes that mutual recognisability is not essential to the definition of a relationship: While the reader of a book is able to recognise a character as a person, the character in the book is not able to recognise the reader. An alternative and more satisfactory model is B, A, B, in which the Self has a reflexive relationship with itself, using character A in the book as an object around which B forms a relationship with B’s internal model of A. This also makes sense from a cognitive modelling point of view, because the reader is actually forming a relationship with its own mental model of the book character. This is true even in relationships between two humans A and B, in that A is actually forming a relationship with A’s mental model of B. Relationship triads can be placed on a continuum relative to other triads (Sect. 3.3). The relationship triad involving a reader and a character in a novel can be represented as a point near one extreme on the continuum of relationality. A friendship between two living proximal humans is probably the most commonly experienced and studied relationship. On a relationship triad map, such friendships will cluster together at some distance from other more remote triads such as the reader and a character in a novel. Other kinds of relationships will cluster at an intermediate point. For example, humans have relationships with their pets, in which a dog and its human keeper both have some degree of performativity and mutual recognisability, although the human’s assumptions about the dog’s performativity and recognisability may be marked by projection and over-interpretation of the dog’s behaviour. Furthermore, canine cognitive processes around performativity and recognisability are not fully understood. However, it is fair to point out that relationships between humans are also marked by projection and over-interpretation, and that human cognitive process are not fully understood. And yet, despite these issues, it is intuitively clear that the human-dog relationship falls somewhere between a human-human relationship and a reader-character relationship. 
The dog, unlike a character in a book, is a tangible entity capable of biological and physical activity outside its keeper’s internal model. Apart from the printed word, a character in a novel does not have any performativity outside the reader’s internal model. The human-dog relationship will also involve social objects such as food and toys, so a triad representation of one relationship involving food might be ⟨keeper, food, dog⟩.
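As an illustration of the triad notation only, and not part of the Affinity implementation described in Part II, a relationship triad can be sketched as a small generic data type in Java. The names Triad and isReflexive are hypothetical.

// A minimal, hypothetical sketch of the <Self, Object, Other> triad as a data type.
// Names (Triad, isReflexive) are illustrative and not taken from the Affinity source.
public record Triad<S, O, T>(S self, O object, T other) {

    // A reflexive triad such as <B, A, B>, where the Self and the Other are the same entity.
    public boolean isReflexive() {
        return self != null && self.equals(other);
    }

    public static void main(String[] args) {
        Triad<String, String, String> readerAndBook = new Triad<>("B", "character A", "B");
        Triad<String, String, String> keeperAndDog = new Triad<>("keeper", "food", "dog");
        System.out.println(readerAndBook.isReflexive()); // true
        System.out.println(keeperAndDog.isReflexive());  // false
    }
}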

3.5 Spiritual Relationships

It is possible to consider relationships that go beyond the common relationships between humans and their pets, and even beyond those between humans and tangible entities. For centuries, some relationships have been described as spiritual relationships. Two kinds of entities may participate in the traditional view of spiritual relationships: human persons, and entities taken to be imaginary or supernatural. The relationships called spiritual relationships may consist entirely of imaginary/supernatural entities, such as relationships between the deities in a mythological pantheon, or of imaginary/supernatural entities and humans, or of humans exclusively. While imaginary/supernatural entities may not have a physical presence in the same way as humans, they can be imagined to have an implied presence. Relationships can therefore be further generalised to include non-human imaginary entities.

As Robin Dunbar has pointed out [6], feelings aroused in religious contexts are similar to those associated with intense romantic relationships. Throughout history, mystics have said they are ‘in love with God’, for example John van Ruysbroeck (1293–1381), Julian of Norwich (1343–c. 1416), Teresa of Ávila (1515–1582), and Thérèse of Lisieux (1873–1897). Standard Christian hymns express the idea of Jesus as a friend, for example ‘What a friend we have in Jesus’ (Joseph Scriven, 1855) and ‘I’ve found a friend in Jesus’ (Charles W. Fry, 1881). Similar sentiments are expressed in other religions. The point here is not only that friendship bonds with implied entities can be experienced, but also that religious practice forms social communities that provide a context for spiritual relationships.

As suggested in Sect. 3.2, a key characteristic that enables participation in any relationship is personhood, and the key cognitive process needed for this participation is personification, the attribution of personhood to events and entities. Anthropomorphism, from which personification derives, is thought to be an innate tendency in humans that serves to explain the actions of others and to connect socially with others, but the use of anthropomorphic language in science has traditionally been suspected of a lack of objectivity, of credulousness, or of leading to inaccurate conclusions. The approach taken here is to accept anthropomorphism and personification as being among the many meaning-making cognitive processes, and as such they are not like a logical deduction that is required to return objective or accurate conclusions. Here, meaning making is not considered to be necessarily enabled by a deductive algorithm for solving problems within a truth-theoretic framework, a point that particularly needs to be considered when discussing spiritual relationships, which are partly or wholly intangible, imaginary, creative, and transcendent of material concerns.

Some relationships between humans are also described as spiritual relationships, for example in the treatise Spiritual Friendship by Aelred of Rievaulx, circa 1160 CE [7], because of the particular concerns around which the relationship is defined. Aelred wrote from a monastic context within which the idea of spiritual friendship is to be understood as formed between two same-sex celibates. While this is not taken to be a basis for modelling android relationality, it is interesting to note that the concept of a spiritual relationship has deep roots in human relationality, and Aelred’s work is a touchstone in current thinking within the community that explores same-sex attracted relationships [4].

Traditionally, spiritual relationships involve humans, whether as direct participants with each other or with supernatural entities, or as observers of a divine pantheon or a ‘divine economy’. In the spiritual relationships that involve intangible and imagined entities, some of the concerns are held to be greater than those achievable by a human. For example, imagined deities may be capable of superhuman powers and may have attributes such as omniscience, omnipotence and eternal existence, but this does not prevent humans from engaging in some types of relationships with them. However, to model this diverse range of relationships, it is necessary to make two simplifications that increase generality. First, relationships are considered to take place between persons, which may be human or non-human. Androids and imaginary/supernatural entities are included as persons, because they can be personified by an appropriate cognitive process. Such a process is innate in humans, and would be programmed in an android. Second, the triad model is generalised so that in some cases, which can be termed reflexive, the Self may be the same person as the Other, and the nature of the Object is not restricted to physical objects or shared concepts. Several examples are as follows.

• Relationships between deities in the Greek mythological pantheon. Such relationships usually involve entities with superhuman capabilities, and often portray a moral lesson or origin story.
• In Trinitarian Christianity, paraklesis, in which the Holy Spirit is understood as a paraklete (advocate, guide) for humans. The ‘economy’ of the persons of the Holy Trinity is another example.
• Spiritual friendship in the original sense of Aelred, which implies a friendship between same-sex celibates whose relationship is founded upon a mutual devotion to Christ.
• Self-transcendence, as an activity or event that involves the Self in reflexive relationship with itself, around a social object that could be a defined spiritual practice or an imagined entity such as a supernatural being.
• Spiritual practices that involve the Self with an Other understood to be a supernatural being, around a social Object such as a set of propositions or a meaning felt by the Self.
• The spiritual relationship that develops between the android Klara and her personification of the Sun, as described in the novel Klara and the Sun [15].
• The close and long-term relationship that sometimes exists between a bereaved person and a deceased loved one.

The point of these examples is not to claim that such relationships are in every way comparable to a relationship between living proximal humans, although they exist on the continuum of relationality, but to illustrate the diversity of the most general types of relationships commonly engaged in by humans, which are here termed spiritual relationships. Any theory of the information-processing requirements of personification and relationality needs to take this diversity into account. The reason for doing this here is not to pursue a theological understanding, but to ensure that any computational model has sufficient coverage of the human condition.

The reason for including spiritual relationships in the model is that human experience of such relationships suggests a connection at a level deeper than mere affinity with shared social objects. It is a deeper connection that is difficult to represent using the terms of the ten values of the Schwartz theory of basic values (Sect. 2.2), but agents who are motivated to seek spiritual connection might be associated with a high degree of both universalism and the need for security.

How much influence tradition has on the disposition of those seeking spiritual connection can be debated. Some spiritual practices rely heavily upon the importance of tradition to inform the practice, while others reject historical tradition in favour of novel expressions of spiritual practice. In both cases tradition could be considered an important influence, if only because it is accepted by one group and rejected by another. The rejection of a concern does not imply that the concern is not significant, but rather that it is of sufficient significance to an individual that the individual needs to reject it.

3.6 Formation of the Relationship

We model the formation of a relationship between persons as an exchange of information extended in time, during which two autonomous agents approach, appraise, and form a bond with each other. For the purposes of a computational model, we unpack this three-step process between agents A and B as follows. This is not a model of human relationship, but a model that represents the information involved in relationship forming in general. Relationship forming between real agents, whether human or android, will include other factors specific to circumstances. Relationship forming between a human A and an imaginary entity B may use this same basic mechanism, but in these cases the actions of B are imagined by A in a way that depends upon A’s internal model of B.

1. The Approach. Agents A and B come into proximity with one another. The physical distance is such that agents can form an initial impression. In humans, factors such as eye contact and physical appearance can be important, and initial clues to affection or danger may be involved. A decision is made by one agent to offer information to the other, or to break off the approach. By convention, agent A is the first to initiate communication. If A and B are already in a relationship, then there is no need to continue this process, as a bond has already been established. If A and B have been in a relationship that is now dissolved, whether to proceed with this process will depend upon the dispositions of A and B. If A has previously attempted to form a relationship with B but has been refused, whether A tries again likewise depends upon its disposition. In the implemented simulation, the attitude forgiving and the basic value benevolence are among the concerns that influence the disposition to re-form a relationship.

2. The Appraisal. Agent A makes an appraisal of agent B. Agent A compares the social objects in its value system with any social objects of B that can be discerned. In the simulation, the only things A can know at this stage are the accept aspect of A’s favourite-colour, the precept aspect of B’s favourite-colour, and B’s importance parameter for this social object. Agents can also ‘read’ each other’s displayed flags, which can inform about an agent’s allegiance to a particular cause or issue. If A’s appraisal of B is favourable, then A proceeds with a request to B. B conducts an appraisal of A to decide whether to accept or reject the request. If B rejects the request, then the process concludes. It is important that A and B both record in their own memories the fact that a request has been rejected. Otherwise, A and B would repeatedly attempt to form the relationship, oblivious to the fact that an attempt was previously made.

3. The Bonding. If B’s reply to A’s request is affirmative, then a bond is formed. A and B both store a memory of the connection, and the internal economies (Chap. 6) of the agents are updated to reflect the reward of bonding.

The agent-based simulation uses a message-passing protocol to simulate the exchange of requests and replies between agents, as described in Chap. 4. The narrative that agents use to implement the friend-formation process is given in Chap. 7.

A relationship has a life-cycle such that, after the bond is formed, the dispositions of the agents change as the result of events in which they are involved. It is possible for a bond to be dissolved, depending on factors such as proximity during the relationship and significant changes in disposition. When a bond is dissolved, the internal economies of the agents are changed to reflect an increase of stress, and either a reward or a loss of reward depending on disposition. It might seem odd that the dissolution of a relationship can be rewarding, but this can happen to an agent that holds a very high degree of the basic value self-direction and a relatively low degree of need for the basic value security. It is important that A and B both store in their own memories the fact that a bond has been dissolved. Otherwise, A and B would repeatedly attempt to form the relationship, oblivious to the fact that a relationship had previously been formed and dissolved. Whether a reconciliation is possible depends on details of the agents’ dispositions that are discussed later.
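To make the three-step process concrete, the following is a minimal Java sketch of how approach, appraisal, and bonding might be organised, including the memory of refusals described above. The class names, the placeholder appraise method, and the data structures are assumptions made for illustration; the actual Affinity narratives (Chap. 7) and message passing (Chap. 4) differ in detail.

// Hypothetical sketch of the three-step formation process; not the Affinity source.
import java.util.HashSet;
import java.util.Set;

class FormationSketch {
    static class Agent {
        final String name;
        final Set<Agent> friends = new HashSet<>();
        final Set<Agent> refusedBy = new HashSet<>();   // agents that previously refused this agent
        Agent(String name) { this.name = name; }

        // Appraisal: placeholder for comparing discernible social objects (e.g. favourite-colour).
        boolean appraise(Agent other) { return true; }

        // Steps 1-3: approach, appraisal, bonding, with memory of rejections.
        void tryToBefriend(Agent b) {
            if (friends.contains(b) || refusedBy.contains(b)) return;  // already bonded or refused
            if (!appraise(b)) return;                                  // A's appraisal unfavourable
            if (b.appraise(this)) {
                friends.add(b); b.friends.add(this);                   // bond; economies updated elsewhere
            } else {
                refusedBy.add(b);                                      // remember the rejection
            }
        }
    }

    public static void main(String[] args) {
        Agent a = new Agent("A"), b = new Agent("B");
        a.tryToBefriend(b);
        System.out.println(a.friends.contains(b)); // true with the placeholder appraisal
    }
}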

3.7 Summary

This chapter has introduced terminology for describing persons and their relationships that sets the framework for the computational model of Part II. Personhood and relationality are activities that involve the processing and exchange of information. Personhood is not an ontological category, but is a performance manifested only within the context of a relationship. Persons and relationships are marked by performativity and recognisability. Relationships form around social objects within a social context. The idea of social object has been augmented to include a topic, a set of attributes, two aspects (precept and accept), and three parameters similar to those used when modelling significant concerns: importance, degree, and intensity. Relationality is generalised to include relationships between any entities, whether human or non-human, real or imagined, alive or dead. What can be called a spiritual relationship is the most general manifestation of relationality. One model of relationship formation is given by a basic three-step process of approach, appraisal, and bonding.

References

1. Airenti, G.: The development of anthropomorphism in interaction: intersubjectivity, imagination, and theory of mind. Front. Psychol. 9(2136), 1–13 (2018)
2. Allport, G.W.: The historical background of social psychology. In: Lindzey, G. (ed.) Handbook of Social Psychology, vol. 1, pp. 3–56. Addison-Wesley, New York (1954)
3. Barresi, J.: On building a person: benchmarks for robotic personhood. J. Exp. Theor. Artif. Intell. 32(4), 581–600 (2020)
4. Bennett, J.M.: Friendship: same-sex attracted, single, and Aelred of Rievaulx. In: Singleness and the Church. A New Theology of the Single Life. Oxford Academic, New York (2017)
5. Dorobantu, M.: Artificial Intelligence and the Image of God: Are We More than Intelligent Machines? Cambridge University Press, in press
6. Dunbar, R.: The Science of Love and Betrayal. Little, Brown UK (2016)
7. Dutton, M.L.: Aelred of Rievaulx: Spiritual Friendship. Cistercian Publications (2010)
8. Epley, N., Schroeder, J., Waytz, A.: Motivated mind perception: treating pets as people and people as animals. In: Gervais, S. (ed.) Nebraska Symposium on Motivation, vol. 60, pp. 127–152. Springer, New York (2013)
9. Epley, N., Waytz, A., Cacioppo, J.T.: On seeing human: a three-factor theory of anthropomorphism. Psychol. Rev. 114, 864–886 (2007)
10. Gibson, J.J.: The Ecological Approach to Visual Perception. Psychology Press (1979)
11. Goffman, E.: The Presentation of Self in Everyday Life. Doubleday (1959)
12. Granger, G.: Objectivity of colour preferences. Nature 170, 778–780 (1952)
13. Hill, J.B.: The Legacy of Luna: The Story of a Tree, a Woman, and the Struggle to Save the Redwoods. HarperCollins, New York (2000)
14. Hindriks, F.: How social objects (fail to) function. J. Soc. Philos. 51(3), 483–499 (2020)
15. Ishiguro, K.: Klara and the Sun. Faber and Faber (2021)
16. Johnson, K.A., Cohen, A.B., Neel, R., Berlin, A., Homa, D.: Fuzzy people: the roles of kinship, essence, and sociability in the attribution of personhood to nonliving, nonhuman agents. Psychol. Relig. Spiritual. 7(4), 295–305 (2015)
17. Kellerman, B.: What every leader needs to know about followers. Harvard Bus. Rev. 85(12), 84–91, 145 (2007)
18. Kwan, V.S., Fiske, S.T.: Missing links in social cognition: the continuum from nonhuman agents to dehumanized humans. Soc. Cogn. 26, 125–128 (2008)
19. MacDorman, K.F., Cowley, S.J.: Long-term relationships as a benchmark for robot personhood. In: Proceedings of the 15th IEEE International Symposium on Robot and Human Interactive Communication, pp. 378–383. University of Hertfordshire, Hatfield (2006)
20. Martin, J., Bickhard, M. (eds.): The Psychology of Personhood: Philosophical, Historical, Social-developmental, and Narrative Perspectives. Cambridge University Press, Cambridge (2012)

21. Moscovici, S.: Psychologie Sociale. PUF, Paris (1984)
22. Palmer, S.E., Schloss, K.B.: An ecological valence theory of human color preference. Proc. Natl. Acad. Sci. 107(19), 8877–8882 (2010)
23. Turner, L.: Will we know them when we meet them? From human cyborgs to non-human persons. Zygon, in press
24. Uchida, T., Minato, T., Nakamura, Y., Yoshikawa, Y., Ishiguro, H.: Female-type android’s drive to quickly understand a user’s concept of preferences stimulates dialogue satisfaction: dialogue strategies for modeling user’s concept of preferences. Int. J. Soc. Robot. 13, 1499–1516 (2021)
25. Waytz, A., Cacioppo, J., Epley, N.: Who sees human? The stability and importance of individual differences in anthropomorphism. Perspect. Psychol. Sci. 5(3), 219–232 (2014)

Part II

The Affinity System

The following chapters give an overview of the implementation of a computational model called Affinity. The model takes the form of a simulation of a population of agents that form, maintain, and break relationships with each other. While this model has been informed by some aspects of human behaviour, it is not intended to represent a model of human personhood and relationality. Any model is a simplification that abstracts away from details of real life. These details are not unimportant, but they are not included in this particular model. The aim of this model is to explore possible reasoning processes of an android, and by doing so, to also explore a wider understanding of human persons. Implementation details and some novel control structures are introduced.

Chapter 4

The Computational Model

Abstract This chapter introduces Affinity, a computational agent-based simulation model implemented in Java, that simulates a community of agents that hold certain values concerning social objects and other concerns. Agents form different types of relationships with each other: friendship, care giving/receiving, and leader-follower. These relationships are informed by some aspects of human personhood and relationality, but are not exact models of human personhood and relationality. They simplify and abstract away from some important details of real human relationships in order to expose some of the information-processing principles that may underlie relationships in general. The following chapters provide more discussion of specific features of Affinity.

4.1 Introduction

Affinity is the name of a computer program for simulating a community of agents who form relationships with each other. Agents hold a set of values that influences their behaviour, and they may engage in relationships with other agents. The behaviour of agents is driven by a set of narratives internal to agents. Narratives can be compared to small algorithms with instructions taken from a set of four instructions representing a reduced control structure. Affinity has a graphical user interface that depicts the simulated world on the user’s display screen and provides for monitoring parameters of the population and of individual agents.

The simulated world is the interior of a closed box in which physical laws of motion are simulated. Agents are represented as coloured spheres that have a number of simulated physical attributes such as size, mass and velocity. Agents can also communicate with other agents. Agents propel themselves with a velocity governed by their current disposition. Laws of motion and constants such as gravity and friction enable the agents to bounce off the walls and floor of the box. Agents may come into proximity, and may stay together or bounce off each other depending on internal values and concerns. The speed with which an agent moves has been set to be slow enough that the user can follow what is happening in the simulation, but fast enough that interesting behaviour can unfold within about a minute. In practice this means that an agent at full speed can traverse the entire width of the environment in about 5 s (real clock time). Affinity can accommodate a population of 1000 or more agents, but typical experiments so far have used populations of 30–50 agents for ease of following the simulation visually.

One agent is a Sun, which traverses the ceiling from corner to opposite corner during a simulated daytime lasting one minute (in real clock time). Energy is emitted by the Sun, which is collected by agents to increase their internal energy levels. Daytime is followed by a simulated night-time lasting one minute (real clock time), during which entities do not gain energy, but are visible to the user. Affinity implements an event loop, with which the individual states of each entity are updated, with a cycle time (or tick) of 0.1 s (real clock time).

The behaviour of agents is determined by a collection of subsystems within the cognitive system of each agent, as illustrated in Fig. 4.1. The cognitive system has three main subsystems that determine the disposition of the agent at any given time:

1. A value memory for storing and recalling connections, social objects, basic values and attitudes.
2. An economy that stores and samples the current values of the elements of the internal economy (Chap. 6).
3. A set of narratives that govern the step-by-step sequences that determine the ways in which agents read input from sensors, make decisions and take actions, and update various quantities in the disposition. Narratives are introduced in Chap. 7.

In addition, input from the environment comes through sensors, and actions are taken through the effectors.

Affinity implements simplified models of relationships including friendship and care giving/receiving. The specific behaviour of an entity is governed by the narratives about these relationships together with a set of concerns that have been assigned to each entity. The particular dynamics of these relationships and how they unfold is influenced by the value states that have been assigned randomly to the values held by each entity.
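The grouping of these subsystems can be sketched in Java as follows. This is an illustrative skeleton only: the class and field names are assumptions, not the actual Affinity source, but it shows how a value memory, an economy, and a set of narratives might sit inside each agent and be driven once per tick.

// Illustrative skeleton of an agent's cognitive system (value memory, economy, narratives).
// Class and field names are assumptions, not taken from the Affinity implementation.
import java.util.ArrayList;
import java.util.List;

class AgentSketch {
    // 1. Value memory: connections, social objects, basic values, attitudes.
    static class ValueMemory {
        final List<String> connections = new ArrayList<>();
        final List<String> socialObjects = new ArrayList<>();
        final List<String> basicValues = new ArrayList<>();
        final List<String> attitudes = new ArrayList<>();
    }
    // 2. Economy: the four elements introduced in Chap. 6, each in the range 0.0-1.0.
    static class Economy {
        double energy = 0.5, reward = 0.5, attach = 0.5, stress = 0.5;
    }
    // 3. Narratives: step-by-step sequences that read sensors, make decisions, and act.
    interface Narrative { void step(AgentSketch agent); }

    final ValueMemory valueMemory = new ValueMemory();
    final Economy economy = new Economy();
    final List<Narrative> narratives = new ArrayList<>();

    // Called once per tick (0.1 s real clock time) by the event loop.
    void tick() {
        for (Narrative n : narratives) n.step(this);
    }
}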

4.2 Environment

The environment is the interior of a box that has walls, a floor, and a ceiling, as shown in Fig. 4.2. Within the environment, agents, as fully described in Sect. 4.3, are shown as coloured spheres. They have a flagpole, which is a vertical post. Agents may display one of several flags attached to their flagpole to indicate an allegiance to particular issues, and agents may also display a sign on top of the flagpole to communicate with other agents.

Agents are able to move in 3D space according to laws of motion. Each agent $i$ has a radius $r_i$, position $\mathbf{p}_i$, mass $m_i$ and velocity $\mathbf{v}_i$. At the start of the simulation, each agent is assigned a random radius within a limited range, then the agent's mass $m_i$ is

Fig. 4.1 Illustrative diagram of the cognitive system of an agent. Sensors (flags and signs of others, agent recognition, messages from others, proximity to others) feed the Narratives, Economy (energy, attach, stress, reward) and Value Memory (connections, social objects, basic values, attitudes) subsystems, which in turn drive the effectors (flags, signs, messages, movement). Narratives are depicted as a sequence of states connected by lines, where time runs from left to right. The Narrative subsystem may contain any number of narratives with any number of steps; in practice, up to ten narratives are used, ranging from two to eight steps. The Value Memory is depicted as a set of lists containing individual states of each concern (social object, basic value, attitude) and connections (other agents that are friends, ex-friends, refused friends, leaders, followers) that pertain to this agent's experience. The Economy is depicted as a set of normal distributions to illustrate that elements of the economy, like most other numerical quantities in Affinity, are represented as random variates sampled from a distribution. All the terms in this illustration are defined in the text

Fig. 4.2 Environment with 40 agents. At this point in the simulation, agents are moving on the floor, and some have formed friendships indicated by the linkages connecting the agents

set as $m_i = \frac{4}{3}\pi r_i^3$. Agents are then assigned a random position somewhere inside the environment and a random velocity that depends upon the agent's current disposition.

The environment has a force of gravity that acts upon each agent. The force of gravity has been chosen so that agents will slowly descend to the floor during the first 100 ticks of the simulation. While agents spend most of their time on the floor, the low gravity enables them to move upwards as well as sideways if velocity permits. When agents collide with a boundary of the environment (walls, floor, ceiling), the velocity is reversed and a damping factor is applied. Like the force of gravity, the damping factor is a constant of the environment. When agents $i$ and $j$ collide with each other, the standard elastic collision law (Eq. 4.1, where angle brackets denote the inner product) is used to update the velocities of the agents. If an agent moves on the floor, its velocity is also affected by a friction force. The constants friction, gravity and damping are the only ‘constants of nature’ defined for the environment.

$$\mathbf{v}_i' = \mathbf{v}_i - \frac{2 m_j}{m_i + m_j}\,\frac{\langle \mathbf{v}_i - \mathbf{v}_j,\ \mathbf{p}_i - \mathbf{p}_j\rangle}{\lVert \mathbf{p}_i - \mathbf{p}_j\rVert^2}\,(\mathbf{p}_i - \mathbf{p}_j)$$
$$\mathbf{v}_j' = \mathbf{v}_j - \frac{2 m_i}{m_i + m_j}\,\frac{\langle \mathbf{v}_j - \mathbf{v}_i,\ \mathbf{p}_j - \mathbf{p}_i\rangle}{\lVert \mathbf{p}_j - \mathbf{p}_i\rVert^2}\,(\mathbf{p}_j - \mathbf{p}_i) \qquad (4.1)$$
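A sketch of the collision update of Eq. 4.1 in Java is given below. It is illustrative only, assuming simple three-element arrays for positions and velocities; the Affinity implementation may organise its vectors differently.

// Sketch of the elastic collision update of Eq. 4.1 for two agents, using 3D arrays.
// Not the Affinity source; names and representation are illustrative.
class CollisionSketch {
    static double[] sub(double[] a, double[] b) { return new double[]{a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
    static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

    // Updates velocities vi, vj in place for agents with masses mi, mj at positions pi, pj.
    static void collide(double mi, double[] pi, double[] vi, double mj, double[] pj, double[] vj) {
        double[] dp = sub(pi, pj);
        double dist2 = dot(dp, dp);
        if (dist2 == 0.0) return;                       // coincident centres: skip
        double k = 2.0 / ((mi + mj) * dist2) * dot(sub(vi, vj), dp);
        for (int d = 0; d < 3; d++) {
            double newVi = vi[d] - mj * k * dp[d];
            double newVj = vj[d] + mi * k * dp[d];      // (pj - pi) = -dp and the inner products are equal
            vi[d] = newVi;
            vj[d] = newVj;
        }
    }

    public static void main(String[] args) {
        double[] pi = {0, 0, 0}, pj = {1, 0, 0};
        double[] vi = {1, 0, 0}, vj = {-1, 0, 0};
        collide(1.0, pi, vi, 1.0, pj, vj);
        System.out.println(vi[0] + " " + vj[0]);        // equal masses head-on: velocities swap (-1.0 1.0)
    }
}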

4.3 Agents

Agents are represented in the simulation as coloured spheres that have mass and velocity. The colour of the spherical body is determined by the precept aspect of the agent's social object topic favourite-colour. Agents also have a flagpole, which is a vertical post rising from the top of the sphere. The flagpole is used to display flags, and agents may also display a sign on top of the flagpole (Fig. 4.3). When discussed in text, signs are named within a box, for example Need Care and Carer shown in Fig. 4.3.

Fig. 4.3 Agents have a spherical body and a flagpole on which signs and flags may be displayed. Signs and flags are used to communicate with other agents and also to indicate parts of the agent's disposition to the user

Agents are able to communicate with one another in four ways:

1. Each agent can sense the colour of another agent's body.
2. Each agent may display one or more coloured flags attached to its flagpole. Other agents can sense the colour of a flag, and each colour of flag has a specific meaning that agents recognise. Usually, flags indicate an affinity for a particular issue, which is a type of social object.
3. Each agent may display a sign on top of the flagpole. There are two signs in use: a purple cube indicating that the agent is in need of care, and a cube with a red cross (the internationally recognised sign of medical care) on each face indicating that the agent is ready to offer care.
4. Agents may send simple messages to each other. There is a basic message-passing protocol that agents can use, for example, to ask another agent for friendship and to receive Yes or No replies.

Whether agents communicate with each other, and which method they use, is determined by the narratives (Chap. 7) that govern the behaviour of the agent based upon its disposition. An important part of the disposition of each agent is its internal economy, introduced in Chap. 6.

4.3.1 Message Passing

Affinity implements a simple message-passing protocol using the standard Mediator pattern [1]. One example of its use is during the narrative for forming a relationship (Sect. 7.3.3). Consider the relationship formation process of Sect. 3.6, consisting of three steps: approach, appraisal, and bonding. Figure 4.4 shows a sequence diagram that illustrates how message passing is used between agents A and B when A initiates the process.
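A minimal Java sketch of this Mediator-style exchange is shown below. The class and method names (Mediator, send, poll) are assumptions made for illustration rather than the actual Affinity API, but the flow mirrors the sequence in Fig. 4.4: A sends a request through the mediator, B polls it, appraises A, and replies.

// Minimal sketch of Mediator-style message passing for the friendship request.
// Class and method names are assumptions, not the actual Affinity API.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

class MediatorSketch {
    record Message(String from, String to, String content) {}

    // The mediator holds one inbox per agent; agents never reference each other directly.
    static class Mediator {
        private final Map<String, Deque<Message>> inboxes = new HashMap<>();
        void send(Message m) { inboxes.computeIfAbsent(m.to(), k -> new ArrayDeque<>()).add(m); }
        Message poll(String agent) {
            Deque<Message> q = inboxes.get(agent);
            return (q == null || q.isEmpty()) ? null : q.poll();
        }
    }

    public static void main(String[] args) {
        Mediator mediator = new Mediator();
        mediator.send(new Message("A", "B", "Be my friend?"));    // A's request
        Message request = mediator.poll("B");                     // B polls its messages
        boolean accept = true;                                    // placeholder for B's appraisal of A
        mediator.send(new Message("B", request.from(), accept ? "Yes" : "No"));
        Message reply = mediator.poll("A");                       // A polls the reply
        System.out.println("Reply to A: " + reply.content());     // both then bond or record the refusal
    }
}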

4.3.2 Value Memory

The cognitive system of each agent has a value memory, which contains the states of basic values, attitudes, and social objects represented within each agent. The state of the value memory at any given time is part of the disposition of the agent, which partly determines what actions the agent will be motivated to take.

For example, agents will be motivated to move towards potential friends, and to move in a way that maintains existing friendships. Basic values influencing friend-making include security, benevolence and universalism. The precise velocity (speed and direction) the agent takes will depend on a number of factors in its disposition. Agents with a high (low) degree of the basic value self-direction will have a higher (lower) velocity. Velocity also depends upon the intensity parameter of the basic value security if they are moving toward a potential friend. The precise details of how the disposition influences velocity are not important here, except to note that at each clock tick the velocity of an agent is updated from its disposition.

The value memory contains states of the social objects held by the agent. In the current implementation, social objects include favourite-colour, issue, and

Fig. 4.4 Sequence diagram depicting the message passing for the relationship forming process between Agent A, the Mediator, and Agent B: A appraises B and sends the request ‘Be my friend?’; B polls its messages, appraises A, and replies; A polls the reply, and both agents then form a bond or record that B rejects A. The action taken by the agents depends upon the reply from agent B. If Agent B replies YES, then both agents update their cognitive systems to form a bond. If Agent B replies NO, then both agents update their cognitive systems to record that B has refused A. In both cases, the internal economies of A and B are updated depending on their individual dispositions

identity. An issue is a kind of social object with an attribute representing the particular issue. The state parameters are used to specify how much the agent reports its allegiance to the issue (importance), the degree to which the agent makes decisions based on its allegiance (degree), and the amount of physical movement associated with allegiance to the issue (intensity). In the implementation there is no limit to the number of issues, but in practice there is a fixed number of issues identified by colours, such as green and yellow. If the degree of allegiance to an issue is sufficiently high, the agent displays a flag of that colour among its flags.

Each identity held by an agent is defined by an attribute. For example, an agent may identify as a follower on a given issue. This implementation of identity has two consequences for modelling. First, it commits to the idea that identity is not an ontological given, but is a social object and therefore a social construction. Second, it commits to the idea of multiple identities: the same agent could be a follower of one issue and a leader of another issue, and each identity contributes to the disposition of the agent in relation to the given issue.

4.3.3 Connections

Agents can form connections of various types that represent what kind of relationship they are in with another agent. When agent A forms a friendship with another agent B, a new connection is made between A and B. On every tick, the information in this connection is updated with the age (in ticks) of the connection, and the absence of the relationship, defined as the total number of ticks for which the distance between agents A and B has been too great. This distance threshold depends upon the disposition of the agents, and is typically 100 distance units. This is a way of representing the closeness of the relationship, using physical distance in the environment as a proxy for closeness. The age and absence values are used in the narrative for maintaining friendships (Sect. 7.3.5).

Other types of relationships are also recorded in the connections list of an agent. For example, Leader and Follower are roles or identities within a leader-follower relationship that may be established as a result of certain dispositions and circumstances. An agent also maintains a list of other agents to whom it is offering care.

An agent also maintains a list of ex-connections, which includes ex-friends, ex-leaders, and agents to which it offered care in the past. This list is used to check whether the agent has had a previous connection with another agent that was subsequently dissolved. For example, depending upon disposition, the agent may or may not form a new friendship with an ex-friend. Here the levels of the attitude forgiving and the value benevolence are used to decide whether the agent has the disposition to re-friend a former friend.
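The per-tick bookkeeping for a connection can be sketched as follows. The class and field names are assumptions made for illustration; the 100-unit threshold and the termness formula (Sect. 4.4) are taken from the text.

// Sketch of connection bookkeeping; names are illustrative, not the Affinity source.
class ConnectionSketch {
    final String otherAgent;
    long age = 0;        // ticks since the bond was formed
    long absence = 0;    // ticks for which the agents were too far apart

    ConnectionSketch(String otherAgent) { this.otherAgent = otherAgent; }

    // Called once per tick with the current distance and the disposition-dependent threshold
    // (typically around 100 distance units, as described in Sect. 4.3.3).
    void tick(double distance, double threshold) {
        age++;
        if (distance > threshold) absence++;
    }

    // Termness as defined in Sect. 4.4: (age - absence) / age, in the range 0.0-1.0.
    double termness() {
        return age == 0 ? 0.0 : (age - absence) / (double) age;
    }
}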

4.4 User Interface

The user interface consists of a control panel (Fig. 4.5) containing buttons to start, pause, resume, and stop the simulation, and buttons for utility commands such as printing results and performing an analysis of the value systems of the total population using non-negative matrix factorisation (NMF). There are buttons to initialise the simulation to one of four scenarios. Scenarios are user-specified to provide particular initial conditions for the environment and the dispositions of agents. The control panel also displays historical plots of the population, the total length of friendships or ‘termness’, and the average of each economy element over all agents (Chap. 6).

When an agent is selected by the user by clicking on it, the user interface displays the internal disposition of that agent. This user interface also includes a mimic diagram for viewing the current execution state of all the narratives for the selected agent. The disposition display (Fig. 4.6) includes a dump of the value memory and a historical plot of the economy of the selected agent. It is useful to observe that while the overall economy plot (Fig. 4.5), which is averaged over the population, tends to be smoothly varying, the historical plot of the economy of the selected agent may fluctuate wildly. The mimic diagram is illustrated in Sect. 7.5.

Fig. 4.5 Control panel of Affinity

The friendship plot has two lines labelled ‘Termness’ and ‘|Friends|’. The ‘|Friends|’ plot indicates the total number of friendships that have formed at a particular point in the simulation run. The simulation begins with no friendships, but friendships soon form as agents seek friends and are able to move closer to each other. In Fig. 4.5, it can be seen that the number of friendships fluctuates. This happens because friendships can be dissolved under certain circumstances, and agents may encounter new friends or, if suitably disposed, re-friend ex-friends.

Fig. 4.6 Disposition display for Agent A20. The display includes a scrollable window containing a dump of the current value memory contents in human-readable form, a historical plot of the elements of A20's internal economy, and a list of A20's connections including an agent (A22) who had previously refused A20's offer to be friends. The percentages under friends A27, A15 and A38 indicate the proportion of the lifetime of the friendship with A20 for which they have remained in close contact with A20. The row of numbers under each agent gives the current values of the four elements of its economy, rounded to one decimal place

The ‘Termness’ plot indicates the average length of friendships over all agents. The termness value for each agent is defined as (age − absence)/age (Sect. 4.3.3), which ranges from 0.0 (meaning the friendship has never been close) to 1.0 (meaning the friendship has never experienced absences).

4.5 Event Loop

The event loop is the main controller of the simulation. A simplified flowchart for the event loop is shown in Fig. 4.7. The timebase of the simulation is the tick, which happens every 0.1 s (real clock time). The implementation has been designed so that the processing necessary for one iteration of the event loop may be completed within one tick.

Fig. 4.7 Flowchart depicting the event loop of Affinity


4.6 Summary

This chapter has introduced Affinity, a computational agent-based simulation model implemented in Java, that simulates a community of agents that hold certain values concerning social objects and other concerns. A population of up to 1000 agents exists and moves about within a box. The user can see the activity of agents, which move according to their dispositions (internal economies, value systems, narratives) and standard laws of motion.

Reference

1. Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley (1994)

Chapter 5

Modelling Concerns

Abstract This chapter explains how concerns are modelled in Affinity. There is a unique representation of numerical quantities in the form of distributions, where specific quantities are known only after the distribution is sampled for a random variate.

5.1 Introduction

Three types of concerns are implemented in the model: Basic Values, Attitudes, and Social Objects. Each agent is equipped in its value memory with a set of concerns, which govern behaviour according to the current state of the concern, the circumstances facing it in the simulated world, and the current state of the agent's economy (Sect. 6.1). As described in Chap. 2, concerns are abstract concepts relating to a specific topic. However, when an agent holds a concern, the information defining the concern is represented as a state variable. A state has up to five components: the concern itself, any attributes associated with this concern state, and three quantities called degree, importance, and intensity. These quantities correspond to the same-named ones in Sect. 2.2. The representation of quantities is covered next, followed by the representation of states and concerns.

5.2 Representing Quantities

Quantities are usually thought of as being numerical quantities, for example, drawn from the integers or real numbers. In this work, we represent quantities as a random sample from a distribution. The sample is drawn from the distribution when it is needed, instead of being defined in advance. The advantage of this ‘late binding’ representation is that it reflects the stochasticity characteristic of natural systems, and that the domain and range of samples can be defined without fixing a predefined value. All quantities in the simulation are real numbers in the range 0.0–1.0. In this

Fig. 5.1 Normal distributions N(μ, σ) are assigned randomly when an entity is initialised in the simulation. The distribution shown here is N(0.5, 0.1)

way the simulation employs only non-negative quantities that can be interpreted as analogous to probabilities if this is useful in context. Non-negative quantities are also useful because the results from a simulation run are analysed by a non-negative matrix factorisation algorithm (Chap. 8).

We use a variant of the normal probability density function

$$P(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2/(2\sigma^2)} \qquad (5.1)$$

in which the amplitude factor is changed to 1.0, giving

$$Q(x) = e^{-(x-\mu)^2/(2\sigma^2)} \qquad (5.2)$$

because in this application we do not need the integral of the distribution to be 1.0. We can use Q as a membership function, where 0.0 ≤ Q(x) ≤ 1.0 for any x. We will refer to Eq. 5.2 as a normal distribution despite the altered definition, and we will notate a distribution with mean μ and standard deviation σ as N(μ, σ). Figure 5.1 shows a graph of N(0.5, 0.1).

When a random variate is sampled from a distribution, we say that a quantity has been instantiated from the distribution. If a quantity is instantiated from N(0.5, 0.1), for example, it is likely to be close to 0.5. If a smaller cluster of values is needed, the range of instantiated values can be reduced by reducing σ. If a tighter cluster of instantiated quantities centred on, say, 0.8 is needed, then the distribution N(0.8, 0.05) may be suitable. Instantiated quantities must be in the range [0.0, 1.0], so sampled random variates are clipped to within this range if necessary.

It should be noted that this method of representing quantities is not dissimilar in intention to fuzzy sets [2], in which Eq. 5.2 is a membership function, and instantiation of quantities from the distribution is a method for ‘de-fuzzifying’ a fuzzy set to obtain a ‘crisp’ value. However, our only purpose here is a method for instantiating quantities from a distribution in order to generate random variates, and we will not develop this into a system of logic.
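A small Java sketch of such a late-bound quantity is given below: a distribution N(μ, σ) that is sampled on demand and clipped to [0.0, 1.0], together with the membership function of Eq. 5.2. The class name Quantity and its methods are assumptions made for illustration.

// Sketch of a late-bound quantity represented as N(mu, sigma) and sampled on demand.
// Illustrative only; not the Affinity source.
import java.util.Random;

class Quantity {
    final double mu, sigma;
    private static final Random RNG = new Random();

    Quantity(double mu, double sigma) { this.mu = mu; this.sigma = sigma; }

    // Instantiate the quantity: draw a random variate and clip it to [0.0, 1.0].
    double instantiate() {
        double x = mu + sigma * RNG.nextGaussian();
        return Math.max(0.0, Math.min(1.0, x));
    }

    // The membership function Q(x) of Eq. 5.2 (unnormalised normal curve).
    double membership(double x) {
        double d = x - mu;
        return Math.exp(-(d * d) / (2.0 * sigma * sigma));
    }

    public static void main(String[] args) {
        Quantity q = new Quantity(0.5, 0.1);
        System.out.println(q.instantiate());     // likely close to 0.5
        System.out.println(q.membership(0.5));   // 1.0 at the mean
    }
}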

5.3 Comparing Uninstantiated Quantities

We can compare quantities that have not yet been instantiated by comparing their distributions. For example, suppose we wish to compare the intensity with which agents A and B hold the benevolence concern. Suppose we have

• The intensity of A's benevolence concern = N(0.6, 0.07)
• The intensity of B's benevolence concern = N(0.7, 0.06)

and we wish to find out how similarly they hold this component of concern. We use a variant of the statistical Z-test to compare the similarity of two distributions. Given distributions $N_1(\mu_1, \sigma_1)$ and $N_2(\mu_2, \sigma_2)$, we define

$$z = \frac{\left|\mu_1 - \mu_2\right|}{\tfrac{1}{2}\sqrt{\sigma_1^2 + \sigma_2^2}} \qquad (5.3)$$

where the result z is in units of standard deviations. We use the following qualitative scheme to describe the similarity of two distributions:

similar      z < 2.0
marginal     2.0 ≤ z < 2.5
different    2.5 ≤ z < 3.0
separated    z ≥ 3.0

For example, the two distributions shown in Fig. 5.2 are N(0.6, 0.07) and N(0.7, 0.06), and the result according to Eq. 5.3 to three significant figures is z = 2.17, which indicates a marginal difference between the two distributions.

Quantities can also be used to express values with multiple components. For example, a distribution p of points in a two-dimensional space can be represented as p = ⟨N(0.6, 0.01), N(0.5, 0.01)⟩. In Affinity, colours are represented as points in a conventional three-dimensional RGB colour space. For example, ⟨N(0.9, 0.01), N(0.1, 0.01), N(0.1, 0.01)⟩ would represent a distribution containing the reddish colours.
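The comparison and the qualitative scheme can be sketched in Java as follows, using the reconstruction of Eq. 5.3 given above; the class and method names are assumptions made for illustration.

// Sketch of the distribution comparison of Eq. 5.3 and the qualitative similarity scheme.
// Illustrative only; the formula follows the reconstruction of Eq. 5.3 given in the text.
class SimilaritySketch {
    static double zScore(double mu1, double sigma1, double mu2, double sigma2) {
        return Math.abs(mu1 - mu2) / (0.5 * Math.sqrt(sigma1 * sigma1 + sigma2 * sigma2));
    }

    static String classify(double z) {
        if (z < 2.0) return "similar";
        if (z < 2.5) return "marginal";
        if (z < 3.0) return "different";
        return "separated";
    }

    public static void main(String[] args) {
        double z = zScore(0.6, 0.07, 0.7, 0.06);
        System.out.printf("z = %.2f (%s)%n", z, classify(z));  // z = 2.17 (marginal)
    }
}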

Fig. 5.2 N(0.6, 0.07) and N(0.7, 0.06)

5.4 States

Each agent has a value system, which can be considered the collection of states that hold information in the agent's ‘brain’. States are used in representing ‘instances’ of concerns. Concerns are considered to be abstract concepts, but when an agent ‘holds’ a concern in its own value system, we say the concern is instantiated, and a state is created to represent the concern along with an (optional) attribute and three quantities describing how the concern is held. This is a different meaning of instantiation than when a random variate is sampled from a distribution representing a quantity (Sect. 5.2), but the word is so useful we will use it in both situations with no possibility of ambiguity.

A state s is represented as the 5-tuple s = ⟨c, a, d, m, n⟩, where c is the concern (defined below), a is an attribute relating to the concern, and d, m, n are each quantities representing the degree, importance and intensity respectively of this concern. Quantities d, m, n are represented as in Sect. 5.2, and can be notated in text as distributions N(μ, σ) for some means μ and standard deviations σ. In the case where the attribute is not used, the 4-tuple s = ⟨c, d, m, n⟩ represents a state s without loss of generality.
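The state tuple can be sketched as a Java record, reusing the Quantity class from the sketch in Sect. 5.2; the type names are assumptions for illustration rather than the actual Affinity representation.

// Sketch of a concern state <c, a, d, m, n>: concern, optional attribute, and the
// degree, importance and intensity quantities. Illustrative only; not the Affinity source.
record State(String concern,
             String attribute,      // may be null when the attribute is not used (the 4-tuple form)
             Quantity degree,
             Quantity importance,
             Quantity intensity) {

    // Convenience constructor for the 4-tuple form <c, d, m, n> without an attribute.
    State(String concern, Quantity degree, Quantity importance, Quantity intensity) {
        this(concern, null, degree, importance, intensity);
    }
}

A basic value held with particular quantities, such as the benevolence state of expression (5.4) in the next section, could then be written as new State("benevolence", new Quantity(0.7, 0.2), new Quantity(0.8, 0.01), new Quantity(0.6, 0.1)).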

5.5 Values

As described in Chap. 2, values, meaning abstract concepts that people consider as important guides and motivations for behaviour, are taken from the Schwartz Theory of Basic Values [1]. This theory includes ten basic values: self-direction, stimulation, hedonism, achievement, power, security, conformity, tradition, benevolence, and universalism. Here we will notate the names of basic values in small capitals.

Values influence all aspects of the behaviour of agents. For example, in the relationship of caregiving/receiving, Affinity uses two basic values: benevolence and universalism. Benevolence is used to govern an agent's care for its in-group of friends, and universalism is used to govern an agent's care for its out-group. The representation of a basic value is simply the name of the value. An example of a state (or ‘instance’) of a value for a given agent having an amount of benevolence is the expression

⟨benevolence, N(0.7, 0.2), N(0.8, 0.01), N(0.6, 0.1)⟩,   (5.4)

which represents a greater than average willingness to offer care to a friend: degree N(0.7, 0.2), importance N(0.8, 0.01), and intensity N(0.6, 0.1). Normal distributions N(μ, σ) are assigned randomly when an agent is initialised in the simulation.

When a value is held by an agent and therefore instantiated to a state, the degree, importance and intensity of the state can describe a wide and subtle range of dispositions. For example, both leaders and followers (agents in a leader-follower relationship) will tend to hold a high importance for the basic value power. In such relationships, leaders hold that it is important that they have power, while followers hold that it is important for the leader to have power. However, leaders will act with strong motive force (the intensity component) for power in their expression of leadership. Followers may have a level of intensity that reflects how they follow: a passive follower will have low intensity of action, while an active follower (such as a campaigner) will have a high intensity of action. And both leaders and followers will have different levels of degree depending on how their ‘passion’ for the relationship influences their decisions. For example, while all leaders claim high levels of importance for power and have higher than average motive force, one type of leader will demonstrate by their decisions that they have a passion for the role, while another type of leader may demonstrate by their decisions that they are less interested in the role. So, importance is about how the agent reports the value, degree is about how the agent uses the value in making decisions, and intensity is how the agent uses the value in actions.

Among other values used by Affinity, the value security is used during the process of friendship. An agent with a higher degree of security is more likely to form friendships; an agent with a higher level of intensity is more likely to move faster to look for friends and to move closer to friends. In the simulation, physical distance and movement are proxies for the closeness of a friendship. Among the ways that more subtle dispositions can be modelled, an agent with a high level of importance for security but a low degree and intensity will be in the position that it is very important to have friends, but it is unable to have them. This disposition of the state can influence the internal ‘economy’ of that particular agent (Chap. 6) to increase the level of stress and decrease the level of reward held by the agent.

The self-direction value is relevant in a number of ways. It influences the velocity of agents in the environment, where a higher degree of self-direction leads to more exploratory movement, and it influences the process of friendship and care. Agents with a very high degree of self-direction are less likely to accept care when it is offered. And, leaders tend to have a high degree of self-direction.

5.6 Attitudes

Attitudes simulate how agents evaluate other agents or social objects. Affinity currently defines four attitudes: forgiving, credible, sceptical, and loyal. For example, the attitude forgiving governs the reconciliation narrative, which is executed when agents meet ex-friends and decide whether or not to re-friend a previous friend. An example state of this attitude is the expression

⟨forgiving, N(0.1, 0.1), N(0.9, 0.1), N(0.9, 0.1)⟩,   (5.5)

which represents an agent that is very unforgiving, because the degree of forgiveness tends around 0.1, and that behaves with a strong motive force in its lack of forgiveness, because the intensity tends around 0.9. The importance component (here tending around 0.9) is used when agents query each other's opinions. The high quantity of importance represented here seems paradoxical because the actual degree of forgiveness is small. However, this is useful to represent a difference between what an agent claims and what the agent does.

Among the attitudes used by Affinity, the attitude sceptical is to do with how an agent evaluates the opinions of others. Agents with a higher (lower) degree of sceptical are less (more) likely to become followers of other agents who are leaders. Another attitude is loyal, which helps to determine how far away agents move from their friends, and is also an attitude leaders look for in prospective followers. In return, prospective followers with a high (low) degree of sceptical look for leaders who claim a high (low) importance of the attitude credible.

5.7 Social Objects

Social objects, as introduced in Chap. 3, represent the topics about which agents may have opinions, affiliations, and preferences. Compatibility between social object states is one factor in forming relationships between entities. A social object o is represented in Affinity as the tuple o = ⟨t, a⟩, where t is the name of a topic, and a is the aspect of this social object. There are two aspects, notated here in small capitals: precept and accept. The precept represents how the agent stands in relation to the social object's topic, and the accept represents how the agent stands in relation to other agents with respect to this topic.

Section 3.4 described the idea of categorising social objects in terms of their attributes. In Affinity, an attribute a is represented as the tuple a = ⟨n, q⟩, where n is the topic of the attribute (here notated in small capitals) and q is the quantity of the attribute. Attributes are independent concepts from social objects, which is why attributes are represented using states rather than incorporated into the definition of social objects.

As an example of a state representing a social object with the topic favourite-colour and aspect precept, in which the attribute is the colour, let colour C be represented as C = ⟨N(μR, σR), N(μG, σG), N(μB, σB)⟩ for some RGB components with distributions as shown:

⟨⟨favourite-colour, precept⟩, ⟨colour, C⟩, degree, importance, intensity⟩   (5.6)

For ease of explanation, the degree, importance and intensity of the state may be omitted in subsequent examples. Where a social object state is defined by the triple ⟨topic, aspect, attribute⟩, an example of the social object states for an entity with favourite colour red but willing to accept friends that have a wider range of favourite colours is therefore expressed as a pair of states:

⟨⟨favourite-colour, precept⟩, ⟨colour, ⟨N(1.0, 0.1), N(0.0, 0.1), N(0.0, 0.1)⟩⟩⟩
⟨⟨favourite-colour, accept⟩, ⟨colour, ⟨N(1.0, 1), N(0.6, 0.5), N(0.6, 0.5)⟩⟩⟩   (5.7)

where the degree, importance and intensity of the state have been omitted for clarity. In Affinity it is normally the case that a value system will include both accept and precept aspects of a social object, but it is not strictly necessary for both to be present. The absence of one simply means there is less information available when a decision is needed.

Another example of a social object topic implemented in Affinity is an issue. An issue is a statement or proposition on which there is a degree of agreement by an agent. The representation of a state of an issue includes an attribute that names the topic of the issue together with a quantity that represents the amount of agreement with the issue:

⟨⟨issue, aspect⟩, ⟨topic, quantity⟩, degree, importance, intensity⟩   (5.8)

For example, consider the statement ‘Swimming is enjoyable’. A person may not enjoy swimming, but can accept that other people enjoy it, so the aspects of precept and accept are relevant. This can be expressed as a pair of states:

⟨⟨issue, precept⟩, ⟨swimming-enjoyable, N(0.0, 0.01)⟩⟩
⟨⟨issue, accept⟩, ⟨swimming-enjoyable, N(0.9, 1.0)⟩⟩   (5.9)

where the degree, importance and intensity of the state have been omitted for clarity. Using comparison of distributions (Sect. 5.3), the issues held by agents can be compared for several reasons. One reason is to determine agreement, for example, whether two agents agree that swimming is enjoyable. This is a comparison between two states that represent the precept aspect. Another reason is to determine compatibility, which is a comparison between both the precept and accept aspects of a pair of agents. For compatibility, each agent needs to ‘accept’ the precept of the other. In this way it is possible to represent subtle relationships, for example between two agents who ‘agree to disagree’ on a given issue. The degree, importance and intensity of the state are also used to represent ways in which an agent can hold an issue, for example, an agent who claims to feel strongly about an issue (importance) independently of their state of agreement with it (the quantity of the issue's attribute).
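A sketch of how agreement and compatibility checks on a single issue might be phrased is given below, reusing the Quantity and SimilaritySketch classes from the earlier sketches. All names, the decision rule of requiring a ‘similar’ classification, and the sample numbers are assumptions made for illustration rather than the actual Affinity decision logic.

// Sketch of agreement and compatibility checks between two agents on one issue.
// Reuses the Quantity and SimilaritySketch classes from earlier sketches; illustrative only.
class IssueComparisonSketch {
    // Agreement: compare the two precept aspects of the same issue.
    static boolean agree(Quantity aspectA, Quantity aspectB) {
        return SimilaritySketch.classify(
                SimilaritySketch.zScore(aspectA.mu, aspectA.sigma, aspectB.mu, aspectB.sigma))
            .equals("similar");
    }

    // Compatibility: each agent's accept aspect must be similar to the other's precept aspect.
    static boolean compatible(Quantity preceptA, Quantity acceptA,
                              Quantity preceptB, Quantity acceptB) {
        return agree(acceptA, preceptB) && agree(acceptB, preceptA);
    }

    public static void main(String[] args) {
        // 'Swimming is enjoyable': A does not enjoy it but accepts that others do; B enjoys it.
        Quantity preceptA = new Quantity(0.0, 0.01), acceptA = new Quantity(0.9, 1.0);
        Quantity preceptB = new Quantity(0.9, 0.1),  acceptB = new Quantity(0.5, 1.0);
        System.out.println(agree(preceptA, preceptB));                        // false: they disagree
        System.out.println(compatible(preceptA, acceptA, preceptB, acceptB)); // true: they 'agree to disagree'
    }
}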

5.8 Summary

This chapter has introduced a formalism for describing the modelling of concerns in Affinity. The concerns described are basic values, attitudes and social objects. Quantities and states are introduced as a way of representing instantiations of concerns within the value system of an agent. Quantities are represented as normal distributions from which random variates are sampled at the time a quantity needs to be used. All samples, or instantiated quantities, are numbers in the range 0.0 to 1.0. These non-negative numbers are useful for interpreting quantities as probabilities if needed, and for analysis of the states of populations using non-negative matrix factorisation (Chap. 8).

References

1. Schwartz, S.H.: An overview of the Schwartz theory of basic values. Online Read. Psychol. Cult. 2(1) (2012)
2. Zadeh, L.A.: Fuzzy sets. Inf. Control 8(3), 338–353 (1965)

Chapter 6

The Economy

Abstract This chapter describes the economy, which is a set of four elements that each agent has that regulates its behaviour. The four elements of the economy are analogous to four of the hormones in the human endocrine system, but the economy is not intended to be a simulation of the human endocrine system, and the elements of the economy are not intended to simulate human hormones.

6.1 The Economy

Each simulated agent has a set of internal quantities that regulates its behaviour. This set of quantities is called the agent's internal economy, and each of the quantities is called an element of the economy. Each element has a numerical value that ranges from 0.0, indicating the lowest possible value, to 1.0, the highest possible value. The elements of the economy change over time, depending on the circumstances of the agent, and they influence the agent's behaviour and motivation to action. They are not unlike basic values (Sect. 2.2) in that they influence behaviour and motivation; however, basic values change much more slowly over the lifetime of the agent, if at all. By contrast, the elements of the economy may change from one tick to the next depending on what is happening in the simulation. The analogy with humans would be the endocrine system; however, we stress that the economy is not intended to model the human endocrine system, and the elements are not intended to model hormones. The analogy is simply a heuristic to aid explanation of the concept of the economy.

The four elements of each agent's economy are notated in small capitals, and defined as follows:

• energy: This quantity describes how much energy the agent has to spend on performing actions. The analogy with the human endocrine system would be the hormone adrenaline.


• reward: This quantity describes the level of reward that is experienced (or ‘felt’) by the agent. The analogy with the human endocrine system would be the hormone endorphin.
• attach: This quantity describes the level of bondedness or attachment ‘felt’ by the agent. The analogy with the human endocrine system would be the hormone oxytocin.
• stress: This quantity describes the level of stress ‘felt’ by the agent. The analogy with the human endocrine system would be the hormone cortisol.

Again, the comparison with human hormones is intended to be merely a heuristic analogy; this is not intended to be a model of human hormones. For example, the hormone adrenaline has many effects upon the human body, such as the speed of heartbeat and breathing, alertness and the speed of digestion. Here in the model, energy simply defines an ‘energy level’ of the agent.

Examples of how the economy changes on each clock tick are as follows. Energy is gained by exposure to the simulated Sun and by receiving care, and energy is lost by narratives (Chap. 7) such as movement and giving care. The caregiving/receiving and friendship relationships are sensitive to most elements of the economy. When agent A gives care to agent B, A receives an increase of reward and a decrease of energy, while B receives an increase of energy and a reduction of stress. Caregiving takes place over time, so the elements are incremented and decremented at each clock tick until B no longer needs care. If a caregiver is no longer able to offer care because its energy has dropped below a threshold, then it cancels its care mission. When agent A befriends agent B, both A and B receive an increase of reward and attach, and a reduction of stress.

The particular amounts by which elements of the economy are increased or decreased are given partly by parameters set in the simulation, and partly by the values and attitudes held by the agent. For example, an agent with a high degree of the basic value hedonism will receive a fractionally greater increase of reward from rewarding situations. These parameters are set manually, and can be changed by the user to provide the particular environment to study. There is no reason why a future implementation may not use trainable parameters, which are modified automatically as a simulation proceeds.

If an agent has friends, it receives an increment of attach in proportion to the maximum connection affinity (set at connection time) over all connections. If an agent has no friends, stress is increased. Stress is also increased in proportion to the crowding experienced by an agent. Crowding is defined in the following way. Suppose agent A has a spherical body with radius r. Represent the neighbourhood of an agent at location p as the sphere Np centred on p with radius q being some multiple of r; currently Affinity uses q = 3r. Then, count the number k of other agents that are enclosed within Np. Let crowding = k/M, where M is the maximum number of spheres of radius r that can be packed into Np. Using Kepler’s conjecture, M can be approximated by πq³/(3√2 r³), so crowding varies between 0.0 when no other agents are present within the neighbourhood, and approximately 1.0 when the neighbourhood is completely full.
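As an illustration, the crowding calculation can be written directly from this definition. The following is a minimal sketch, assuming agents are represented simply by 3D position arrays and a shared body radius r; these names and types are illustrative, not the data structures used in Affinity.

```java
import java.util.List;

/** Minimal sketch of the crowding measure; the agent representation
    (a 3D position array plus a common body radius) is illustrative only. */
final class CrowdingSketch {
    static double distance(double[] a, double[] b) {
        double dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
        return Math.sqrt(dx * dx + dy * dy + dz * dz);
    }

    /** positions: positions of all agents; selfIndex: the agent of interest;
        r: body radius (the neighbourhood radius is q = 3r as in the text). */
    static double crowding(List<double[]> positions, int selfIndex, double r) {
        double q = 3.0 * r;
        double[] self = positions.get(selfIndex);
        int k = 0;
        for (int i = 0; i < positions.size(); i++)
            if (i != selfIndex && distance(positions.get(i), self) <= q) k++;
        // Kepler packing bound: maximum spheres of radius r inside a sphere of radius q
        double m = (Math.PI * q * q * q) / (3.0 * Math.sqrt(2.0) * r * r * r);
        return Math.min(1.0, k / m);
    }
}
```

With q = 3r the bound evaluates to roughly 0.74 × 27 ≈ 20, so about twenty enclosed neighbours are needed before crowding approaches 1.0.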


6.2 User Interface

The user interface of Affinity displays a plot of how the average level of each element of the economy has changed during a simulation. Figure 6.1 shows a plot of a typical simulation run with the four elements up to tick 431, where time is plotted from left to right (tick 1 is the left edge of the plot). Because each element of the economy ranges from 0.0 to 1.0, the average total level for each element is the sum of the levels of all agents for that element, divided by the number of agents. This gives a value between 0.0 and 1.0, which is plotted against the current tick. At the beginning of the simulation, the totals shown reflect the default amounts of each element that have been assigned to agents. However, as the simulation proceeds, the elements can be seen to change at each tick in response to circumstances. At the beginning of this simulation, energy is relatively high, but then it falls because agents are moving, and caregiving relationships have not yet started. The element reward slowly increases as agents begin to engage in rewarding activities. Because the element attach is related mainly to physical contact and friendship, it begins to climb as friendships are sought, then declines as friendships become longer term and there is less scope for pursuing new friendships. The element stress starts at the default level, then climbs as friendships are sought, possibly because of greater crowding as well, then steadily falls as caregiving proceeds and friendships have stabilised.
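The averaging behind the plot is straightforward; the sketch below shows the per-element computation under the assumption that each agent’s economy is stored as an array of four values in [0.0, 1.0] (an illustrative layout, not the Affinity representation).

```java
import java.util.List;

/** Sketch of the per-tick averaging plotted in Fig. 6.1. */
final class EconomyPlotSketch {
    /** economies.get(i) holds one agent's four elements, each in [0.0, 1.0]. */
    static double averageLevel(List<double[]> economies, int elementIndex) {
        double sum = 0.0;
        for (double[] e : economies) sum += e[elementIndex];
        return economies.isEmpty() ? 0.0 : sum / economies.size();
    }
}
```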


Fig. 6.1 Plot showing the four elements of the internal economy (energy, attach, stress, reward). At the time the plot is displayed, the simulation has run for 431 ticks


This general pattern of trends in the elements is typical of simulation runs with the internal thresholds and increments set as they are. The programmer can change the thresholds and increment constants, in which case different trends emerge. For example, if caregiving is not sufficiently effective in increasing energy, then all agents die within a few hundred ticks.

Chapter 7

Narratives

Abstract This chapter describes narratives, which are step-by-step sequences of instructions that determine the ways in which agents perform a sequence of actions and information exchanges extended in time. The narratives used by each agent are contained within the agent’s cognitive system.

7.1 Introduction

The behaviour process of each entity in the simulation is governed by the execution of narratives. The cognitive system of each agent contains a set of narratives that are executed concurrently. In general there is no limit on the number of narratives an agent may have, or on the duration of each narrative. This chapter will describe the main narratives that an agent uses to move in the environment, and to engage in the relationships of friendship, caregiving/receiving, and leading-following. During each iteration of Affinity’s event loop, one step of every narrative of every agent is executed by Affinity.

A narrative is an ordered sequence of steps that defines an algorithm. A narrative has a first and a last step. The execution of a narrative begins at the first step. Each step of a narrative is a condition-action rule represented as a triple (operation, condition, action). The operation is one of four operation codes or commands that cause the action to be executed if the Boolean condition is met. In the text, operations are written as boldface words, and conditions and actions are written in a sans-serif face.

7.2 Operations

The execution of a step is governed by the operation, which may cause the Boolean condition to be evaluated to true or false, and an action to be performed. Some operations do not need conditions or actions. Currently there are four possible operations,


and about a dozen each of built-in conditions and built-in actions. The conditions and actions are implemented as Java code in Affinity. A narrative has a small amount of local state that it uses for processing each step, and some of the steps of a narrative can be performed in a special mode called a mission (Sect. 7.4). Operations have the following definitions:

• if condition action: If the condition tests true, then perform the action. In either case, continue with the next step at the next clock tick.
• wait condition action: If the condition tests true, then perform the action and continue with the next step at the next clock tick. Otherwise, if the condition tests false, do not continue with the next step: at the next clock tick, the condition will be tested again. The effect of this is to wait until the condition is satisfied before performing the action.
• while condition action: At each clock tick while the condition tests true, perform the action. Otherwise, if the condition tests false, continue with the next step at the next clock tick. This has the same effect as the standard while control structure of programming languages.
• loop: Go back to the first step of the narrative.

If the last step of a narrative is not a loop operation, then the last step is executed and the narrative is discarded when the last step has finished. However, all narratives constructed so far end with a loop operation and are thereby repeated indefinitely.
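To make the operation semantics concrete, the following is a minimal sketch of a step and of a per-tick interpreter for the four operations. The Java names (Op, Step, Narrative, tick) are illustrative and are not the Affinity classes; conditions and actions are modelled as BooleanSupplier and Runnable, standing in for the built-in conditions and actions described above.

```java
import java.util.List;
import java.util.function.BooleanSupplier;

/** Illustrative sketch of narrative steps and the four operations. */
public final class NarrativeSketch {
    enum Op { IF, WAIT, WHILE, LOOP }

    record Step(Op op, BooleanSupplier condition, Runnable action) {}

    static final class Narrative {
        private final List<Step> steps;
        private int pc = 0;                            // index of the current step

        Narrative(List<Step> steps) { this.steps = steps; }

        /** Execute at most one step; the event loop calls this once per tick. */
        void tick() {
            if (pc >= steps.size()) return;            // narrative has finished
            Step s = steps.get(pc);
            switch (s.op()) {
                case IF -> {                           // test, act if true, always advance
                    if (s.condition().getAsBoolean()) s.action().run();
                    pc++;
                }
                case WAIT -> {                         // stay on this step until the condition holds
                    if (s.condition().getAsBoolean()) { s.action().run(); pc++; }
                }
                case WHILE -> {                        // repeat the action while the condition holds
                    if (s.condition().getAsBoolean()) s.action().run();
                    else pc++;
                }
                case LOOP -> pc = 0;                   // return to the first step
            }
        }
    }

    // A toy two-step narrative: wait for a condition, act, then loop forever.
    public static void main(String[] args) {
        Narrative toy = new Narrative(List.of(
            new Step(Op.WAIT, () -> Math.random() < 0.1, () -> System.out.println("acted")),
            new Step(Op.LOOP, () -> true, () -> {})));
        for (int tick = 0; tick < 100; tick++) toy.tick();
    }
}
```

The main method drives a toy narrative for 100 ticks, mirroring the way the event loop executes one step of every narrative per tick.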

7.3 Narratives

This section describes the narratives that are contained in each agent’s cognitive system. In the text, narratives are named in italic typeface, and the steps of a narrative are written here as numbered lines. For syntactical clarity, conditions and actions are separated by the keyword then. The steps of the narrative are shown enclosed in a box headed by the name of the narrative.

7.3.1 Asking for Care

An agent is in need of care when its disposition falls below certain thresholds. Typically this is when in its economy (Chap. 6) the energy level is too low and/or its stress level is too high. Whether a level is too low or too high depends on thresholds that are defined within its disposition. An agent that needs care will raise a sign to indicate to other agents that it needs care. On subsequent clock ticks, agents who are disposed to give care will move toward the needy agent, and will provide care under the right circumstances (Sect. 7.3.6). The Ask Care narrative has three steps as follows:


Ask Care
1. wait Need-care then Raise-need-sign
2. wait Am-healthy then Lower-need-sign
3. loop

The effect of this narrative is (Step 1) to wait until the entity’s energy level drops below thresholds found in the agent’s value system, and when it does, to raise its Need Care sign (Sect. 4.3). In the simulation this sign is displayed as a purple cube that is positioned above the entity. Then, (Step 2) this narrative waits until the energy level exceeds a threshold found in the value system, and when the threshold is exceeded, lowers the Need Care sign. It is important to note that all the narratives of an entity are executed concurrently, so other narratives will continue to operate while the entity is receiving care: a wait step causes only that step in the narrative to wait, not the whole agent. The conditions expressed here as Need-care and Am-healthy dispatch to code in Affinity that simply checks energy levels and compares them with the thresholds. The actions that raise and lower the signs dispatch to code in Affinity that makes appropriate changes in the entity’s state and the graphical user interface.

7.3.2 Motility

Agents are able to move in the 3D space within the environment. Agents are motivated to move for the purpose of forming relationships in a manner that depends upon their disposition. The Motility narrative governs how an agent moves:

Motility
1. if Find-centre-compat-friends then Move-closer-point
2. if Find-centre-friends then Move-closer-point
3. if Find-near-compat then Move-closer-point
4. loop

The effect of this narrative is repeatedly (Step 1) to move closer to the average location of its most compatible friends, if any; (Step 2) to move closer to any other friends, if any; and (Step 3) to move closer to any nearby agents that may be compatible, if any. Compatibility is based on the previously described comparison between the agent’s and the other agent’s value systems. If one of the Find- conditions locates a suitable target location, then a move towards this location will take place during this clock tick. This is accomplished by adjusting the velocity of the agent according to the direction of the target and the disposition of the value system, which governs how much the velocity is to be increased or decreased. The target location is stored in local state only within the scope of the if operation, to enable the Move- action to be performed.
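A possible reading of the Move-closer-point adjustment is sketched below: the velocity is nudged along the unit vector towards the target, scaled by a gain that would be derived from the disposition. The array-based vectors and the name steerTowards are assumptions for illustration only, not the Affinity types.

```java
/** Sketch of a velocity adjustment towards a target point. */
final class SteeringSketch {
    static double[] steerTowards(double[] position, double[] velocity,
                                 double[] target, double gain) {
        double dx = target[0] - position[0];
        double dy = target[1] - position[1];
        double dz = target[2] - position[2];
        double len = Math.sqrt(dx * dx + dy * dy + dz * dz);
        double[] v = velocity.clone();
        if (len == 0.0) return v;              // already at the target
        v[0] += gain * dx / len;               // add a small step along the unit
        v[1] += gain * dy / len;               // vector pointing at the target
        v[2] += gain * dz / len;
        return v;
    }
}
```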


Notice that in this narrative the if operations are independent; there is no else control structure to specify exclusive choice as found in many programming languages. It is possible, for example, to make a movement towards a different destination during every clock tick if the condition in each step succeeds. This strategy provides for a wide range of motility, while simplifying the implementation so that the overheads needed to support the standard if…then…else control structure are not required. This lack of exclusive choice is a design decision based on a personal conjecture about computational limitations in biological systems.

7.3.3 Make Friends

The Make Friends narrative, together with the Reply to Request narrative, governs the process of gaining friendships. Here message-passing (Sect. 4.3.1) is used to ask an agent if they want to be a friend, and to reply to requests. The Make Friends narrative also uses a narrative pattern that we call a mission, which will be discussed in detail in Sect. 7.4.

Make Friends
1. wait Meet-proximal-compat then Start-friend-mission
2. if Check-friend then Send-friend-message
3. wait Poll-friend-replies then No-action
4. if Message-yes then Gain-friend
5. if Message-no then Record-friend-refusal
6. if No-condition then End-mission
7. loop

The agent will test the Meet-proximal-compat condition on each clock tick until another agent who is not yet a friend is close (depending upon disposition, but typically within 2 radii of the agent) and also is compatible. Compatibility also depends upon disposition and upon the results of comparing value systems (Sect. 5.3). A ‘moral rule’ is built into this condition so that agents do not attempt to befriend an agent with whom they are giving or receiving care. If a proximal compatible non-friend has been found, then the Start-friend-mission action will be executed to begin a mission. Let A stand for the agent whose narrative is executing this action, and B stand for the potential friend to be gained. In Step 2, the Check-friend condition first ensures that the limitation on the number of friendships A and B may have is not yet exceeded, and it ensures that A is not already in a caregiving/receiving relationship with B. If the condition fails, then the mission terminates. Otherwise agent A will send a message to agent B requesting friendship and wait for a reply (Step 3). Agent B’s Reply To Request narrative, which has been waiting for a friendship request message, will proceed and send a reply to A. The reply is received by A’s Poll-friend-replies and stored in a variable local to the mission. If the reply is affirmative, A’s Gain-friend action will be executed, and the mission will end. If the reply is


negative, A’s Record-friend-refusal action will be executed, and the mission will end. During the Gain-friend action, agent B is added to the connections (Sect. 4.3.3) of agent A, and agent A is added as a connection of B. Finally, the dispositions of A and B are updated to increment or decrement the reward element of their economies. The amount of update depends on other factors in the disposition such as the degrees of basic values security and hedonism.

7.3.4 Reply to Request

This narrative simply waits for a friend request to be received, stores the message in a variable local to the narrative, and replies in the affirmative or negative based on the waiting agent’s assessment of the sending agent.

Reply To Request
1. wait Poll-friend-requests then Reply-friend-message
2. loop

In both Make Friends and Reply To Request, polling is considered an acceptable action because the narrative needs to wait. Only the calling narrative blocks because of its wait operation; all other narratives execute asynchronously.

7.3.5 Maintain Friendships

The Maintain Friendships narrative is also very simple. It is merely a way to ensure that the Maintain-friends action is performed at each tick.

Maintain Friendships
1. if No-condition then Maintain-friends
2. loop

The No-condition condition always tests true, which guarantees that the Maintain-friends action will be performed. The purpose of this action is to check whether the conditions of friendship still prevail. If circumstances are such that the friendship between agents A and B is to be dissolved, then the connection is marked for dissolution at the end of the event loop for the current clock tick, so that all dissolutions take place at the same time. When dissolution takes place, the connection is deleted and the economies of both agents are updated. The ex-connection lists of A and B are updated so that A is recorded as an ex-connection of B, and B is recorded as an ex-connection of A.


7.3.6 Give Friends Care

Give Friends Care is one of the two caregiving narratives. The other one is Give Others Care. The two narratives are both about caregiving to needy agents, but having two narratives models the value system in which the basic value benevolence relates to caring for members of an agent’s in-group, and the basic value universalism relates to caring for members of the out-group [1]. Depending upon the degree of these values among others in an agent’s value system, the agent may or may not be disposed to offer care to friends, or to non-friends, or to both. Also, an agent in need of care may refuse to accept care if the degree of its self-direction value is too high. This is analogous to human situations where a person’s right to the refusal of care is founded upon one of the basic ethical principles of medicine, autonomy. Here the analogy to autonomy is not complete, as a high degree of self-direction (perhaps motivating ‘stubbornness’ or resistance to care) is only one factor in a real human situation.

Give Friends Care
1. wait Find-needy-friend then Start-care-mission
2. if No-condition then Raise-carer-sign
3. while Care-target-far then Move-to-target
4. while Care-target-near then Give-care
5. if No-condition then End-mission
6. if No-condition then Lower-carer-sign
7. loop

Starting with Step 1, this narrative waits until a needy friend can be found. The Find-needy-friend condition will succeed when a friend who has raised their Need Care sign is located within an observable distance (typically 10 distance units), and the friend has a disposition to accept care. If the condition succeeds, the agent raises its Carer sign (a red cross, the generic emblem for medical care), the needy friend is identified as a target needing care, and a mission to care for the friend is started with the objective that the target agent no longer requires care. The mission is specified in Steps 3–5. Pursuing the mission involves moving to the needy agent, and then giving care when the needy agent is proximal. The velocity of movement depends partly upon the intensity of the agent’s benevolence value. The Give-care action causes the economies of the caregiving and the care-needing agents to be updated by incremental amounts. Because Step 4 is controlled by a while operation, an increment of care will be given on each tick until the objective is achieved (the target agent is no longer in need of care) or until the target is not proximal. Precisely how much care is given depends on the disposition (in particular, the intensity parameter of the benevolence value) of the carer. When Step 5 is reached, the mission has ended. Then the agent can lower its Carer sign, and the narrative loops back to the beginning to find another needy friend.
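One tick of Give-care, as described in Chap. 6, moves small increments between the two economies: the carer loses energy and gains reward, and the recipient gains energy and loses stress. The sketch below assumes an array layout for the economy and an arbitrary increment scaled by the benevolence intensity; both are illustrative, not the values used in Affinity.

```java
/** Sketch of one tick of the Give-care action; increments are assumed values. */
final class GiveCareSketch {
    static final double INCREMENT = 0.01;      // assumed per-tick care increment

    static double clamp(double x) { return Math.max(0.0, Math.min(1.0, x)); }

    /** economy arrays: [energy, reward, attach, stress], each in [0.0, 1.0]. */
    static void giveCareTick(double[] carer, double[] needy, double benevolenceIntensity) {
        double amount = INCREMENT * (0.5 + benevolenceIntensity); // scaled by disposition
        carer[0] = clamp(carer[0] - amount);   // carer spends energy
        carer[1] = clamp(carer[1] + amount);   // carer gains reward
        needy[0] = clamp(needy[0] + amount);   // recipient gains energy
        needy[3] = clamp(needy[3] - amount);   // recipient's stress is reduced
    }
}
```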


7.3.7 Give Others Care

The Give Others Care narrative is identical to the Give Friends Care narrative except for Step 1, where the agent waits until a needy non-friend is located:

Give Others Care
1. wait Find-needy-other then Start-care-mission
2. if No-condition then Raise-carer-sign
3. while Care-target-far then Move-to-target
4. while Care-target-near then Give-care
5. if No-condition then End-mission
6. if No-condition then Lower-carer-sign
7. loop

These are two separate narratives because agents with a suitable disposition are able to provide care to friends and non-friends at the same time, as all narratives operate concurrently. Some agents will have a disposition that makes them able to offer care only to friends, or only to non-friends, or to no agents. Details of the disposition are also different; for example, here the velocity of movement to the needy agent depends partly upon the intensity of the agent’s universalism value.

7.3.8 Find a Leader

Some agents will be motivated to affiliate themselves with another agent because they agree on a particular issue (Sect. 5.7), and because the other agent has a suitable disposition as a leader. This is about forming a leader-follower relationship. A leader-follower relationship has entirely different criteria from friendship for the formation, maintenance and dissolution of the relationship, but the narrative sequence is similar. Also, a leader does not need to be proximal, and a leader does not need to give permission to the follower. The point of comparison is the social object of a particular issue. A leader with strong intensity for the issue will ‘project’ simulated influence that can be identified by other agents, analogous to using broadcast media, so close proximity is not necessary:

Find a Leader
1. wait Identify-leader then Follow
2. loop

In Step 1, the agent will test the Identify-leader condition on each clock tick until another agent (which may or may not already be a friend, and need not be proximal, but is not already in a leader-follower relationship with the agent) is identified as a leader on an issue with which the agent has agreement. Compatibility also depends


upon disposition and upon the results of comparing value systems (Sect. 5.3). If a potential leader has been identified, then the Follow action will be executed. Let A stand for the agent whose narrative is executing this action, and B stand for the potential leader to follow. Agent B is added as a leader to the connections (Sect. 4.3.3) of agent A, and agent A is added as a follower connection of B. Finally, the dispositions of A and B are updated to increment the reward element of their economies.

7.3.9 Find a Follower

Suitably disposed agents who are leaders may seek to find followers. Among the concerns to test in a potential follower are agreement on a relevant issue and attitudes such as loyal. Depending on its disposition, a potential agent may not be identified as a follower by Identify-follower, but may become a follower at another time when the Identify-leader condition of its Find a Leader narrative is satisfied. Here message-passing (Sect. 4.3.1) is used to ask an agent if they want to be a follower, and to reply to requests:

Find a Follower
1. wait Identify-follower then Start-recruit-mission
2. if Check-follower then Send-recruit-message
3. wait Poll-recruit-replies then No-action
4. if Message-yes then Gain-follower
5. if Message-no then Record-recruit-refusal
6. if No-condition then End-mission
7. loop

This narrative has a similar pattern to Make Friends in its use of message passing, because followers need to be asked for permission. The dual to this narrative is Reply to Leader, which is similar to Reply To Request.

7.3.10 Reply to Leader

This narrative simply waits for a follower request to be received, stores the message in a variable local to the narrative, and replies in the affirmative or negative based on an assessment of the leader agent.

Reply To Leader
1. wait Poll-recruit-requests then Reply-follower-message
2. loop


In Step 1, the agent will test the Identify-follower condition on each clock tick until another agent (which may or may not already be a friend, need not be proximal, and is not already a follower) is identified as a follower on an issue with which the agent has agreement. Compatibility also depends upon disposition and upon the results of comparing value systems (Sect. 5.3). If a potential follower has been identified, then the Recruit action will be executed. Let A stand for the agent whose narrative is executing this action, and B stand for the potential follower. Agent B is added to the connections (Sect. 4.3.3) of agent A, and agent A is added as a connection of B. Finally, the dispositions of A and B are updated to increment the reward element of their economies. One unique feature of this narrative is that the assessment made by the prospective follower (who is executing this narrative) includes the attitude sceptical. Even though the prospective follower may be in agreement with the issue represented by the leader, prospective followers with a high (low) degree of the attitude sceptical will be less (more) likely to send an affirmative reply to the leader.

7.4 Missions

As seen in Sects. 7.3.6 and 7.3.7, a narrative may start and end a mission. A mission is a sequence of steps within a narrative that is controlled in a special way. When a mission is started, state variables local to the mission are initialised to an objective, a target, and a status. The objective takes the form of a conditional that is tested at the beginning of each step while the mission is underway. If the objective is not satisfied, the mission continues by executing the step. If the objective is satisfied, the mission is immediately completed, and control passes to the step that has a matching End-mission action. The target of the mission is the agent to whom the actions in the mission will be directed.

The status of a mission can take one of four states. A mission is LIVE while the objective has not yet been satisfied. A mission is COMPLETED when the objective is satisfied. Any of the actions or conditions of a mission can set the status to CANCELLED, in which case control goes straight to the step containing the matching End-mission action. A mission is CLOSED when it has either completed or been cancelled, and any local state associated with the mission is discarded.

In the Give Friends Care narrative above, the action of Step 1 is to start a care mission. Local state variables for the mission are initialised to the target (the needy agent) and the objective (here, a conditional that tests whether the target agent has become healthy), and the status is set to LIVE. Control then proceeds to the next step of the narrative.

Missions are a useful way to represent time-extended sequences of actions because they evaluate the same conditional before each step in a sequence of steps, and the conditional does not need to be mentioned in each step of the mission. This


is more powerful than the traditional while...do control structure of conventional programming languages because the condition is tested at every step instead of at the beginning of a loop.
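A mission’s local state can be captured in a small class such as the sketch below, in which the objective is a conditional tested before each mission step and the status follows the LIVE/COMPLETED/CANCELLED/CLOSED life cycle described above. The class and method names are illustrative, not the Affinity implementation.

```java
import java.util.function.BooleanSupplier;

/** Sketch of mission-local state: a target, an objective, and a status. */
final class MissionSketch {
    enum Status { LIVE, COMPLETED, CANCELLED, CLOSED }

    private final int targetId;                  // agent the mission is directed at
    private final BooleanSupplier objective;     // tested before every mission step
    private Status status = Status.LIVE;

    MissionSketch(int targetId, BooleanSupplier objective) {
        this.targetId = targetId;
        this.objective = objective;
    }

    int targetId() { return targetId; }
    Status status() { return status; }

    void cancel() { if (status == Status.LIVE) status = Status.CANCELLED; }

    /** Called before each step while the mission is underway: if the objective is
        now satisfied, the mission completes and control jumps to End-mission. */
    boolean shouldContinue() {
        if (status != Status.LIVE) return false;
        if (objective.getAsBoolean()) { status = Status.COMPLETED; return false; }
        return true;
    }

    void close() { status = Status.CLOSED; }     // local state is discarded afterwards
}
```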

7.5 User Interface

In the graphical user interface (GUI) of Affinity, the execution state of all the narratives of a selected agent may be displayed (Fig. 7.1) as a mimic diagram. The agent is selected by clicking on it in the environment display (Fig. 4.2). The mimic diagram has a row of steps for each narrative, and the circle representing a step is filled in if the step counter for that narrative is currently executing that step. By this means it is possible for the user to view the progress through all the narratives of a selected agent as the simulation proceeds. The mimic diagram is updated at each tick of the simulation.

Fig. 7.1 Mimic diagram displaying the progress of seven of the narratives of the currently selected agent. The filled circles indicate which steps in the narratives are currently being executed

7.6 Concurrency

The execution of all narratives for all agents is simulated concurrently. Affinity has been designed so that it is not necessary to use synchronisation of processes at the implementation (Java) level. At each tick, the event loop executes one step of each narrative in each agent. If the condition of a wait operation is not satisfied, then the next time the condition will be tested is at the next tick. Also, all conditions and actions are atomic, that is, they are short pieces of Java code that execute without interruption. However, race conditions among the narratives of agents can and do occur. An example is when several agents decide to offer care to the same needy agent at about the same time. This is not prevented, because it does no harm to the simulation. While this may result in an untidy scrum of caregivers, the needy agent will be made healthy more quickly. It would be possible for a prospective caregiving agent to use a mutex to lock access to the needy agent, but the simulation deliberately does not provide a way for synchronisation signals to be communicated among agents.

7.7 Discussion

Narratives are a way of representing the process by which different types of relationships can be formed over time. All relationships influence and are influenced by the value systems and the internal economies of the agents. Friendship is considered a long-term relationship that can be dissolved in some circumstances. Caregiving/receiving is a shorter term relationship that forms and ends for a specific


purpose. Leading-following is a relationship which, unlike the others, depends on the importance component (Sect. 2.3) that agents ascribe to their values. Both leaders and followers hold that the value power is important, but behave in different ways because leaders have a higher degree of power that is pursued with more intensity, and followers have a low degree of power that may be pursued with greater or lesser intensity. The leader-follower model here also uses the sceptical attitude within the disposition of followers. In the Make Friends narrative, a ‘moral rule’ was built into the Gain-friend action so that the agent would not befriend another agent with whom it is in a care relationship. In human society, it is a social norm to consider it a best practice that agents in a care relationship should not become close friends simply because of the circumstances of the care relationship. However, a future implementation may not need to build moral rules into the conditions of narratives, but would define the narrative itself to act according to social norms that have been recruited by the agent into its own value system. Such an implementation would then use the agents’ dispositions to regulate the agents’ behaviour in relation to these narratives. This makes it possible to consider situations in which agents exercise some flexibility concerning social norms. This opens the way to simulating social systems in which social norms can be violated or not depending upon the disposition of agents. It is then possible to define actions of agents to react to violation of social norms, during which elements of their economies could be changed to reflect the way that agents react to social norms and violations of them. The current implementation of Affinity has not yet considered social norms as one of the concerns to be modelled, but this would be a useful future development.

Reference

1. Schwartz, S.H.: An overview of the Schwartz theory of basic values. Online Read. Psychol. Cult. 2(1) (2012)

Chapter 8

Analysis of Value Systems

Abstract This chapter describes the use of non-negative matrix factorisation (NMF), an algorithm for the decomposition of data sets into a small number of latent features. Coupled with a model selection mechanism, NMF is an efficient method for the identification of latent features and provides a powerful way to discover latent patterns in data. We use convex NMF together with a model selection policy based on consensus clustering to analyse data about the states of agents in the simulation. This can provide insight into relationships between individuals and about the general wellbeing of the simulated population.

8.1 Introduction

In this chapter we apply non-negative matrix factorisation [4] to data sets resulting from the value systems of simulated agents. The aim of this analysis is to find latent or ‘hidden’ patterns in the data. Such latent patterns may be informative about the affinities and values of individuals or populations, and about any inter-relations between affinities, values and individuals. For example, in some countries, loyalty to a football club correlates with the political views of fans, and these correlations can be estimated by statistical methods [3, 5]. In the experiments described below, we will not look at social objects such as football teams and political issues because the simulated agents do not engage with these particular social objects. Instead, the analysis here considers whether there are any latent variables that explain how the value systems of agents in the simulation have developed after several hundred ticks have elapsed.


8.2 The Data Set

Because matrix factorisation in general has been used for a wide variety of applications in which the data take different forms, we first define here a uniform terminology. The data are represented by a value matrix V of size N × M, where each V_ij represents a parameter j from the value system of agent i. The data set shown in Table 8.1 is reduced in size here to illustrate the concept. Each of the N agents is represented by a row of matrix V. For row i, each V_ij is one of the sampled quantities from agent i’s value system. In what follows these will be called propositions. The columns of V are (in order from left to right) the basic values benevolence, universalism, security, stimulation, the attitude forgiving, and the elements of the agent’s economy energy, attach, stress and reward. The values of matrix V are shown together with agent and parameter headings in Table 8.1.

A number of applications have used this format of matrix V for analysis. In recommender systems, the V_ij represent ratings of M products by N reviewers. In document analysis, the V_ij represent the frequency weights of M terms in a set of N documents. In gene analysis, the V_ij represent expression levels of M observations of N genes. Using our terminology, the V_ij represent samples of M parameters from the value systems of N agents. The aim is to find a small number R of latent features which, when combined in linear combination, will approximate V and thereby represent an ‘explanation’ of the data.

Several standard techniques are used to find features in data, including analysis of variance, principal components analysis, vector quantisation, Bayesian estimation, and singular value decomposition. However, we use the more recent technique of non-negative matrix factorisation (NMF) [4] because non-negativity is inherent in the data being considered, and NMF automatically clusters the data, making interpretation easier. Depending on the cost function to be minimised, NMF is formally equivalent to K-means clustering and probabilistic latent semantic analysis.

8.3 Non-negative Matrix Factorisation

Given a non-negative matrix V and a desired number of latent features R, an NMF algorithm iteratively calculates non-negative matrix factors W and H such that V ≈ W H. When V is an N × M matrix, and the number of features chosen for the approximation is R, where R is smaller than M and N, then W is an N × R features matrix and H is an R × M coefficients matrix. R is the desired rank of the factorisation problem. The algorithm starts by initialising W and H to small random values, which are iteratively updated to minimise a cost function. Our algorithm implements both alternative formulations of NMF introduced by [4], and additionally enforces a


Table 8.1 Small data example. Each table row represents a set of parameters from an agent’s value system. These are the values of V

Agent   benevol  univers  security  stimul  forgive  energy  attach  stress  reward
A1      0.71     0.80     0.19      0.02    0.74     0.00    0.00    0.00    1.00
A2      0.24     0.83     0.32      0.01    0.87     0.51    0.00    0.00    0.80
A3      0.69     0.59     1.00      0.68    0.63     0.30    0.00    0.00    1.00
A4      0.97     0.22     0.90      0.86    0.83     0.56    0.00    0.00    1.00
A5      0.73     0.91     0.00      0.18    0.57     0.69    0.00    0.00    1.00
A6      0.45     0.75     0.78      0.92    0.89     0.74    0.63    0.00    1.00
A7      0.38     0.00     0.46      0.35    0.61     0.39    0.00    0.00    1.00
A8      0.20     0.14     0.74      0.23    1.00     0.31    0.00    0.00    0.83
A9      0.54     0.09     0.88      0.62    0.61     0.98    0.95    0.00    1.00
A10     0.72     0.62     0.51      0.64    0.73     1.00    0.27    0.00    0.63
A11     0.28     0.90     0.50      0.00    0.25     0.37    0.00    0.00    1.00
A12     0.19     1.00     0.73      0.38    0.74     0.84    0.09    0.00    0.67
A13     1.00     0.61     0.50      0.39    0.45     0.00    0.60    0.92    1.00
A14     0.00     0.74     0.00      0.46    0.35     0.89    0.85    0.00    1.00
A15     0.29     0.00     0.85      0.00    0.34     1.00    0.00    0.00    0.46
A16     0.24     0.90     0.82      0.21    0.00     0.71    0.92    0.00    0.26
A17     0.69     0.73     0.21      0.54    0.60     0.83    0.00    0.00    0.62
A18     0.08     0.18     0.15      0.34    0.33     0.66    0.73    0.00    0.79
A19     0.00     0.09     0.85      0.70    0.06     0.00    1.00    1.00    0.73
A20     0.22     0.42     0.13      0.03    0.25     0.41    0.90    0.37    0.50
A21     0.66     0.26     0.65      0.00    1.00     1.00    0.00    0.00    0.81
A22     0.10     0.31     0.19      0.93    0.01     0.88    0.01    0.00    0.72
A23     0.24     0.58     0.09      0.58    0.82     1.00    0.00    0.00    0.68
A24     1.00     0.65     0.54      0.76    0.20     0.99    0.99    0.00    0.46
A25     0.23     0.61     0.48      0.53    0.30     0.02    0.00    0.00    0.91
A26     0.69     0.66     0.56      0.38    1.00     1.00    0.02    0.00    0.49
A27     0.40     0.37     0.00      0.76    0.35     0.58    0.35    0.00    1.00
A28     0.39     0.16     1.00      0.23    0.88     1.00    0.00    0.00    0.73
A29     0.84     0.07     0.69      0.07    0.20     0.00    1.00    1.00    0.61
A30     0.44     0.00     0.37      0.49    0.46     0.00    1.00    1.00    0.58
A31     0.96     0.51     0.41      0.99    0.60     1.00    0.77    0.00    0.74
A32     0.70     0.08     0.18      0.24    1.00     0.25    0.00    0.00    1.00
A33     0.97     0.50     0.41      0.77    0.12     0.00    1.00    1.00    1.00
A34     0.75     0.36     0.64      0.00    0.15     0.00    0.16    0.50    1.00
A35     1.00     0.77     0.22      0.78    0.07     0.84    1.00    0.00    1.00
A36     0.64     0.86     0.57      0.46    0.34     0.72    0.00    0.00    0.95
A37     1.00     0.26     0.97      0.31    0.00     0.00    0.74    0.94    0.43
A38     0.82     0.85     0.72      0.39    0.82     1.00    0.00    0.00    0.68
A39     0.32     0.26     0.56      0.50    0.54     0.00    0.00    0.00    0.56
A40     0.64     0.33     0.28      0.56    0.81     0.99    0.00    0.00    0.58


convexity constraint on W, which improves the quality of the clustering result [2]. The two alternative cost functions in [4] are the squared error (or Frobenius norm), and an extension of the Kullback-Leibler divergence to positive matrices. The squared error version minimises the function F(W, H) = ‖V − W H‖²_F, where ‖A − B‖²_F = Σ_{ij} (A_{ij} − B_{ij})², using the following iterative update rules:

H_{rm} ← H_{rm} (W^T V)_{rm} / (W^T W H)_{rm}    (8.1)

and

W_{nr} ← W_{nr} (V H^T)_{nr} / (W H H^T)_{nr}    (8.2)

The divergence version minimises the function D(V ‖ W H), where D(A ‖ B) = Σ_{ij} ( A_{ij} log(A_{ij}/B_{ij}) − A_{ij} + B_{ij} ), using the following update rules:

H_{rm} ← H_{rm} [ Σ_i W_{ir} V_{im} / (W H)_{im} ] / Σ_j W_{jr}    (8.3)

and

W_{nr} ← W_{nr} [ Σ_m H_{rm} V_{nm} / (W H)_{nm} ] / Σ_j H_{rj}    (8.4)

For large problems, the number of desired features R can be significantly smaller than both M and N. In the illustrative example of Table 8.1, M and N are already small, so we set the desired number of features R = 3 to explore whether the variance in the data can be explained by a smaller matrix of rank 3. We impose a convexity constraint on the problem by normalising each column of W to sum to 1 after each iteration of the chosen update rule. The benefits of the convexity constraint in providing a sharper distinction between principal and non-principal component subspaces are discussed in [2].

A given factorisation V ≈ W H can be used for data analysis in various ways. The prediction error | V − W H | can be used to evaluate whether some combinations of agent and proposition are ‘easier’ to predict than others. Matrix H can be used to group the M samples into k clusters: each sample is placed into the cluster corresponding to the maximum feature in the sample; that is, sample j is placed in cluster i if H_ij is the largest entry in column j. Matrix W can be used in a similar way, using the maximum values in each row. For clustering to be reliable, a method for model selection is recommended.
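For concreteness, the squared-error updates (Eqs. 8.1 and 8.2) and the column normalisation of W can be implemented with plain arrays, as in the sketch below. This is a minimal illustration written for this text, not the Affinity analysis code; a small epsilon is added to the denominators to avoid division by zero.

```java
import java.util.Random;

/** Minimal sketch of NMF with the squared-error multiplicative updates. */
final class NmfSketch {

    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length, k = a[0].length, m = b[0].length;
        double[][] c = new double[n][m];
        for (int i = 0; i < n; i++)
            for (int p = 0; p < k; p++)
                for (int j = 0; j < m; j++)
                    c[i][j] += a[i][p] * b[p][j];
        return c;
    }

    static double[][] transpose(double[][] a) {
        double[][] t = new double[a[0].length][a.length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[0].length; j++)
                t[j][i] = a[i][j];
        return t;
    }

    /** Factorise V (N x M) into W (N x R) and H (R x M); returns {W, H}. */
    static double[][][] factorise(double[][] v, int r, int iterations, long seed) {
        int n = v.length, m = v[0].length;
        Random rng = new Random(seed);
        double[][] w = new double[n][r], h = new double[r][m];
        for (double[] row : w) for (int j = 0; j < r; j++) row[j] = rng.nextDouble();
        for (double[] row : h) for (int j = 0; j < m; j++) row[j] = rng.nextDouble();
        double eps = 1e-9;                                 // guard against division by zero
        for (int it = 0; it < iterations; it++) {
            // H <- H * (W^T V) / (W^T W H)                (Eq. 8.1)
            double[][] wt = transpose(w);
            double[][] numH = multiply(wt, v);
            double[][] denH = multiply(multiply(wt, w), h);
            for (int i = 0; i < r; i++)
                for (int j = 0; j < m; j++)
                    h[i][j] *= numH[i][j] / (denH[i][j] + eps);
            // W <- W * (V H^T) / (W H H^T)                (Eq. 8.2)
            double[][] ht = transpose(h);
            double[][] numW = multiply(v, ht);
            double[][] denW = multiply(multiply(w, h), ht);
            for (int i = 0; i < n; i++)
                for (int j = 0; j < r; j++)
                    w[i][j] *= numW[i][j] / (denW[i][j] + eps);
            // Convexity constraint: normalise each column of W to sum to 1
            for (int j = 0; j < r; j++) {
                double sum = 0.0;
                for (int i = 0; i < n; i++) sum += w[i][j];
                if (sum > 0) for (int i = 0; i < n; i++) w[i][j] /= sum;
            }
        }
        return new double[][][]{w, h};
    }
}
```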


8.4 Model Selection

For any value of R, the NMF algorithm groups the samples into clusters, and it is useful to determine whether a given R decomposes the samples into clusters that have a meaningful interpretation. Because the NMF algorithm is not guaranteed to find the lowest-error solution but only a local minimum, the algorithm may or may not converge to the same solution on each run, depending on the random initial conditions. We have adapted a method for consensus clustering [1] to ensure that groupings into clusters are consistent and stable over many runs of the NMF algorithm with different initial conditions.

Our version of this method creates a consensus matrix C of size M × M, with entry C_ij representing the probability that propositions i and j belong to the same cluster. For each run of NMF, the coefficients matrix H provides the cluster membership. If H_kj > H_ij for all i ≠ k, this suggests that proposition j belongs to cluster k. While a given run may assign a given proposition to a different cluster because of different initial conditions, the key observation is that a set of propositions will consistently belong to the same clusters over many runs. The cluster consistency method therefore abstracts away from the particular cluster label assigned on a given run, and identifies consistent co-membership of the same propositions within a cluster on each run. Each run of NMF updates C by incrementing all C_ij for which propositions i and j belong to the same cluster according to matrix H. At the end of all the runs, we normalise C by dividing its values by the total number of runs. The C_ij therefore range from 0.0 to 1.0 and reflect the probability that samples i and j are in the same cluster. If a clustering is stable, we expect that C will tend not to vary among runs, and that the entries of C will be close to 0.0 or 1.0. In our experiments we run NMF 100 times to construct matrix C.
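The consensus matrix update after each run can be sketched as follows: each column of H is assigned to the row holding its largest coefficient, and co-membership counts are accumulated. The method name and array representation are illustrative assumptions, not the code used for the experiments.

```java
/** Sketch of one consensus-matrix update after a single NMF run. */
final class ConsensusSketch {
    /** h is the R x M coefficients matrix from one run; consensus is M x M. */
    static void accumulate(double[][] h, double[][] consensus) {
        int r = h.length, m = h[0].length;
        int[] clusterOf = new int[m];
        for (int j = 0; j < m; j++) {
            int best = 0;
            for (int k = 1; k < r; k++) if (h[k][j] > h[best][j]) best = k;
            clusterOf[j] = best;                 // column j belongs to its largest row
        }
        for (int i = 0; i < m; i++)
            for (int j = 0; j < m; j++)
                if (clusterOf[i] == clusterOf[j]) consensus[i][j] += 1.0;
    }
    // After all runs (100 in the text), divide every entry of consensus by the
    // number of runs so that each entry lies in [0.0, 1.0].
}
```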

8.5 Results

Non-negative matrix factorisation can reveal a variety of latent information from the data matrix. Here we look at two types of information: prediction and clustering. In the experiments described here, we used the data shown in Table 8.1, set R = 3, and iterated the update rules until the root mean square error between V and W H dropped below 10^-7. For the squared error rule (Eqs. 8.1, 8.2), convergence was achieved in around 800 iterations. For the divergence rule (Eqs. 8.3, 8.4), convergence was achieved in around 500 iterations. For the model selection, 100 runs were performed to construct the consensus matrix. There was no difference in cluster results between using the squared error update rule and the divergence update rule.


8.5.1 Prediction

Using the matrix product W H, each (W H)_ij can serve as a prediction of the data value V_ij. This property is used by recommender systems to predict the rating a customer may give to an item they have not yet reviewed, and it is also used to impute values for missing data. Here, because the data are complete (for each agent there is a value in each column), we interpret the absolute error | V − W H | as an indication of how ‘easy’ it is to make a prediction. An error very near zero suggests that the prediction comes very close to the actual data. A larger error indicates that the available data are not sufficient to make a reliable prediction from the small number of latent variables (here R = 3) alone.

Table 8.2 shows the absolute error | V − W H | to the nearest three decimals. Certain values stand out. Looking at the column sums, the total error for stress is lower than for the other elements of the economy, suggesting that it is easier to predict stress; by contrast, reward is the hardest of the economy elements to predict from the data given. Looking at the row sums, it is easier to predict A38’s parameters (the lowest total error) than A16’s (the largest total error). It is also the case that at this point in the simulation run, A38 is in a friendship group with four other members, while A16 has only one friend. When used in general, observations of this type may offer insight into characteristics of the disposition of agents that make them easy or hard to predict.
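The error table itself is simple to compute once W and H are available; the sketch below reuses the multiply helper from the earlier NMF sketch (an assumption of this text, not the Affinity code) and returns the matrix of absolute errors, from which the row and column sums of Table 8.2 follow.

```java
/** Sketch of the absolute prediction error |V - WH| behind Table 8.2. */
final class PredictionErrorSketch {
    static double[][] absoluteError(double[][] v, double[][] w, double[][] h) {
        double[][] wh = NmfSketch.multiply(w, h);            // W H, the prediction
        double[][] err = new double[v.length][v[0].length];
        for (int i = 0; i < v.length; i++)
            for (int j = 0; j < v[0].length; j++)
                err[i][j] = Math.abs(v[i][j] - wh[i][j]);     // entry-wise |V - WH|
        return err;   // row and column totals can then be summed as in Table 8.2
    }
}
```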

8.5.2 Clustering

The model selection method gave 100% consistent clustering over 100 runs. The two most consistent clusters of parameters consisted of {universalism, stimulation} and {security, forgiving, reward}. At first it is not obvious why universalism and stimulation should cluster together. However, universalism influences whether an agent will offer care to a non-friend, and stimulation is used as a coefficient to regulate attach whenever two agents are in close enough proximity to touch: an agent with a higher (or lower) degree of stimulation will get a higher (or lower) level of attach. As close proximity is needed to offer care, this might be the latent value that is revealed by this analysis. It is more obvious that the cluster {security, forgiving, reward} is revealed, as these three parameters are involved at various stages of re-friending during the Maintain Friendships narrative.

The model selection method was also used to find clusters among agents using matrix W, from the observation that if W_jk > W_ji for all i ≠ k, this suggests that agent j belongs to cluster k. Not all entries of the consistency matrix showed 100% consistent clustering over 100 runs, but most agents were in clusters with 100% consistency. For example, the clusters containing more than one agent were as follows:


Table 8.2 The absolute error | V − W H |, with agent and parameter headings, and sums of rows and columns

Agent   benevol  univers  security  stimul  forgive  energy  attach  stress  reward  Sum
A1      0.384    0.507    0.267     0.023   0.276    0.160   0.000   0.000   0.006   1.622
A2      0.061    0.324    0.184     0.132   0.272    0.321   0.000   0.000   0.102   1.397
A3      0.150    0.071    0.296     0.265   0.120    0.361   0.000   0.000   0.141   1.404
A4      0.030    0.378    0.196     0.253   0.085    0.125   0.000   0.000   0.002   1.070
A5      0.096    0.188    0.472     0.149   0.119    0.174   0.000   0.000   0.282   1.480
A6      0.012    0.231    0.184     0.039   0.017    0.401   0.158   0.225   0.120   1.387
A7      0.083    0.149    0.202     0.207   0.077    0.109   0.001   0.001   0.092   0.921
A8      0.386    0.309    0.585     0.061   0.063    0.091   0.000   0.000   0.199   1.694
A9      0.020    0.626    0.206     0.016   0.080    0.194   0.356   0.285   0.239   2.022
A10     0.281    0.427    0.093     0.223   0.106    0.251   0.108   0.078   0.370   1.938
A11     0.018    0.640    0.409     0.151   0.344    0.099   0.000   0.000   0.381   2.042
A12     0.026    0.029    0.099     0.119   0.421    0.090   0.031   0.026   0.122   0.963
A13     0.202    0.337    0.175     0.171   0.212    0.000   0.221   0.524   0.391   2.234
A14     0.000    0.328    0.478     0.189   0.136    0.040   0.267   0.279   0.255   1.973
A15     0.035    0.135    0.490     0.292   0.280    0.435   0.000   0.000   0.252   1.919
A16     0.000    0.361    0.505     0.343   0.062    0.140   0.297   0.301   0.442   2.451
A17     0.082    0.034    0.397     0.132   0.198    0.068   0.000   0.000   0.049   0.960
A18     0.009    0.344    0.034     0.196   0.242    0.321   0.301   0.206   0.322   1.976
A19     0.153    0.138    0.060     0.378   0.261    0.000   0.495   0.281   0.073   1.840
A20     0.000    0.247    0.096     0.262   0.159    0.096   0.008   0.068   0.058   0.994
A21     0.047    0.363    0.028     0.259   0.196    0.421   0.000   0.000   0.070   1.384
A22     0.009    0.234    0.077     0.118   0.123    0.134   0.003   0.003   0.172   0.874
A23     0.046    0.062    0.270     0.428   0.028    0.108   0.001   0.002   0.130   1.075
A24     0.242    0.102    0.228     0.044   0.231    0.272   0.359   0.304   0.624   2.406
A25     0.230    0.076    0.107     0.222   0.029    0.387   0.000   0.000   0.181   1.234
A26     0.038    0.042    0.009     0.228   0.301    0.198   0.006   0.005   0.362   1.188
A27     0.000    0.142    0.410     0.203   0.020    0.281   0.083   0.131   0.375   1.645
A28     0.100    0.030    0.417     0.305   0.127    0.040   0.000   0.000   0.149   1.167
A29     0.186    0.129    0.213     0.254   0.223    0.000   0.221   0.413   0.244   1.883
A30     0.084    0.268    0.113     0.113   0.013    0.000   0.256   0.396   0.057   1.300
A31     0.064    0.194    0.281     0.246   0.370    0.138   0.299   0.227   0.310   2.129
A32     0.196    0.230    0.151     0.087   0.491    0.078   0.000   0.000   0.095   1.328
A33     0.090    0.373    0.114     0.371   0.061    0.000   0.755   0.155   0.001   1.918
A34     0.325    0.282    0.100     0.110   0.183    0.001   0.357   0.250   0.343   1.951
A35     0.273    0.228    0.476     0.425   0.269    0.021   0.266   0.353   0.115   2.425
A36     0.026    0.233    0.143     0.165   0.439    0.148   0.001   0.001   0.054   1.210
A37     0.179    0.203    0.317     0.099   0.085    0.000   0.225   0.476   0.361   1.945
A38     0.115    0.117    0.117     0.089   0.085    0.091   0.000   0.000   0.210   0.825
A39     0.000    0.097    0.479     0.310   0.150    0.669   0.000   0.000   0.128   1.833
A40     0.082    0.462    0.211     0.123   0.152    0.304   0.001   0.001   0.107   1.442
Sum     4.360    9.670    9.691     7.799   7.107    6.766   5.077   4.990   7.988   12.997


{A1, A3, A25}
{A2, A12, A17, A23}
{A4, A7, A8, A10, A15, A21, A32, A36}
{A6, A27}
{A13, A19, A29, A30, A33, A37}
{A16, A18, A20, A24}
{A22, A38}
{A26, A28, A40}

It is useful to compare these clusters with the friendship groups at the same point in the simulation. The friendship groups containing more than one agent were as follows:

{A1, A39}
{A2, A25}
{A3, A4, A6, A27, A36}
{A5, A14}
{A7, A9, A24, A32}
{A8, A11, A26, A38, A40}
{A10, A12, A18, A20, A34, A37}
{A15, A23}
{A16, A35}
{A17, A21, A22, A28, A31}

There is negligible overlap between these clusters. Some overlap might be expected, because agents with similar parameters could cluster as friends as well as through latent similarity; however, they do not, and there is a good reason for this. Friendships are formed when there is compatibility with the social object favourite-colour. The disposition of agents also plays a role in the friendship process, but the main point of comparison is favourite-colour: a friendship will not be formed unless the favourite-colour is similar, taking the aspects into account. By contrast, the clusters from model selection result from consistently finding given parameters predicted by the latent variable set. The parameters do not include favourite-colour, but only the ones in Table 8.1.

In general, it is important to remember that every simulation run will give different results because the initial conditions are set to random values. However, because the rules that govern the operation of the environment are the same, we should expect to see consistent patterns of behaviour that depend upon the initial parameters.

8.6 Summary

We have shown how non-negative matrix factorisation can be used together with a model selection mechanism for the identification of latent features in data. As an example, we have used convex NMF together with a model selection policy based on consensus clustering to analyse some of the parameters of agents’ value systems.


References

1. Brunet, J.-P., Tamayo, P., Golub, T.R., Mesirov, J.P.: Metagenes and molecular pattern discovery using matrix factorization. PNAS 101(12), 4164–4169 (2004)
2. Ding, C., Li, T., Jordan, M.I.: Convex and semi-nonnegative matrix factorizations. IEEE Trans. Pattern Anal. Mach. Intell. 32, 45–55 (2010)
3. Francisco, J.P.S.: Football and politics: preference for football teams in Portugal correlates with political party identification. Master’s thesis, Universidade Católica Portuguesa (2017)
4. Lee, D.D., Seung, H.S.: Algorithms for non-negative matrix factorisation. In: Leen, T., Dietterich, T., Tresp, V. (eds.) Proceedings of the 13th International Conference on Neural Information Processing Systems, vol. 13, pp. 535–541. MIT Press (2000)
5. Sriyakul, T., Fangmanee, A., Jermsittiparsert, K.: Whether loyalty to a football club can translate into a political support for the club owner: an empirical evidence from Thai League. J. Polit. Law 11(3), 47–52 (2018)

Chapter 9

Conclusion

Abstract This chapter completes the book with a general summary of what has been covered and an example application. It concludes with a section on topics of general interest that are related to but beyond the scope of this book.

9.1 General Summary

In this book we have considered how a cognitive entity, whether biological or artificial, might use its significant concerns and the significant concerns of those with whom it is in relationship. Intelligence is manifested in humans who participate as persons in relationship with other persons and other entities: human and non-human, living and non-living, tangible and intangible, real and imagined. One open question is whether this type of intelligence may emerge in artificial cognitive systems. While there can be no definitive answer at this time, we have explored how this type of intelligence can be modelled using agents as abstract entities that engage in relationships mediated by significant concerns, including basic values and social objects. This exploration has been in the context of the hypothetical case of the android, a human-like robot that people would accept as equal to humans in how they perform and behave in society. An android self-identifies as a non-human with its own integrity as a person, and is therefore able to participate in long-term relationships with other entities.

The book has also described the progress so far in building a computational model, called Affinity, that can simulate some aspects of the behaviour of agents that participate in relationships. Currently, simplified versions of three relationships are simulated: friendship, caregiving/receiving, and leader-follower. Building a computational model demonstrates the need to focus on specific design decisions. For example, numerical quantities are represented not explicitly as numbers, but as random variate samples from normal distributions. This avoids the need to specify precise values for numerical parameters, it provides a way to compare distributions instead of instantiated numbers, and it introduces a degree of stochasticity that is conjectured to be a feature of biological neural systems. It also restricts numerical values to the


range 0.0 to 1.0, which is useful for interpreting values as probabilities if need be, and the values can be used directly in a non-negative matrix factorisation for further analysis of the population’s value systems. Also, the agents’ response to social objects is extended to use two factors called precept and accept, for a more realistic model of how social objects represent the topics around which relationships form. Furthermore, every concern held by an agent is instantiated as a state variable that may contain additional attributes and three parameters: importance, degree, and intensity.

Narratives and missions have been introduced as a way to simulate concurrent information-bearing states extended over time within the behaviour process of agents. As a consequence of concurrent operation, the simulation is susceptible to race conditions, which can occur when the sequence of operations is not determinate. In computer-controlled systems, race conditions are undesirable because the correct outcome may not be achieved. However, this is not seen as a vulnerability here, because the desired outcome is a behaviour in the simulated world, not exactness of outcome or computational efficiency. A practical example that has been observed is when an agent requests care, and several other agents concurrently decide to execute missions to offer care, not knowing that another agent has already offered care. This results in a scrum of caregivers moving toward the needy agent (the target), but the situation resolves when sufficient care has been given by one or more caregivers, at which point all the relevant caregiving missions to the same target are cancelled by their caregivers. While this race condition results in the computational inefficiency of initiating more caregiving missions than necessary, the care mission has been achieved. Such a race condition is not unlike coordination problems that can be observed in human society, and as such is also a behaviour observed in this simulation.

Hundreds of simulations have been run using populations of between 30 and 50 agents. The simulation is initialised with a population of entities randomly placed in the world, and each agent is given a random favourite colour to use as a social object, and a random set of internal states for the basic values, attitudes, and other social objects such as issues. Most simulations proceed with a pattern in which agents form friendships, and then seek and give care. Friendships break off and are sometimes reformed. After about a minute of real time, a stable situation has emerged, in which there are two or three large groups of friends (large meaning 8–12 agents), several small groups (small being two or three agents), and a few singletons. The outcome of a few large groups, a larger number of couples, and a few singletons is consistently observed. It is not possible to conclude anything about the nature and formation of human social groups from these observations; rather, these groups are determined by the initialised parameters, the disposition of agents, and the narratives provided by the model. It is likely that other patterns of group membership would result from different settings of initial conditions, thresholds, and rules for how the elements of agents’ economies are updated.

What does this simulation offer to the idea of significant concerns? One answer is to look to the top two levels of the value pyramid of Chap. 2, which are Social Impact and Life Changing.
The simulation has initially focused upon the narratives of caregiving/receiving, forming friendship, the identification of leaders, and the
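A small sketch of this benign race condition is given below. It is not the Affinity code: Python threads stand in for concurrently executing missions, and the names Target, give_care and care_mission are assumptions made here for illustration. Several carers may set off for the same target, but every mission cancels itself once the target’s need has been met by anyone.

```python
import random
import threading
import time

class Target:
    """A needy agent whose requested care may be supplied by several carers."""
    def __init__(self, care_needed):
        self.care_needed = care_needed
        self.lock = threading.Lock()

    def give_care(self, amount):
        with self.lock:
            self.care_needed -= amount

    def satisfied(self):
        with self.lock:
            return self.care_needed <= 0.0


def care_mission(carer_id, target):
    """One caregiving mission: travel toward the target, give care, and
    cancel as soon as the target's need has been met."""
    while not target.satisfied():
        time.sleep(random.uniform(0.01, 0.05))  # travel time for one tick
        # Another carer may satisfy the need between this check and the
        # care given below: the classic race, harmless in this setting.
        if not target.satisfied():
            target.give_care(1.0)
    print(f"carer {carer_id}: mission cancelled or completed")


target = Target(care_needed=3.0)
missions = [threading.Thread(target=care_mission, args=(i, target)) for i in range(5)]
for m in missions:
    m.start()
for m in missions:
    m.join()
print("care need met:", target.satisfied())
```

More missions start than are strictly necessary, but the target is cared for whatever the interleaving, mirroring the behaviour described above.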

Hundreds of simulations have been run using populations of between 30 and 50 agents. The simulation is initialised with a population of entities randomly placed in the world, and each agent is given a random favourite colour to use as a social object, together with a random set of internal states for the basic values, attitudes, and other social objects such as issues. Most simulations proceed with a pattern in which agents form friendships, and then seek and give care. Friendships break off and are sometimes reformed. After about a minute of real time, a stable situation emerges in which there are two or three large groups of friends (large meaning 8–12 agents), several small groups (small meaning two or three agents), and a few singletons. The outcome of a few large groups, a larger number of couples, and a few singletons is consistently observed. It is not possible to conclude anything about the nature and formation of human social groups from these observations; rather, these groups are determined by the initialised parameters, the disposition of agents, and the narratives provided by the model. It is likely that other patterns of group membership would result from different settings of initial conditions, thresholds, and rules for how the elements of agents’ economies are updated.

What does this simulation offer to the idea of significant concerns? One answer is to look to the top two levels of the value pyramid of Chap. 2, which are Social Impact and Life Changing. The simulation has initially focused upon the narratives of caregiving/receiving, forming friendship, the identification of leaders, and the recruitment of followers. Friendship is related to the values of affiliation and belonging. Caregiving is an example of self-transcendence, as the caregiver uses its energy in the service of others. Receiving care can be related to self-actualisation when the recipient of care accepts its vulnerability. In the simulation, an agent is disposed to receive care when (among other factors) its self-direction and power values do not have a very high degree. Agents have value states initialised to random values for both benevolence and universalism, which are used in the simulation to provide a propensity for offering care to members of the in-group and out-group respectively. Leading and following are about affiliation to abstract ideals or issues, which humans often invest with a significance far beyond sense. A fuller model would focus more specifically upon activities in the top two layers of the value pyramids in Chap. 2.

9.2 Example Application

While this model was informed by observations of human society, it is not intended as a model of human society specifically. Nevertheless, it is possible to run simulations that are analogues of well-described human phenomena. One example arises from field research conducted by Edward Banfield [1] in rural southern Italy in 1955. He observed that people in that region displayed two codes of behaviour: One that promoted the success and well-being of their own immediate extended family, and an antagonistic one for members of other families, characterised by distrust, vendetta, envy, and suspicion. People believed that the good fortune of those in other families would inevitably harm their own interests.

A simulation in Affinity can be initialised so that each agent holds high degrees of benevolence and need for security, and a near-zero degree of universalism. Because the current version of Affinity does not yet model kinship, a set of issues (Sect. 4.3.2) was defined to serve as proxies for kinship groups. Each agent is assigned an identity defined by allegiance to one of three issues, called in the simulation green, yellow and orange; no political or social significance of these names is implied. Agents are initialised to hold a high degree of affinity for their own issue, and near-zero affinity for any other issue.
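A minimal sketch of such an initialisation is given below. The field names and numeric ranges are assumptions made here for illustration and do not reflect Affinity’s actual configuration format; they simply encode the dispositions just described: high benevolence and security, near-zero universalism, and strong affinity for exactly one of the three issues.

```python
import random

ISSUES = ["green", "yellow", "orange"]  # proxies for kinship groups

def make_agent(agent_id):
    """Illustrative initial disposition for one agent in this example run."""
    own_issue = random.choice(ISSUES)
    return {
        "id": agent_id,
        "values": {
            "benevolence": random.uniform(0.85, 1.0),    # strong in-group care
            "security": random.uniform(0.85, 1.0),       # high need for security
            "universalism": random.uniform(0.0, 0.05),   # near-zero out-group care
            "self-direction": random.uniform(0.0, 1.0),  # left random, as in other runs
        },
        # High affinity for the agent's own issue, near-zero for the others.
        "issues": {issue: (random.uniform(0.9, 1.0) if issue == own_issue
                           else random.uniform(0.0, 0.05))
                   for issue in ISSUES},
    }

population = [make_agent(i) for i in range(40)]
```

The benevolence and universalism ranges mirror the in-group and out-group care propensities described in Sect. 9.1.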

As expected, the simulation initialised in that way converges to a population of three groups, where all members of a group hold the same issue. These groups are analogues of extended family groups. Some distance is maintained between groups, and caregiving is not offered across groups. Because of the random assignment of other elements of the disposition, there are a few singletons who never affiliate to any group. A typical singleton will have a low degree of need for security coupled with a high degree of self-direction. Because conflict actions are not implemented in Affinity, no outright harm is done between agents of different groups. Also, despite the simulations lasting over 2000 ticks and an inherent stochasticity in how quantities in the disposition are sampled, no cross-group friendships were made. An example of what might be expected if the initial conditions were relaxed to provide for cross-group friendships is illustrated by William Shakespeare’s tragedy Romeo and Juliet, where the title characters find romance despite belonging to the feuding Montague and Capulet families respectively.

9.3 Beyond the Scope

Several topics of interest are beyond the scope of this book, and yet provoke great interest among the public in discussions about artificial intelligence and robots. At least a minimal treatment of these topics cannot be avoided here.

9.3.1 Consciousness

For a start, this book has nothing to say about consciousness. Consciousness is a subject that eludes precise definition and scientific explanation, despite generations of the most talented neuroscientists and philosophers applying themselves to the problem. There must be at least a dozen credible and competing theories of consciousness, and significant progress has yet to be made. I suspect that one of the reasons why this is such an elusive concept is that everybody experiences consciousness, and nobody knows what it is like to experience an absence of consciousness. By definition, if we are unconscious, we do not have conscious experience, and there is nothing to be remembered and nothing to account for. States of consciousness below the usual awake state, such as dreaming, also count as conscious experience, even though they are experienced during semi-conscious states such as sleep. Because consciousness is a result of biological processes, there can be no consciousness after death, when all biological processes cease. This may be why human societies have held beliefs in life after death: Because we can’t imagine what it is like not to have consciousness for the rest of time, other imagined kinds of consciousness have filled the void and given meaning to our lack of knowledge.

Setting aside the problem of unconsciousness, in most cases when humans engage in an activity, they know they are doing it: The caregiver knows they are giving care. By contrast, agents in the simulation are simply executing their narratives. At this stage of development, agents are not able to give an account of their activity or formulate explanations of their purpose. Consciousness is a great way to give an account of one’s activity, though such accounts may not be the same as the phenomenon of consciousness. Nonetheless, it is likely that a richer model will need to address this role of consciousness, or at least a form of reflexiveness that is accessible to the model. One possible way forward is ‘computational self-awareness’ [7], the ability of a system to learn about and model its own resources, capabilities, goals, and interactions. Because computational self-awareness takes place within a society of agents, Esterle and Brown [5] stress the importance of networked self-awareness, which is an approach for self-aware computing systems to also become sensitive to the existence of others.

My favourite way of thinking about consciousness is that the phenomenon may be mediated by an as yet undiscovered principle of operation that defines a time-extended information-processing link between sensation, ideation, and memory. Novel principles of operation have been instrumental in the history of science. For example, some animals are able to fly, but the principle of operation determining flight is not found in feathers or flapping wings. Instead, there is a relationship between lift, thrust, drag and gravity that determines whether flight is possible. Flying animals and aircraft exploit this relationship. Similarly, the principle of electromagnetism, essential for modern technology, was not understood until James Clerk Maxwell (1831–1879) formulated a unifying relationship between electricity and magnetism. Could there be a principle that unifies sensation, ideation, and memory in a way that explains the phenomenon of consciousness?

9.3.2 Personality and Norms

While the book has explored personhood and relationality, it has not considered the obvious connections between personhood and personality. It is possible that the causal underpinnings of personality can be related partly to the basic values and attitudes as used by the Affinity model. And, while personality may be an effect rather than a cause of personhood, a richer computational model of personality will benefit from work that is being carried out in explicitly modelling human personality [4, 8]. In addition to his theory of Basic Values, Schwartz [9] also lists other concerns of interest such as beliefs, norms, and traits, which play an important part in subjectivity and expectation in social relationships. We have mentioned the use of social norms in narratives (Sect. 7.7), and there is scope for future development of narratives that are attentive to social norms and their violation. In this way, it would be possible to model guilt and shame, dispositions that result from the violation of social norms. It would then be possible to model the concepts around sacrifice, rivalry and vendetta [6], for which a preliminary and simplified model in narrative form has been previously proposed [2].

9.3.3 Generative AI

Finally, a recent application of Machine Learning that has attracted much popular media attention is generative AI, which uses Large Language Models (LLMs) to generate passages of text [3]. A typical LLM has billions of parameters that need to be trained on massive datasets of text. A useful source of training data is the World Wide Web, which contains billions of text documents including news media, social media stories and commentary.

When trained, the LLM can summarise, generate and predict new passages of text. LLMs can be applied to a broad range of language-processing tasks such as summarising content, rewriting content in a different style, and powering chatbots. LLMs can enable conversation with a user that seems more natural than previous generations of chatbots. It is important to say that LLMs bring AI no closer to ‘understanding’ text. LLMs have enabled software to handle text in ways that are more natural for people to interact with, but such software does not know that it is conversing with a person who has their own motivations, thoughts and feelings. The databases of text on which LLMs are trained are enormous but may not be representative. Some cultures, groups and subjects are oversampled; many others are neglected. Depending on how the training data is sampled, the prejudices, limitations and toxic aspects of ‘internet culture’ can be present in the trained model. The outputs of current chatbots are approximations resulting from a statistically likely summary of the text dataset, given the user’s prompts and the model. Such approximations have a tendency to reproduce plausible-sounding but nonsensical answers, misinformation and prejudice, to assert falsehoods as facts, and to synthesise fictitious content that purports to be a factual account.

Despite these shortcomings, or perhaps because of them, some humans have been induced into feelings of relationship with a chatbot. This says more about the propensity of humans to engage in relationship than it does about the relationality of a chatbot. Further progress in refining the software will surely reduce the scope for error, but a problem of principle remains: chatbots will be incomplete until they can operate with a model of how an appropriate conversation is conducted, what a conversation with a person means to both parties, and how people construct meaning from conversations. It is hoped that a better understanding of personhood and relationality, together with an understanding of why certain concerns are so significant for people that they transcend ordinary human activity and discourse, may go some way towards satisfying these requirements.

References

1. Banfield, E.C.: The Moral Basis of a Backward Society. Free Press (1958)
2. Clocksin, W.F.: Knowledge representation and myth. In: Cornwell, J. (ed.) Nature’s Imagination, pp. 190–199. Oxford University Press (1995)
3. Thoppilan, R., De Freitas, D., Hall, J., Shazeer, N., Kulshreshtha, A., et al.: LaMDA: Language models for dialog applications (2022). arXiv:2201.08239v3
4. DeYoung, C.G.: Cybernetic big five theory. J. Res. Pers. 56, 32–51 (2015)
5. Esterle, L., Brown, J.N.A.: I think therefore you are: Models for interaction in collectives of self-aware cyber-physical systems. ACM Trans. Cyber-Phys. Syst. 4(4), 1–25 (2020)
6. Girard, R.: Violence and the Sacred. Johns Hopkins University Press, Baltimore (1977)
7. Lewis, P.R., Platzner, M., Rinner, B., Tørresen, J., Yao, X. (eds.): Self-aware Computing Systems: An Engineering Approach. Springer (2016)
8. Quirin, M., Robinson, M.D., Rauthmann, J.F., Kuhl, J., Read, S.J., Tops, M., DeYoung, C.G.: The dynamics of personality approach (DPA): 20 tenets for uncovering the causal mechanisms of personality. Eur. J. Pers. 34, 947–968 (2020)
9. Schwartz, S.H.: An overview of the Schwartz theory of basic values. Online Read. Psychol. Cult. 2(1) (2012)

Index

A
absence, 52
accept, 34
accept, 39, 62–64
achievement, 60
action, 69
affect, 5
Affinity, 45
age, 52
agent, 25, 46, 49
Am-healthy, 71
android, 3, 6
android intelligence, 3, 6
Android Science, 8
anthropomorphism, 27
Artificial General Intelligence, 4
Artificial Intelligence, 4
Ask Care, 70, 71
aspect, 34, 62
attach, 66, 67, 82, 86
attitude, 62
attribute, 62

B
benevolence, 39, 50, 52, 59–61, 74, 82, 93

C
Care-target-far, 74, 75
Care-target-near, 74, 75
Carer, 49, 74
Check-follower, 76
Check-friend, 72
circumstances, 32
colour, 39, 49, 63, 88
conformity, 60
connection, 52
continuum, 27, 30
credible, 62
crowding, 66

D
degree, 19, 61
different, 59
disposition, 25, 50, 52

E
economy, 32, 50, 65
element, 65
elements of value, 20
else, 72
emotions, 5
empathy, 27
End-mission, 72, 74–77
energy, 65–67, 70, 82
environment, 46
ex-connection, 52

F
favourite-colour, 39, 49, 50, 63, 88
Find a Follower, 76
Find a Leader, 75, 76
Find-centre-compat-friends, 71
Find-centre-friends, 71
Find-near-compat, 71
Find-needy-friend, 74
Find-needy-other, 75
flag, 46, 49
flagpole, 46, 49
Follow, 75, 76
forgiving, 39, 52, 62, 82, 86
friendship, 32

G
Gain-follower, 76
Gain-friend, 72, 73, 80
Generative AI, 95
Give Friends Care, 74, 75
Give Others Care, 74, 75
Give-care, 74, 75
green, 51, 93

H
hedonism, 60, 66, 73

I
Identify-leader, 75
identity, 29
identity, 51
if, 70–76
implied personhood, 11
importance, 19, 61
importance, 39
Indentify-follower, 76, 77
Indentify-leader, 76
information-processing, 28
instantiation, 60
intensity, 19, 61
issue, 33, 93
issue, 50, 63, 75

K
Kepler’s conjecture, 66

L
Large Language Model, 95
loop, 70–76
Lower-carer-sign, 74, 75
Lower-need-sign, 71
loyal, 62, 76

M
Machine Learning, 5, 10
Maintain Friendships, 73, 86
Maintain-friends, 73
Make Friends, 72, 73, 76, 80
marginal, 59
Mediator, 50
Meet-proximal-compat, 72
Message-no, 72, 76
Message-yes, 72, 76
mimic diagram, 78
mission, 70, 72, 77
Motility, 71
Move-closer-point, 71
Move-to-target, 74, 75
myth, 10

N
narrative, 32, 45, 50, 69
Need Care, 49, 71, 74
Need-care, 71
networked self-awareness, 95
No-action, 72, 76
No-condition, 72–76

O
objective, 77
operation, 69
orange, 93

P
percept, 49
performance, 8, 28
performativity, 31
person, 6, 7
personhood, 3, 6, 26, 27, 36
personification, 36
physical symbol system hypothesis, 4
Poll-friend-replies, 72
Poll-friend-requests, 73
Poll-recruit-replies, 76
Poll-recruit-requests, 76
power, 60, 61, 80, 93
precept, 34
precept, 39, 62–64
Prolog, 9

R
Raise-carer-sign, 74, 75
Raise-need-sign, 71
recognisability, 31
Record-friend-refusal, 72, 73
Record-recruit-refusal, 76
Recruit, 77
relationality, 3, 30
Reply to Leader, 76
Reply To Request, 72, 73, 76
Reply to Request, 72
Reply-follower-message, 76
Reply-friend-message, 73
reward, 66, 67, 73, 76, 77, 82, 83, 86
robot, 3, 6

S
sceptical, 62, 77, 80
security, 39, 50, 60, 61, 73, 82, 86, 93
self-actualising, 11
self-direction, 39, 50, 60–62, 74, 93
Send-friend-message, 72
Send-recruit-message, 76
separated, 59
sign, 46, 49
significant concerns, 3, 15
similar, 59
social constructionist, 11
social object, 33, 62
Social Robotics, 8
spiritual intelligence, 21
spiritual relationship, 36
Start-care-mission, 74, 75
Start-friend-mission, 72
Start-recruit-mission, 76
state, 57, 60
status, 77
step, 69
stimulation, 60, 82, 86
stress, 66, 67, 70, 82, 86
swimming-enjoyable, 63

T
target, 77
termness, 52
then, 70–76
tick, 46
tradition, 60
transcendent, 11

U
universalism, 50, 60, 61, 74, 75, 82, 86, 93

V
value, 16
value memory, 50
value system, 32, 60
values, 60

W
wait, 70–76, 78
while, 70, 74, 75

Y
yellow, 51, 93