
Automation Challenges of Socio-technical Systems

Series Editor Jean-Paul Bourrières

Automation Challenges of Socio-technical Systems

Edited by

Frédéric Vanderhaegen Choubeila Maaoui Mohamed Sallak Denis Berdjag

First published 2019 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address: ISTE Ltd 27-37 St George’s Road London SW19 4EU UK

John Wiley & Sons, Inc. 111 River Street Hoboken, NJ 07030 USA

www.iste.co.uk

www.wiley.com

© ISTE Ltd 2019
The rights of Frédéric Vanderhaegen, Choubeila Maaoui, Mohamed Sallak and Denis Berdjag to be identified as the authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Control Number: 2019937198
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-78630-422-3

Contents

Introduction
Frédéric VANDERHAEGEN, Choubeila MAAOUI, Mohamed SALLAK and Denis BERDJAG

Part 1. Perceptual Capacities

Chapter 1. Synchronization of Stimuli with Heart Rate: a New Challenge to Control Attentional Dissonances
Frédéric VANDERHAEGEN, Marion WOLFF and Régis MOLLARD
1.1. Introduction
1.2. From human error to dissonance
1.3. Cognitive conflict, attention and attentional dissonance
1.4. Causes and evaluation of attentional dissonance
1.5. Exploratory study of attentional dissonances
1.6. Results of the exploratory study
1.7. Conclusion
1.8. References

Chapter 2. System-centered Specification of Physico–physiological Interactions of Sensory Perception
Jean-Marc DUPONT, Frédérique MAYER, Fabien BOUFFARON, Romain LIEBER and Gérard MOREL
2.1. Introduction
2.2. Situation-system-centered specification of a sensory perception interaction
2.2.1. Multidisciplinary knowledge elements in systems engineering
2.2.2. Interdisciplinary knowledge elements in systems engineering
2.2.3. Specification of a situation system of interest
2.3. Physiology-centered specification of a sensory perception interaction
2.3.1. Multidisciplinary knowledge elements of a physico–physiological interaction
2.3.2. Prescriptive specification of the targeted interaction of auditory perception
2.4. System-centered specification of an interaction of sensory perception
2.4.1. System-centered architecting specification of the targeted auditory interaction
2.4.2. Sensing-centered specification of the targeted auditory interaction
2.4.3. System-centered sensing specification of the targeted auditory interaction
2.5. Conclusion
2.6. References

Part 2. Cooperation and Sharing of Tasks

Chapter 3. A Framework for Analysis of Shared Authority in Complex Socio-technical Systems
Cédric BACH and Sonja BIEDE
3.1. Introduction
3.2. From the systematic approach to the systemic approach: a different approach of sharing authority and responsibility
3.3. A framework of analysis and design of authority and responsibility
3.3.1. Actions in a perspective of authority, responsibility and accountability
3.3.2. Levels of authority and responsibility
3.3.3. Patterns of actions in relation to authority and responsibility
3.3.4. Dynamic relations between the dimensions of the analysis framework
3.4. Management of wake turbulence in visual separation: a study of preliminary cases
3.4.1. At the nano level
3.4.2. At the micro level
3.4.3. At the meso level
3.4.4. At the macro level
3.5. Conclusion
3.6. References

Chapter 4. The Design of an Interface According to Principles of Transparency
Raïssa POKAM MEGUIA, Serge DEBERNARD, Christine CHAUVIN and Sabine LANGLOIS
4.1. Introduction
4.2. State of the art
4.2.1. Situational awareness
4.2.2. Transparency
4.3. Design of a transparent HCI for autonomous vehicles
4.3.1. Presentation of the approach
4.3.2. Definition of the principles of transparency
4.3.3. Cognitive work analysis
4.4. Experimental protocol
4.4.1. Interfaces
4.4.2. Hypotheses
4.4.3. Participants
4.4.4. Equipment
4.4.5. Driving scenarios
4.4.6. Measured variables
4.4.7. Statistical approach
4.5. Results and discussions
4.5.1. Situational awareness
4.5.2. Satisfaction of the participants
4.6. Conclusion
4.7. Acknowledgments
4.8. References

Part 3. System Reliability

Chapter 5. Exteroceptive Fault-tolerant Control for Autonomous and Safe Driving
Mohamed Riad BOUKHARI, Ahmed CHAIBET, Moussa BOUKHNIFER and Sébastien GLASER
5.1. Introduction
5.2. Formulation of the problem
5.3. Fault-tolerant control architecture
5.3.1. Vehicle dynamics modeling
5.4. Voting algorithms
5.4.1. Maximum likelihood voting (MLV)
5.4.2. Weighted averages (WA)
5.4.3. History-based weighted average (HBWA)
5.5. Simulation results
5.6. Conclusion
5.7. References

Chapter 6. A Graphical Model Based on Performance Shaping Factors for a Better Assessment of Human Reliability
Subeer RANGRA, Mohamed SALLAK, Walter SCHÖN and Frédéric VANDERHAEGEN
6.1. Introduction
6.2. PRELUDE methodology
6.2.1. Theoretical framework
6.2.2. The qualitative part
6.2.3. The quantitative part
6.2.4. Quantification and sensitivity analysis
6.3. Case study
6.3.1. Step 1, qualitative part: HFE and PSF identification
6.3.2. Step 2, quantitative part: expert elicitation, data combination and transformation
6.3.3. Step 3, quantification data and results
6.4. Conclusion
6.5. Acknowledgments
6.6. References

Part 4. System Modeling and Decision Support

Chapter 7. Fuzzy Decision Support Model for the Control and Regulation of Transport Systems
Saïd HAYAT and Saïd Moh AHMAED
7.1. Introduction
7.2. The problem of decision support systems in urban collective transport
7.3. Montbéliard's transport network
7.3.1. Connections
7.3.2. The regulation of an urban collective transport network
7.4. Fuzzy aid decision-making model for the regulation of public transport
7.4.1. Knowledge acquisition
7.4.2. Decision criteria for the regulation of public transport traffic
7.4.3. Criteria modeling
7.4.4. The fuzzification process
7.4.5. Generation of decisions
7.4.6. Defuzzification
7.4.7. Types of decisions
7.4.8. Suggestions of regulatory strategies
7.4.9. Impact and validation of regulatory strategies
7.4.10. Implementation of regulatory strategies
7.5. Conclusion
7.6. References

Chapter 8. The Impact of Human Stability on Human–Machine Systems: the Case of the Rail Transport
Denis BERDJAG and Frédéric VANDERHAEGEN
8.1. Introduction
8.2. Stability and associated notions
8.2.1. Resilience
8.2.2. Stability within the technological context
8.2.3. Mathematical definition of stability in the sense of Lyapunov
8.2.4. Lyapunov's theorem
8.3. Stability in the human context
8.3.1. Definition of human stability
8.3.2. Definition of the potential of action and reaction
8.4. Stabilizability
8.5. Stability within the context of HMS
8.6. Structure of the HMS in the railway context
8.6.1. General structure
8.6.2. The supervision module
8.6.3. The technological system model
8.6.4. The human operator model
8.7. Illustrative example
8.7.1. Experimental protocol
8.7.2. Experimental results
8.7.3. Remarks and discussion
8.8. Conclusion
8.9. References

Part 5. Innovative Design

Chapter 9. Development of an Intelligent Garment for Crisis Management: Fire Control Application
Guillaume TARTARE, Marie-Pierre PACAUX-LEMOINE, Ludovic KOEHL and Xianyi ZENG
9.1. Introduction
9.2. Design of an intelligent garment for firefighters
9.2.1. Wearable system architecture
9.2.2. Choice of electronic components
9.2.3. Textile design and sensor integration
9.3. Physiological signal processing
9.3.1. Extraction of respiratory waveforms
9.3.2. Automatic heart rate detection
9.3.3. Heart rate variability
9.3.4. Analysis of experimental results
9.4. Firefighter–robot cooperation, using intelligent clothing
9.4.1. Robots
9.4.2. Human supervisor interface
9.5. Conclusion
9.6. References

Chapter 10. Active Pedagogy for Innovation in Transport
Frédéric VANDERHAEGEN
10.1. Introduction
10.2. Analysis of a railway accident and system design
10.3. Analysis of use of a cruise control system
10.4. Simulation of a collision avoidance system use
10.5. Eco-driving assistance
10.6. Towards support for the innovative design of transport systems
10.7. Conclusion
10.8. References

Conclusion
Frédéric VANDERHAEGEN, Choubeila MAAOUI, Mohamed SALLAK and Denis BERDJAG

List of Authors

Index

Introduction

Sociotechnical systems have been studied under different names: human–machine systems, multi-agent systems, systems of systems, complex systems, cyber-physical and human systems. Research studies on such systems encounter similar problems in their design, analysis or evaluation. So, is it their level of complexity that justifies human presence, or is it the presence of humans that makes them complex? Two research themes are currently being pursued: one that studies human behaviors to gain a better understanding of their complexity and to provide them with better assistance, and another that focuses on automation in order to avoid this complexity. The chapters of this book bring together the two types of contributions and result from the work of various research groups:

– the Regional Federation for Research in Integrated Automation and Human–Machine Systems (GIS GRAISyHM – Groupe d'Intérêt Scientifique de Recherche en Automatisation Intégrée et Systèmes Homme-Machine), which encompasses all laboratories in automation, computer engineering and signal processing of the Hauts-de-France region;

– the Working Group in Automation of Human–Machine Systems (GT ASHM – Groupe de Travail Automatisation des Systèmes Hommes-Machines) within the Research Group on Modeling, Analysis and Control of Dynamic Systems (GDR MACS – Groupe de Recherche en Modélisation, Analyse et Conduite des Systèmes dynamiques) of the French National Centre for Scientific Research;

Introduction written by Frédéric VANDERHAEGEN, Choubeila MAAOUI, Mohamed SALLAK and Denis BERDJAG.


– the research communities from the conference on Ergonomics and Advanced Data Processing (ERGO-IA – Ergonomie et Informatique Avancée), which bring together competences in the fields of cognitive psychology and cognitive ergonomics, human–machine interactions and systems engineering.

The challenges described in this book are grouped into five parts: perceptual capacities, cooperation and task sharing, system reliability, decision modeling and supports, and innovative design. Each part includes two chapters that present key contributions responding to the challenges of automation in sociotechnical systems.

Part 1 addresses challenges relating to the perceptual capacities of a sociotechnical system. Chapter 1 proposes a new approach to detect perceptual blindness, based on the synchronization of events such as the occurrence of visual and auditory alarms with heartbeats. Chapter 2 presents a multimodal interdisciplinary approach to the engineering of multidisciplinary systems. It explains the advantage of multiphysical systems of interaction in facilitating the sensory perception of human operators in exploratory contexts of industrial maintenance or supervision.

Part 2 focuses on the use of automated systems and the role of human operators interacting with these systems as part of human–machine cooperation and task sharing. Chapter 3 proposes a new framework to analyze the trading of authority between humans and machines. This is divided into different levels such as delegation, distribution, sharing or contractualization, and is associated with the definition of degrees of automation of new control or supervision systems in the field of air traffic. Chapter 4 develops the concept of transparency in the design of autonomous vehicles. Applying this concept allows for an easier understanding of the operation of autonomous systems and an increased awareness of the current situation among drivers.

Part 3 is based on the principles of technical or human reliability. Chapter 5 is a feasibility assessment of the design of an autonomous vehicle, taking into account the probabilities of possible failures of sensors that give redundant information about the environment. Chapter 6 details the new PRELUDE method, based on a graphical model, for the calculation of human reliability from various factors and from the acquisition of expert knowledge.

Part 4 includes two contributions to the modeling of sociotechnical systems and decision support. Chapter 7 proposes a fuzzy model as an aid to decision construction in an uncertain environment, applied to the command, control or regulation of transport systems. In Chapter 8, the concept of human stability is associated with the resilience of sociotechnical systems. Its modeling, and a demonstration of its advantage, are illustrated in the context of a simulation of railway operations.

In Part 5, examples of innovative design are proposed. An item of smart clothing is developed in Chapter 9; its use is illustrated in a context of crisis management with the participation of rescue teams. Chapter 10 describes active pedagogy modules dedicated to innovation in transport.

PART 1

Perceptual Capacities


1 Synchronization of Stimuli with Heart Rate: a New Challenge to Control Attentional Dissonances

1.1. Introduction

“It is only with the heart that one can see rightly; what is essential is invisible to the eye” [DES 43]. This quote is the perfect introduction to this chapter, which presents an approach to evaluating the impact of the synchronization of demands with heartbeats in terms of the occurrence of attentional dissonances.

An accident or an incident is often due to a combination of human, technical, environmental or organizational factors. Two levels of gaps are generally envisaged [VAN 03]: a behavioral gap, for example when the actual behavior of a human operator is not as expected, and a situational gap, for example when the true consequences of behavior are not as expected. To simplify these risk analyses, acceptability thresholds are defined in order to determine whether a given gap must be processed. Yet this approach neglects gaps of low amplitude or frequency that could sometimes explain the occurrence of some undesirable events. These margins can, in the long term, be associated with dangerous situations. The concept of dissonance, generated voluntarily or involuntarily, with or without knowledge of its real or possible consequences, responds to this problem. It allows consideration of imperceptible signals that can generate risks such as these.

Chapter written by Frédéric VANDERHAEGEN, Marion WOLFF and Régis MOLLARD.

The occurrence of a dissonance can be associated with physiological measurements of pain, stress, boredom or fasting, for example [JOU 87]. It is then evaluated from electrodermal activity, electroencephalographic activity, acids contained in the blood and related to inanition, or blood pressure. Even if dissonances in terms of personal conviction do not appear to affect the activity of the heart [JOU 87], the impact of heartbeats on certain emotional factors such as stress, joy or surprise has often been a subject of research. On the other hand, only rarely has research related these heart rhythms to attentional factors such as wakefulness or perception. This involves treating an imperceptible signal that could have an impact on human performance.

This chapter is based on the following assumption: the genesis of attentional dissonances can be due to the synchronization of stimuli with heart rate. The first two sections establish a link between different concepts, first between human error and dissonance, then between cognitive conflict, attention and attentional dissonance. Following a description of the causes and evaluations of dissonances, the last two sections describe an exploratory study of attentional dissonances and its results, based on the analysis of the heart rate synchronized with the activation of visual and auditory stimuli. The results are promising and make it possible to envisage the design of automated systems capable of detecting synchronization of this kind and of controlling any related attentional dissonances.

1.2. From human error to dissonance

Human reliability is defined as the capability of a human operator to correctly carry out their prescribed tasks and additional tasks in compliance with predefined conditions, over a time interval or at a given moment in time, for various evaluation criteria such as safety, production or quality of activity, workload or satisfaction [VAN 17a]. Human error is its complement: the human capability to incorrectly carry out planned or additional tasks under the same conditions. More than 70% of accidents are due to human error and 100% of them are directly or indirectly related to human factors [AMA 13]. Retrospective analysis of accidents leads to the identification of the causes of their occurrence. It allows a list to be drawn up of all the factors that resulted in the accident, and can be complemented by prospective assessments in order to anticipate all possible accident scenarios. The factors that can affect human performance or the safety of a system are called performance shaping factors (PSFs) in the field of human reliability analysis. PSFs take into account the characteristics of the human operator, as well as the context and environment that affect their performance positively or negatively. In research work into the occurrence mechanisms of human errors, the PSFs most often processed are attention, vigilance and workload [VAN 09, RAC 14, RAN 17].

These prospective or retrospective analyses allow new safety barriers to be set up, or those already in place to be adapted. However, designing a system to support the control of safety in order to reduce a risk is not sufficient; it is also necessary to assess the possible evolutions of its use and the associated risks. For example, the use of an automatic speed control device can generate behavioral deviations such as the creation of new functions of the technical system, the reduction of the distance between vehicles, or an increase in response time or hypovigilance [DUF 14, VAN 14]. Naive reasoning could lead to the development of automated vigilance control systems to reduce these risks arising from the use of other automated systems. The accident of August 30, 2004 in Rouen [BEA 05] illustrates the limitations of this approach. It was caused by the hypovigilance of a driver who went through a red signal and hit the back of a train carriage that was stopped in front of him. The accident report states: “The automatic driver surveillance system (i.e. the VACMA) of the impacting carriage, although in an operational state and apparently still activated by the driver, was not effective in preventing the accident”. Thus, a driver can activate these automated systems out of habit, without being vigilant or attentive. It is important to note that the surveillance system can only detect a major incapacity (loss of consciousness) and not a slight one such as reduced vigilance [CAB 93, CAB 95].

Many studies have identified situations that are likely to cause human errors or have defined methods to evaluate them quantitatively or qualitatively [VAN 04, VAN 10, HIC 13, SED 13, PAN 16, QIU 17, RAN 17]. However, most of these approaches are limited to the assessment of planned or prescribed tasks, and do not take into account additional tasks arising from the dynamic evolution of human knowledge over time and from the creativity of human operators, who can modify the initial functions of a system or invent new ones. In addition, something considered erroneous by some actors may later become acceptable to those same actors, or be judged normal by others: different analysis reference frames exist and must be taken into consideration in the analysis of human errors. Lastly, human error can be seen as a consequence of a malfunctioning system rather than as a cause of an undesirable event, which calls into question the value of classic accident analyses [REA 00].

The concept of dissonance responds to these constraints that undermine the results of human error analysis [VAN 17a]. Based on Festinger's work on cognitive dissonance [FES 57] and Kervern's work on collective or organizational dissonance [KER 94], a dissonance is a conflict between cognitions, in other words between knowledge or elements of knowledge at an individual, collective or organizational level [VAN 14, VAN 16]. It is an immediate or future disagreement or incoherence in data or information, in their perception, processing or interpretation, and in the solutions applied to process them, concerning technical, human or sociotechnical systems. A dissonance can be evaluated in terms of instantaneous or long-lasting gaps between what is prescribed, felt, perceived, expected and real. The reference basis used to identify a dissonance may be inexistent, erroneous, unique or multiple. Several conflicting criteria can be processed: for example, conflicts of use, independence, intention, interest, perception, emotion, allocation or attention (see Table 1.1 [VAN 17b, VAN 18]).

Dissonance                      Criteria                              Reference basis
Unprecedented situation         Absence or memory loss of knowledge   None
Serendipity                     Conflict of objectives                Erroneous
Lack of independence            Conflict of independence              Unique
Tunnel effect or tunnelization  Conflict of perception                Unique
Emotional dissonance            Conflict of emotions                  Unique
Unexpected automation           Conflict of intent                    Unique
Organizational change           Conflict of information               Unique
Difficult decision              Conflict between alternatives         Unique
Erroneous cooperation           Conflict of attribution               Unique or multiple
Overcoming barriers             Conflict between points of view       Unique or multiple
Affordance                      Conflict of use                       Unique or multiple
Competition                     Conflict of interest                  Multiple
Social dissonance               Conflict between groups               Multiple
Anamorphosis                    Conflict of perception                Multiple

Table 1.1. Examples of dissonances

A dissonance can be intuitive or can appear with or without prior explanation or justification. It can go undetected, or its detection can be correct or erroneous. Processing a dissonance can then generate different effects: generation of a new dissonance, increase in workload, increase in the level of discomfort or awkwardness, increase in risk-taking tendencies, increase in personal satisfaction, etc. This is why the acceptability of a dissonance is inversely proportional to the difficulty of processing it. Indeed, if, for example, significant learning effort is required, the dissonance may not be acceptable, and impact reduction strategies relating to the reinforcement of knowledge can be implemented: rejection of the dissonance, promotion of current knowledge, denial of disturbing knowledge or change of point of view. These knowledge reinforcement methods can be implemented in learning models [VAN 14, ENJ 17].

1.3. Cognitive conflict, attention and attentional dissonance

A cognitive conflict is a temporary incoherence during which at least one limited resource is subject to multiple solicitations or at least two pieces of information contradict each other [DEH 12]. Multiple solicitation can generate an action dissonance, and data incoherences can cause informational dissonance due to changes in organization, for example. During simultaneous execution of tasks, attention resources can be saturated due to their limited capacity, because they are managed by short-term memory [KAH 73]. A human operator can then choose to concentrate on one of the tasks, leaving the secondary task to one side: this is the paradigm of selective attention [CHE 53].

Attention is the focusing of mental activity on a subset of the perceptive field, with selectivity about the information taken in. Its role is therefore to control and orient activity. It necessarily implies a certain degree of vigilance, bearing witness to the state of wakefulness of the human operator, where vigilance is highly sensitive to ultradian and circadian fluctuations. Two situations can be distinguished: the evaluation of this state at a given time and its evolution over time [MAC 61]. There are two types of attention: selective and sustained [BAL 96, OKE 06]. Selective attention allows cognitive resources to be focused on priority tasks, whereas sustained attention consists of maintaining focus continuously while taking into account modifications that may arise in the tasks to be carried out. These distinctions sometimes lead to very common confusions in the definition of the concepts, where attention is then assimilated to an instantaneous state of wakefulness and vigilance to a sustained state of attention. As for workload, it can have an impact on the state of vigilance and consequently on attention. It is defined as the difficulty felt by human operators in carrying out tasks as a function of their cognitive, physical and physiological state. Automatic cognitive processes, implemented during usual situations, are differentiated from controlled cognitive processes, activated in more complex situations [SCH 77, SHI 85]. The former are not very costly in terms of attention resources and can be executed concomitantly with a controlled activity. The latter are very costly and cannot be implemented simultaneously with another activity that calls on the same process, due to limited attention capacities.

Blindness to change in the analysis of complex visual scenes illustrates the incapacity of a subject to detect the changes from one visual scene to another, even when these changes are significant [REN 97]. Attention is required to detect them correctly, in particular when visual signals disturb the subject's attention [SIM 05] or during an interruption, even a rapid one, of visual contact [SIM 98]. Similarly, a subject can be incapable of perceiving an unexpected object while their attention is focused on another object [MAC 98]. This phenomenon of inattentional blindness has been made famous by the experiment using a video in which basketball players exchange passes and an incongruous character slips in among the players during the game [SIM 99]. If the subjects are asked to count the number of passes made, their attention is focused on that, and 46% of them do not perceive the presence of the unexpected person. According to those authors, the probability of perceiving an unexpected object can be related to its similarity with the objects on which attention is focused, and also to the complexity of control of the task to be carried out.

The tunnel effect can be compared to this inattentional blindness. It can occur when the cognitive capacities of operators are altered, for example during a situation requiring a certain mental effort [WIC 09]. At that time, important information such as visual and/or auditory alarms can be neglected by the operator [DEH 10]. This behavior can lead an operator to focus excessively on an irrelevant set of information to the detriment of critical information such as alarms [DEH 16]. However, the tunnel effect is eliminated by alarms that are coherent with the context, whereas this is not the case when alarms are out of context. The tunnel effect, or attentional tunnelization, therefore appears to depend on the emotions and the mental workload of the operator. It can also occur during particularly stressful situations, and can induce responses unsuited to the various situations, depending on the level of stress felt by operators [PIN 11].

An attentional dissonance is a cognitive conflict of attention. It involves the dissonances shown in Table 1.1 whose causes or consequences are related to an attention failure. These dissonances can be explained by differences between felt attention and effective attention, or between the certainty that one will do a good job and the reproach of inattention. The tunnel effect is an example of attentional dissonance because it may be due to an attention failure and generate an overload if the associated problem is detected and processed in time, or awkwardness if it is too late. An organizational change is another example of attentional dissonance. Indeed, structural or operational modification to communication media can disturb the search for pertinent information by dispersing attention. The gaps between the attention levels that are required, desired, perceived and real, from one reference frame or various reference frames, lead to the identification of dissonances of this kind. They must help qualify the type of attention conflict:

– attention overload: since the demands of the task are very significant, attention resources are not sufficient to process them;
– attention focusing: attention resources are monopolized on one given task or group of tasks, with no provision made for possible new disturbances;
– attentional blindness: everything appears to be normal but no human reaction occurs when alarms are activated;
– attention diversion: human operators are concentrated on their work but a sudden event that is assimilated to an alert diverts their attention;
– attentional disturbance: attention is disturbed by regular events that are not essential. For example, irrelevant information can disturb the perception of important messages [LEW 70, POS 98];
– attention dispersion: attention is distributed across several tasks simultaneously, which increases the risk of an error in the perception of important information.

1.4. Causes and evaluation of attentional dissonance

Attentional dissonances can be evaluated by applying different methods of evaluation of attention, vigilance, workload, human error or emotion. These approaches provide qualitative, quantitative or subjective indicators of the actual cognitive state. For example, indicators associated with excessive or low solicitation of cognitive resources, or with the execution frequency of identical actions, can affect the vigilance or attention of a human operator and increase the number of occurrences of human errors [VAN 99, DON 15, MIN 17]. There are various objective indicators for studying mental load, in particular physiological indicators (heart and respiratory rates, endocrine system) and performance indicators (double task paradigm), as well as subjective indicators, such as the National Aeronautics and Space Administration Task Load Index, more commonly known as the NASA-TLX. The NASA-TLX [HAR 88] evaluates subjective workload through a multidimensional analysis. It is based on six parameters: mental demand, physical demand, time pressure, performance, effort and frustration. When it is difficult to quantify factors, the subjective visual analog scale (VAS) method can be used [CRI 01, TOR 01]. It proposes subjective scales with antagonistic terms at their extremities. The estimates obtained from methods such as NASA-TLX or VAS allow an overall load level to be calculated, or the parameters associated with this load to be studied separately. Since there appears to be a link between workload, emotion and the tunnel effect, emotion self-evaluation methods can be used, including questionnaires for understanding the attitudes of each operator [MAN 07]. The standardized Self-Assessment Manikin (SAM) scale is an example [BRA 94]. It covers three dimensions: valence, emotional arousal and dominance, represented by three series of lines of nine manikins. The first series corresponds to the valence dimension. If the subject is in an extreme positive state, they must mark a cross on the manikin that is furthest to the right; otherwise they must select the one furthest to the left. However, the subject can choose to select intermediary manikins. They must apply the same principles for the dimensions of emotional arousal (very awake/excited for the right-hand manikin; very calm for the left-hand manikin) and dominance of the situation in progress (highly submissive for the left-hand manikin; highly dominant for the right-hand manikin). This scale has been used in a large number of studies and constitutes a simple, rapid and valid means of recording feeling.

Research work in neuropsychology that aims to estimate cognitive processes generally requires heavy and sensitive apparatus, physically connected to the body of an individual. Research in engineering, for its part, has for too long been restricted to classic indicators such as the percentage of eyelid closure or the diameter of pupils, without looking into the validation of unusual hypotheses based on other physical or physiological data obtained from evaluations of attention, vigilance or workload. For example, for a long time, the increase in pupil diameter has been correlated with the increase in the level of demand of the tasks to be carried out. Thus, the higher the cognitive load, the more the pupils tend to dilate [BEA 82]. Similarly, the difficulty of a problem to be treated causes an increase in pupil diameter [LEM 14]. Yet it has recently been demonstrated that although this hypothesis is verified for cognitive demands, an increase in physical demands reduces this diameter [FLE 17]. In addition, this hypothesis is faced with various problems, such as variation in ambient light, whether medicines or drugs have been taken, or the occurrence of strong emotions.

Heart activity is often correlated with a level of stress or workload [CAB 03, PIZ 12, BUC 16]. Yet, following stress, palpitations are felt strongly by a subject, whereas in a normal situation, the subject does not hear them. A recent study looked closely at the fact that the human brain does not perceive the body's heartbeats [SAL 16]. It demonstrated that when an image flashes in a way that is synchronized with the heart rhythm, the activity in the insular cortex is greatly reduced, to the point of causing difficulty or an incapacity of the subjects to perceive the flashing shapes. Thus, it is possible that there is a link between attention and these physiological reactions. Estimation of the attention state of human operators and detection of the tunnel effect can be carried out by combining the position of the operators' gaze and their heart rate as an indicator of stress and workload [PIZ 11]. The use of an eye-tracking device has shown that if the operator focuses on a target for a long period of time, the probability that they perceive other information diminishes. The attention focus of the operator is then associated with a low rate of ocular saccades and a high rate of ocular fixations.

With respect to external solicitations at work, noise can often disturb or reduce human attention and increase the level of stress. However, listening to music [CHT 15] or increasing the speed of the ticking of a clock [YAM 13] can increase individual physical capacities. Other studies do not find significant differences between the impact of noise or music on these capacities [DAL 07]. Lastly, the silence generated by relaxation techniques can increase the level of attention, in particular in convalescence situations [PRI 17]. The effects of digestion can also affect vigilance and consequently human attention: certain studies have demonstrated an improvement of these factors during diets or fasts [FON 13]. Verbal presentation of a problem or a situation can attract the attention of certain people while producing no effect for others. These are stimuli associated with culture, experience or personal learning. For example, when a person is called by their name, they turn around, but they do not turn around if it is someone else's name. Being sensitive or indifferent to a given formulation or expression can be exploited to manage attentional dissonances. Methods of analysis of verbalizations can help to identify behaviors and create a link between the perceived state of human operators and the associated PSFs [CAI 16].

1.5. Exploratory study of attentional dissonances

Taking inspiration from the work of Salomon et al. [SAL 16], the aim of this exploratory study was to identify whether attentional dissonance can occur for subjects whose mental workload is high, when the simultaneous appearance of the visual and auditory alarms to be detected is synchronous with the heart rhythm. Each alarm was defined by a simultaneous appearance, always at the same location on the display screen. It consisted of two flashing squares, each measuring 3 cm × 3 cm, amber and red in color. The alarms were systematically accompanied by a specific sound, identical to that used in aeronautics to indicate an abnormality. They were presented in a random manner and distributed over four levels of difficulty, described in the following paragraphs, and the subjects were warned that they could occur.

In order to analyze and understand this tunnel effect phenomenon, objective data were collected, such as the heart rate (Hr), using a specific device attached to the subjects' wrist (the Mio™ watch). This device was also synchronized with the experimental material so as to recreate conditions similar to those described by Salomon et al. [SAL 16], where the alarms are synchronized with the subject's heart rhythm.
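The chapter does not state the exact scheduling rule used to desynchronize the alarms from the heartbeat, so the minimal sketch below only illustrates the logic of the two conditions: synchronous alarms fire on a detected beat, while asynchronous alarms fire between beats. The beat train, the midpoint offset and all parameter values are assumptions made for illustration, not the study's implementation.

```python
import random

def schedule_alarms(beat_times, n_alarms, synchronous, seed=0):
    """Choose alarm onset times relative to detected heartbeats.

    Synchronous condition: each alarm fires exactly on a beat.
    Asynchronous condition: each alarm fires midway between two
    consecutive beats (one possible desynchronization rule; the
    chapter does not specify the offset actually used).
    """
    rng = random.Random(seed)
    picks = rng.sample(range(len(beat_times) - 1), n_alarms)
    if synchronous:
        onsets = [beat_times[i] for i in picks]
    else:
        onsets = [(beat_times[i] + beat_times[i + 1]) / 2 for i in picks]
    return sorted(onsets)

# Hypothetical beat train read from the wrist device: about 75 bpm
# with a little beat-to-beat variability over a six-minute level.
rng, t, beats = random.Random(42), 0.0, []
for _ in range(450):
    t += 60.0 / 75 + rng.uniform(-0.05, 0.05)
    beats.append(t)

print(schedule_alarms(beats, n_alarms=8, synchronous=True)[:3])
print(schedule_alarms(beats, n_alarms=8, synchronous=False)[:3])
```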

Errors and omissions of tasks to be carried out were also counted for each level. An error was counted as a false alarm when the subject indicated an alarm although there had been none. As for the subjective data, their goal was to give information about the mental load, and about the performances and emotions felt by the participant. They were obtained by means of the NASA-TLX, SAM and VAS methods. The simplified and standardized French version of the NASA-TLX method was used [CEG 09].

Twenty-seven subjects, mostly recruited from within ESTIA (M = 28 years; SD = 8), participated in this experiment. They were randomly divided into two groups depending on whether the appearance of the 30 visual and auditory alarms (couples of flashing squares) that they had to detect was configured as synchronous (n = 15) or asynchronous (n = 12) with their heart rhythm.

After filling in a questionnaire intended to collect information about their various characteristics (age, visual problems, attention capacities, tiredness, etc.) and having the experiment explained to them, the participants were equipped with the Mio™ watch. The watch read their heart rhythm, with which the appearance of the alarms may or may not have been synchronized. The watch also allowed the heart rate of subjects at rest to be known in advance. Three short video tests were then presented to them, inspired by the model from Simons and Chabris [SIM 99], in order to carry out an initial verification in relation to the concept of inattentional blindness.

The subject was then placed in front of a touchscreen for the experiment, which took place over four levels of increasing difficulty. The first level, with a duration of three minutes, was considered to be an adaptation stage, during which only six alarms were presented. Over the course of the three following levels, with a duration of six minutes each, eight alarms were presented each time. The subject's main task was an attention test that consisted of monitoring digital events (cursor movements, control of indicator lights). The test used was inspired by the Multi-Attribute Task Battery (MATB) [COM 92], frequently used in aeronautics research to measure the mental load of operators. For level 1, the subject had to monitor four cursors and two indicator lights; for the three following levels, eight cursors and four indicators.

The cursors moved vertically. When one of the cursors came to the end of its track, either upper or lower, the subject had to press the corresponding function button of the touchscreen (F1, F2, F3, F4) to bring the process back to a stable state. One of the indicators was initially lit and the other was turned off. When one of the buttons changed state (in other words, the unlit indicator turned on or the lit indicator turned off), the subject had to press the corresponding function button (F5 or F6) to bring the process back to its initial state. If the subject required more than five seconds to react when the cursor reached the end of its track, the process returned to a normal state and they were warned of this by an auditory alarm. The order of appearance of the events was programmed, and was reproduced 11 times over the three-minute period for level 1 and 34 times for each six-minute period for the three following levels, according to a predefined procedure.

In addition, for the last two levels, the load increased even more with the appearance of a secondary task: the resolution of a tangram (a Chinese puzzle with seven pieces) which broke apart more and more quickly between levels 3 and 4 (Figure 1.1). The subject had to reconstitute it little by little by occasionally pressing the touchscreen while continuing to monitor the cursors and indicator buttons, and also anticipating the appearance of alarms. If the participant did not click on the pieces of the tangram when they began to separate, or if they did not detect that the pieces had become totally separated, the experiment was stopped.
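As a concrete reading of the monitoring rule just described, here is a minimal sketch that classifies each cursor or indicator event as handled or auto-reset; the event-log layout (onset time, response time or None) is an assumption made for illustration.

```python
def classify_monitoring_events(events, timeout_s=5.0):
    """Apply the five-second reset rule of the monitoring task:
    an event answered within `timeout_s` counts as handled;
    otherwise the process resets itself, the subject is warned by
    an auditory alarm, and the event is counted as a miss.

    `events` is a list of (onset_time_s, response_time_s_or_None);
    this log format is an assumption made for the sketch.
    """
    handled = missed = 0
    for onset, response in events:
        if response is not None and response - onset <= timeout_s:
            handled += 1
        else:
            missed += 1
    return handled, missed

# Hypothetical log: the second event is answered too late,
# the third not at all.
print(classify_monitoring_events([(1.0, 3.2), (9.5, 16.0), (20.0, None)]))
# -> (1, 2)
```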

Figure 1.1. Screen display of levels 3 and 4 and appearance of the visual and auditory alarms (two red and amber squares). For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip

As soon as the subject saw the flashing shape accompanied by an auditory signal, they were instructed to press a pushbutton located within reach. As soon as the button was pressed, the auditory signal stopped and the squares disappeared. If it was not pressed within 10 seconds, the subject was considered to have omitted the alarm.

Taking into account the fact that between each level, the subject was distracted by requests for evaluation of their actions, it was considered that they could not memorize the various events that had taken place during the test. A possible learning effect was therefore neutralized, which was verified during the experiments.

After performing each of the four levels of the experiment, the subject had to evaluate their mental workload using the NASA-TLX, and their level of performance, their level of confidence in the preceding evaluation, the difficulty of the task and the effort made using the VAS method. These evaluations, graded on a scale from 0 to 100, were presented by means of a touchscreen. The three scales of the SAM method (represented by various pictograms to tick), relating to valence, arousal and dominance, were also presented to the subject, but only at the start and the end of the experiment (end of the first level and of the last level).

1.6. Results of the exploratory study

For the two factors of the experiment, namely the condition (synchronous or asynchronous) and the level (progressive increase in difficulty or the four-point mental workload), the following dependent variables (DV) were analyzed:

– percentage of omissions and errors;
– heart rate (Hr);
– NASA-TLX scale;
– analog scales for subjective evaluations (performance, confidence, difficulty, effort, SAM valence, SAM arousal, SAM dominance).

Where the data allowed it (homogeneity of variances, compliance with normality conditions), ANOVA inferential analysis was carried out using the various selected parameters [COR 03]. Thus, an ANOVA F-test allowed the overall effect of each of the two factors to be analyzed. For two-by-two comparisons of the experimental content (concerning the levels), a post hoc test (Scheffé test) was used when the overall effect was significant. Lastly, when the overall effect of each factor was significant, the effect of interaction was tested.

With the exception of the heart rate, the parameters selected to characterize the behavior of the subject, as much from the point of view of performance as from that of their feeling, are sensitive to the synchronization of the alarms with the heart rhythm of the subject and to the effect of the complexity of the task. The same goes for the subjective evaluations given by the NASA-TLX, VAS and SAM scales. Figures 1.2–1.8, commented on hereafter, give average values of the results with a confidence interval of 95%.

The descriptive statistics of these data indicate that the subjects in the asynchronous condition make an average of 13.30% omissions/errors (SD = 8.53), whereas the subjects in the synchronous condition make an average of 23.11% (SD = 9.14), and the overall effect is significant: F(1,25) = 8.15; p = .008 (Figure 1.2). The same is true for the levels, where the overall effect is significant: F(3,75) = 7.23; p = .0002 (Figure 1.3).
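To make the reported comparison concrete, the sketch below reproduces the shape of this analysis on simulated data drawn from the published group means and standard deviations; it does not use the study's raw data, and the Scheffé post hoc comparisons on the level factor are omitted because they require the per-level scores.

```python
import numpy as np
from scipy import stats

# Simulated per-subject omission/error percentages, drawn from the
# reported group statistics: synchronous M = 23.11, SD = 9.14 (n = 15);
# asynchronous M = 13.30, SD = 8.53 (n = 12). Not the study's raw data.
rng = np.random.default_rng(0)
sync_errors = rng.normal(23.11, 9.14, size=15)
async_errors = rng.normal(13.30, 8.53, size=12)

# Overall effect of the condition factor: one-way ANOVA F-test,
# giving F(1,25) with these group sizes (equivalent to an
# independent t-test when there are only two groups).
f_stat, p_val = stats.f_oneway(sync_errors, async_errors)
print(f"F(1,25) = {f_stat:.2f}, p = {p_val:.3f}")

def ci95(x):
    """95% confidence interval of a group mean, as plotted in the figures."""
    h = stats.sem(x) * stats.t.ppf(0.975, len(x) - 1)
    return float(np.mean(x) - h), float(np.mean(x) + h)

print("synchronous mean CI:", ci95(sync_errors))
```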

Figure 1.2. Effect of the synchronous/asynchronous condition on errors

Levels N3 and N4 of the experiment, which induce the greatest mental load, encourage subjects to commit the greatest number of errors or omissions. But even though it is true that this number of errors/omissions increases progressively as a function of the level of complexity of the task, the effect is only significant between level 1 and the other levels (Scheffé post hoc tests), and the subjects adopt behavior that is quite homogeneous (equivalent dispersions for N2, N3 and N4). For all levels of mental demand, the subjects of the synchronous condition make more mistakes than the subjects of the asynchronous condition, and the difference is
most marked for levels N2 and N3. For the most difficult level, N4, the subjects in the synchronous condition increase their error rate considerably, whereas those in the asynchronous condition reduce theirs (Figure 1.4).

Figure 1.3. Effect of the level of mental demand (N1–N4) on errors

Figure 1.4. Relationship between condition and the level of mental demand

The heart rate (Hr) is expressed in beats per minute (bpm). Although the Hr is slightly higher for the synchronous condition (M = 84.77, SD = 18.83) than for the asynchronous condition (M = 82.93, SD = 10.46), the difference is not significant; a larger dispersion occurs for the synchronous condition. Nevertheless, the effect of task difficulty is significant, and a progressive variation exists related to the level of demand, regardless of the condition to which the subject was assigned (F(3,75) = 7.47; p = .0002). This indicates that the experiment has been validated from the point of view of the increase in mental workload (Figure 1.5).

Figure 1.5. Effect of the level of mental demand on the Hr

The NASA-TLX is made up of six parameters to be evaluated, each graded on a scale from 0 to 100: mental demand, physical demand, temporal demand, effort, performance and frustration. For the results presented hereafter, the Raw Task Load Index (RTLX) [BYE 89] was calculated by taking the average of the scores obtained for the six dimensions. There is a difference of 2.50 evaluation points in overall mental load between the group of subjects in the asynchronous condition (M = 51.10, SD = 3.60) and the group in the synchronous condition (M = 48.56, SD = 3.19). On the other hand, for all the subjects, the difference between the various levels is significant (F(3,78) = 54.58; p < .0001), which leads to the same conclusion as the analysis of the Hr (Figure 1.6).
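Computationally, the RTLX is simply an unweighted mean of the six subscale scores, dropping the pairwise-weighting step of the full NASA-TLX [HAR 88]. A minimal sketch (the scores below are invented for illustration):

```python
# Raw TLX (RTLX) [BYE 89]: unweighted mean of the six NASA-TLX subscales,
# each rated from 0 to 100. The scores below are invented for illustration.
tlx = {
    "mental demand": 72, "physical demand": 18, "temporal demand": 65,
    "effort": 58, "performance": 40, "frustration": 35,
}
rtlx = sum(tlx.values()) / len(tlx)
print(f"RTLX = {rtlx:.1f}")  # overall workload on the same 0-100 scale
```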

Figure 1.6. Effect of the level of mental demand for the NASA-TLX (RTLX) scale

The VAS covers four dimensions, each given a mark out of 100: evaluation of performance (VASperf), confidence (VASconf) in the assessment given for performance, evaluation of the difficulty (VASdiff) and of the effort made to carry out the task (VASeff). The inferential statistics indicate that the effect of the condition is not significant for VASdiff, VASeff and VASconf. On the other hand, it is significant for VASperf at levels 2, 3 and 4, where the synchronous subjects rate their performance lower than the asynchronous subjects do (Figures 1.7a and 1.7b). The evolution of performance evaluation across levels is very different between the two conditions. In the synchronous condition, performance is rated highly at level 1; the rating diminishes at level 2 and again at level 3, whereas at level 4 it increases. In the asynchronous condition, the evaluation remains relatively homogeneous from one level to the next. Thus, the subjects do not appear to be aware of the real evolution of their performance in terms of the occurrence of omission errors.

Figure 1.7a. Evaluation of the dimensions (VAS out of 100) as a function of the condition and for each level of mental demand. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip

Figure 1.7b. Evaluation of the dimensions (VAS out of 100) as a function of the condition and for each level of mental demand (follows previous). For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip

Figure 1.8a. Evaluation of valence, arousal and dominance (SAM out of 9) as a function of the first and the fourth levels of mental demand

Figure 1.8b. Evaluation of valence, arousal and dominance (SAM out of 9) as a function of the first and the fourth levels of mental demand (follows previous)

The SAM scale includes three evaluations, each given a mark from 1 to 9, regarding the “pleasure” that the subject did or did not feel when doing the task (SAM Valence), their state of wakefulness (SAM Arousal) and their feeling of dominance, or lack of it, with respect to the requested task (SAM Domin). There is no significant effect of the condition, whichever evaluation is considered. However, from a descriptive point of view, the evaluation of valence drops slightly between the beginning and the end of the experiment for the synchronous subjects (from 6.50 to 6.20 points), whereas it rises for the asynchronous subjects (from 6.50 to 7.00 points). Concerning the effect of the level of demand, there is no significant effect on valence, but there is an increase in the evaluation of arousal and of dominance between the beginning and the end of the experiment, whatever the condition, and the effects are significant: F(1,26) = 12.55; p = .002 and F(1,26) = 9.50; p = .005 (Figures 1.8a and 1.8b).

1.7. Conclusion

This chapter has proposed a new approach for analyzing dissonances that is based on an attention factor: heart rate. An exploratory study has demonstrated the interest of taking the synchronization of visual and auditory alarms with heartbeats into account to explain errors of perception. Four levels of increasing difficulty were proposed to 27 subjects. This increase in difficulty is confirmed by the results of the evaluation of the overall workload
perceived with the NASA-TLX method. For these four levels, there were no significant differences in the participants' heart rhythm, and there were no atypical subjects in either group (synchronous or asynchronous condition). Over the course of the experiments, 15 subjects carried out the four levels with alarm activation synchronized with their heart rhythm, and 12 subjects carried them out without synchronization. Omission errors are significantly more frequent in the first group than in the second. This difference is seen again in the subjective evaluation of performance using the VAS method and of arousal and dominance using the SAM method. The same level of confidence is attributed to the subjective performance mark and to the perceived difficulty of the task in both groups; the effort made to execute the task and the “pleasure” felt during the experiments are also similar in the two groups.

The debriefing carried out after the experiment also allowed future experimental routes to be identified: the majority of the subjects for whom the occurrence of alarms was synchronized with their heart rate did not mention these alarms, or indicated that they did not have the feeling of having omitted many of them, unlike the asynchronous subjects. These verbalizations are expected to be studied in further detail using semantic discourse analysis methods [WOL 05, CAI 16]. Eye-tracking data could also have corroborated these impressions and could have been related to the omission/error data. Construction of the next experimental devices will need to take these eye-tracking constraints into account.


Figure 1.9. Example of genesis of attentional dissonances

This first exploratory study opens up interesting possibilities in terms of the design of socio-technical systems and the assessment of the risks associated with attentional dissonances. Figure 1.9 gives an example of the genesis of this type of conflict, due to the synchronization of physiological signals (heartbeats) with information about the controlled process (alarms).
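By way of illustration, the sketch below shows one plausible way the synchronous/asynchronous alarm manipulation could be implemented in software; the R-peak detection and stimulus functions are placeholders for real ECG acquisition and display code, not the apparatus actually used in this study:

```python
# Illustrative scheduling logic for heartbeat-locked stimuli: in the
# synchronous condition an alarm onset coincides with a detected R-peak;
# in the asynchronous condition the onset is jittered to a random phase
# of the cardiac cycle. Both helper functions are placeholders.
import random
import time

def wait_for_r_peak():
    """Placeholder: block until the ECG front-end reports an R-peak."""
    time.sleep(0.8)  # pretend the inter-beat interval corresponds to ~75 bpm

def trigger_alarm():
    """Placeholder: flash the square and start the auditory signal."""
    print(f"alarm at t = {time.monotonic():.3f} s")

def schedule_alarm(condition, mean_ibi=0.8):
    wait_for_r_peak()
    if condition == "asynchronous":
        # Fire at a random point within the cardiac cycle instead of on the beat.
        time.sleep(random.uniform(0.2, 0.8) * mean_ibi)
    trigger_alarm()

for _ in range(3):
    schedule_alarm("synchronous")
```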

Attentional dissonances of this kind can affect the perception of alarms, their interpretation, the associated actions or the perception of performances. Future research must look at this hypothesis in more depth and at the design optimization of visual and auditory alarms, to avoid activating them in a way that is synchronized with the rhythms of physiological activity.

1.8. References

[AMA 13] AMALBERTI R., “Human error at the centre of the debate on safety”, in AMALBERTI R. (ed.), Navigating Safety, Necessary Compromises and Trade-offs, Theory and Practice, Springer, Berlin, 2013.

[BAL 96] BALLARD J.C., “Computerized assessment of sustained attention: a review of factors affecting vigilance performance”, Journal of Clinical and Experimental Neuropsychology, vol. 18, no. 6, pp. 843–863, 1996.

[BEA 82] BEATTY J., “Task-evoked pupillary responses, processing load, and the structure of processing resources”, Psychological Bulletin, vol. 91, no. 2, pp. 276–292, 1982.

[BEA 05] BEA-TT, Rapport d’enquête technique sur l’accident de tramway survenu à Rouen le 30 août 2004, Report Number BEA-TT-2004-007, 2005.

[BRA 94] BRADLEY M.M., LANG P.J., “Measuring emotion: the self-assessment manikin and the semantic differential”, Journal of Behavior Therapy and Experimental Psychiatry, vol. 25, no. 1, pp. 49–59, 1994.

[BUC 16] BUCKLEY U., SHIVKUMAR K., “Stress-induced cardiac arrhythmias: the heart–brain interaction”, Trends in Cardiovascular Medicine, vol. 26, pp. 78–80, 2016.

[BYE 89] BYERS J.C., BITTNER A.C. Jr., HILL S.G., “Traditional and raw task load index (TLX) correlations: are paired comparisons necessary?”, in MITAL A. (ed.), Advances in Industrial Ergonomics and Safety, Taylor & Francis, London, 1989.

[CAB 93] CABON P., COBLENTZ A., MOLLARD R. et al., “Human vigilance in railway and long-haul flight operation”, Ergonomics, vol. 36, no. 9, pp. 1019–1033, 1993.

[CAB 95] CABON P., IGNAZI G., MOLLARD R. et al., “Sommeil et vigilance des conducteurs de train”, in VALLET M., KHARDI S. (eds), Vigilance et Transports. Aspects fondamentaux, dégradation et prévention, PUL, Lyon, 1995.

[CAB 03] CABON P., MOLLARD R., “Prise en compte des aspects physiologiques dans la conception et l’évaluation des IHM”, in BOY G.A. (ed.), Ingénierie cognitive, Hermes-Lavoisier, Paris, 2003.

[CAI 16] CAID S., HAURET D., WOLFF M. et al., “Fatigue study and discourse analysis of French Unhabited Aerial Vehicle (UAV) operators to understand operational issues”, Proceedings of the Ergo’IA 2016 Conference, New York, doi: http://dx.doi.org/10.1145/3050385.305039, 2016.

[CEG 09] CEGARRA J., MORGADO N., “Étude des propriétés de la version francophone du NASA-TLX”, EPIQUE 2009, 5e colloque de psychologie ergonomique, Nice, France, pp. 233–239, 2009.

[CHE 53] CHERRY E.C., “Some experiments on the recognition of speech, with one and two ears”, Journal of the Acoustical Society of America, vol. 25, pp. 975–979, 1953.

[CHT 15] CHTOUROU H., BRIKI W., ALOUI A. et al., “Relationship between music and sport performance: toward a complex and dynamical perspective”, Science & Sports, vol. 30, pp. 119–125, 2015.

[COM 92] COMSTOCK J.L., ARNEGARD R.J., The multiattribute task battery for human operator workload and strategic behavior research, Technical Report 104174, NASA Langley Research Center, Hampton, United States, 1992.

[COR 03] CORROYER D., WOLFF M., Analyse statistique des données en psychologie : concepts et méthodes de base, Dunod, Paris, 2003.

[CRI 01] CRICHTON N., “Visual analogue scale (VAS)”, Journal of Clinical Nursing, vol. 10, no. 5, p. 706, 2001.

[DAL 07] DALTON B.H., BEHM D.G., “Effects of noise and music on human and task performance: a systematic review”, Occupational Ergonomics, vol. 7, pp. 143–152, 2007.

[DEH 10] DEHAIS F., TESSIER C., CHRISTOPHE L. et al., “The perseveration syndrome in the pilot’s activity: guidelines and cognitive countermeasures”, in PALANQUE P., VANDERDONCKT J., WINCKLER M. (eds), Human Error, Safety and Systems Development, Springer, Berlin, pp. 68–80, 2010.

[DEH 12] DEHAIS F., Conflits et persistance dans l’erreur : une approche pluridisciplinaire, HDR, ISAE and Université Paul Sabatier, Toulouse, 2012.

[DEH 16] DEHAIS F., FABRE E., ROY R., Cockpit intelligent et interfaces cerveau-machine passives, OATAO, Toulouse, 2016.

[DES 43] DE SAINT-EXUPÉRY A., Le Petit Prince, Reynal & Hitchcock, New York, 1943.

[DON 15] DONALD F.M., DONALD C.H.M., “Task disengagement and implications for vigilance performance in CCTV surveillance”, Cognition, Technology & Work, vol. 17, no. 1, pp. 121–130, 2015.

[DUF 14] DUFOUR A., “Driving assistance technologies and vigilance: impact of speed limiters and cruise control on drivers’ vigilance”, Seminar on the Impact of Distracted Driving and Sleepiness on Road Safety, Paris La Défense, 2014.

[ENJ 17] ENJALBERT S., VANDERHAEGEN F., “A hybrid reinforced learning system to estimate resilience indicators”, Engineering Applications of Artificial Intelligence, no. 64, pp. 295–301, 2017.

[FES 57] FESTINGER L., A Theory of Cognitive Dissonance, Stanford University Press, Stanford, 1957.

[FLE 17] FLETCHER K., NEAL A., YEO G., “The effect of motor task precision on pupil diameter”, Applied Ergonomics, vol. 65, pp. 309–315, 2017.

[FON 13] FOND G., MACGREGOR A., LEBOYER M. et al., “Fasting in mood disorders: neurobiology and effectiveness: a review of the literature”, Psychiatry Research, vol. 209, no. 3, pp. 253–258, 2013.

[HAR 88] HART S.G., STAVELAND L.E., “Development of NASA-TLX (Task Load Index): results of empirical and theoretical research”, Advances in Psychology, vol. 52, pp. 139–183, 1988.

[HIC 13] HICKLING E.M., BOWIE J.E., “Applicability of human reliability assessment methods to human-computer interfaces”, Cognition, Technology & Work, vol. 15, no. 1, pp. 19–27, 2013.

[JOU 87] JOULE R.-V., “La dissonance cognitive : un état de motivation ?”, L’année psychologique, vol. 87, no. 2, pp. 273–290, 1987.

[KAH 73] KAHNEMAN D., Attention and Effort, Prentice Hall, Englewood Cliffs, 1973.

[KER 94] KERVERN G.-Y., Latest advances in cindynics, Economica, Paris, 1994.

[LEM 08] LEMERCIER C., CELLIER J.-M., “Les défauts de l’attention en conduite automobile”, Le Travail Humain, vol. 71, pp. 271–296, 2008.

[LEM 14] LEMERCIER A., Développement de la pupillométrie pour la mesure objective des émotions dans le contexte de la consommation alimentaire, PhD thesis, Université Paris VIII Vincennes, 2014.

[LEW 70] LEWIS J.L., “Semantic processing of unattended messages using dichotic listening”, Journal of Experimental Psychology, vol. 85, no. 2, pp. 225–228, 1970.

[MAC 61] MACKWORTH N.H., “Researches on the measurement of human performance”, in SINAIKO H.W. (ed.), Selected Papers on Human Factors in the Design and Use of Control Systems, Dover, New York, 1961.

[MAC 98] MACK A., ROCK I., Inattentional Blindness, MIT Press, Cambridge, MA, 1998.

[MAN 07] MANDRYK R.L., ATKINS M.S., “A fuzzy physiological approach for continuously modeling emotion during interaction with play technologies”, International Journal of Human-Computer Studies, vol. 65, no. 4, pp. 329–347, 2007.

[MIN 17] MINOTRA D., McNEESE M.D., “Predictive aids can lead to sustained attention decrements in the detection of nonroutine critical events in event monitoring”, Cognition, Technology & Work, vol. 19, no. 1, pp. 161–177, 2017.

[OKE 06] OKEN B.S., SALINSKY M.C., ELSAS S.M., “Vigilance, alertness, or sustained attention: physiological basis and measurement”, Clinical Neurophysiology, vol. 117, pp. 1885–1901, 2006.

[PAN 16] PAN X., LIN Y., HE C., “A review of cognitive models in human reliability analysis”, Quality and Reliability Engineering International, vol. 33, no. 7, pp. 1299–1316, 2016.

[PIN 11] PINET J., Traitement de situations inattendues d’extrême urgence en vol : test d’un modèle cognitif auprès de pilotes experts, PhD thesis, Université Toulouse II le Mirail, 2011.

[PIZ 11] PIZZIOL S., DEHAIS F., TESSIER C., “Towards human operator state assessment”, Proceedings of the 1st International Conference on Application and Theory of Automation in Command and Control Systems, IRIT Press, Toulouse, pp. 99–106, 2011.

[POL 12] POLET P., VANDERHAEGEN F., ZIEBA S., “Iterative learning control based tools to learn from human error”, Engineering Applications of Artificial Intelligence, vol. 25, no. 7, pp. 1515–1522, 2012.

[POS 98] POSNER M.I., RAICHLE M.E., L’esprit en images, De Boeck, Paris, 1998.

[PRI 17] PRINCE-PAUL M., KELLEY C., “Mindful communication: being present”, Seminars in Oncology Nursing, vol. 33, no. 5, pp. 475–482, 2017.

[QIU 17] QIU N., RACHEDI N., SALLAK M. et al., “A quantitative model for the risk evaluation of driver-ADAS systems under uncertainty”, Reliability Engineering and System Safety, vol. 167, pp. 184–191, 2017.

[RAC 14] RACHEDI N., BERDJAG D., VANDERHAEGEN F., “Détecter la somnolence des conducteurs dans le transport terrestre : état de l’art”, Journal Européen des Systèmes Automatisés, vol. 48, nos 4–6, pp. 421–452, 2014.

[RAN 17] RANGRA S., SALLAK M., SCHÖN W. et al., “A graphical model based on performance shaping factors for assessing human reliability”, IEEE Transactions on Reliability, vol. 66, no. 4, pp. 1120–1143, 2017.

[REA 00] REASON J., “Human error: models and management”, British Medical Journal, vol. 320, pp. 768–770, 2000.

[REN 97] RENSINK R.A., O’REGAN J.K., CLARK J.J., “To see or not to see: the need for attention to perceive changes in scenes”, Psychological Science, vol. 8, no. 5, pp. 368–373, 1997.

[SAL 16] SALOMON R., RONCHI R., DÖNZ J. et al., “The insula mediates access to awareness of visual stimuli presented synchronously to the heartbeat”, Journal of Neuroscience, vol. 36, no. 18, pp. 5115–5127, 2016.

[SCH 77] SCHNEIDER W., SHIFFRIN R.M., “Controlled and automatic human information processing: detection, search, and attention”, Psychological Review, vol. 84, pp. 1–66, 1977.

[SED 13] SEDKI K., POLET P., VANDERHAEGEN F., “Using the BCD model for risk analysis: an influence diagram based approach”, Engineering Applications of Artificial Intelligence, vol. 26, no. 9, pp. 2172–2183, 2013.

[SHI 85] SHIFFRIN R.M., SCHNEIDER W., “Automatic and controlled processing revisited”, Psychological Review, vol. 91, no. 2, pp. 269–276, 1985.

[SIM 98] SIMONS D.J., LEVIN D.T., “Failure to detect changes to people during a real-world interaction”, Psychonomic Bulletin & Review, vol. 5, no. 4, pp. 644–649, 1998.

[SIM 99] SIMONS D.J., CHABRIS C.F., “Gorillas in our midst: sustained inattentional blindness for dynamic events”, Perception, vol. 28, no. 9, pp. 1059–1074, 1999.

[SIM 05] SIMONS D.J., AMBINDER M.S., “Change blindness: theory and consequences”, Current Directions in Psychological Science, vol. 14, no. 1, pp. 44–48, 2005.

[TOR 01] TORRANCE G.W., FEENY D., FURLONG W., “Visual analog scales: do they have a role in the measurement of preferences for health states?”, Medical Decision Making, vol. 21, pp. 329–334, 2001.

[VAN 99] VANDERHAEGEN F., “Toward a model of unreliability to study error prevention supports”, Interacting with Computers, vol. 11, pp. 575–595, 1999.

[VAN 03] VANDERHAEGEN F., Analyse et contrôle de l’erreur humaine, Hermes-Lavoisier, Paris, 2003.

[VAN 04] VANDERHAEGEN F., JOUGLET D., PIECHOWIAK S., “Human-reliability analysis of cooperative redundancy to support diagnosis”, IEEE Transactions on Reliability, vol. 53, pp. 458–464, 2004.

[VAN 09] VAN ELSLANDE P., JAFFARD M., FOUQUET K., FOURNIER J.-Y., De la vigilance à l’attention – Influence de l’état psychophysiologique et cognitif du conducteur sur les mécanismes d’accident, INRETS Report Number 280, 2009.

[VAN 10] VANDERHAEGEN F., “Human-error-based design of barriers and analysis of their uses”, Cognition, Technology & Work, vol. 12, pp. 133–142, 2010.

[VAN 14] VANDERHAEGEN F., “Dissonance engineering: a new challenge to analyse risky knowledge when using a system”, International Journal of Computers Communications & Control, vol. 9, no. 6, pp. 750–759, 2014.

[VAN 16] VANDERHAEGEN F., “A rule-based support system for dissonance discovery and control applied to car driving”, Expert Systems with Applications, vol. 65, pp. 361–371, 2016.

[VAN 17a] VANDERHAEGEN F., CARSTEN O., “Can dissonance engineering improve risk analysis of human-machine systems?”, Cognition, Technology & Work, vol. 19, no. 1, pp. 1–12, 2017.

[VAN 17b] VANDERHAEGEN F., “Toward increased systems resilience: new challenges based on dissonance control for human reliability in Cyber-Physical&Human Systems”, Annual Reviews in Control, vol. 44, pp. 316–322, 2017.

[VAN 18] VANDERHAEGEN F., JIMENEZ V., “The amazing human factors and their dissonances for autonomous Cyber-Physical&Human Systems”, First IEEE Conference on Industrial Cyber-Physical Systems, Saint Petersburg, Russia, pp. 597–602, 14–18 May 2018.

[WIC 09] WICKENS C.D., ALEXANDER A.L., “Attentional tunneling and task management in synthetic vision displays”, International Journal of Aviation Psychology, vol. 19, no. 2, pp. 182–199, 2009.

[WOL 05] WOLFF M., BURKHARDT J.M., DE LA GARZA C., “Analyse exploratoire de ‘points de vue’ : une contribution pour outiller les processus de conception”, Le Travail Humain, vol. 68, no. 3, pp. 253–284, 2005.

[YAM 13] YAMANE S., MATSUMURA N., “The clock ticking changes our performance”, Shikakeology: Designing Triggers for Behavior Change: Papers from the AAAI Spring Symposium, Stanford, United States, pp. 113–118, 25–27 March 2013.

2 System-centered Specification of Physico–physiological Interactions of Sensory Perception

2.1. Introduction

The resilience capacity of a socio-technical system is an example of a multi-domain requirement [LEV 17, VAN 17], the justification of which can be found in the role of humans in the control loop, who have to face the unexpected a priori in operational situations [GAL 06, BOY 11]. Moreover, the recurring a posteriori observation of systemic failures [BOA 13, BOY 14] raises questions about what enables the tangible togetherness (Chapter 7 of [BOA 08]) when natural – physical and human – and artificial “parts” are related to form a whole. Consequently, there is a consensus about the need to combine a priori the bodies of multidisciplinary engineering knowledge, respectively human-centered and technique-centered [RUA 15], in order to satisfy socio-technical requirements in a system project.

Rather than proposing an integrative framework, or even an ontology, of elements of these bodies of multidisciplinary knowledge (which remain bounded by their respective specialist engineering skills), in section 2.2 we present the constitutive elements of a simplex orchestration [BER 09] of the interdisciplinary knowledge1 in a system project. It is the situation system, and first a

Chapter written by Jean-Marc DUPONT, Frédérique MAYER, Fabien BOUFFARON, Romain LIEBER and Gérard MOREL.

1 Meaning a whole knowledge resulting from the orchestration of the partial multidisciplinary knowledge to {encode ⇄ decode} a situation of interest [ROS 12].


certain situation reality itself, which integrates the multidisciplinary knowledge assets required to dynamically form the interdisciplinary knowledge that will satisfy, in our case study, a requirements specification of operating safety targeting an “Artifact–Human”2 interactive control of a critical industrial process. This study context emulates in a plausible manner this type of system requirement, described as critical due to the nature of the “flowing matter–energy” to control, which can be the source of exogenous physical phenomena.

In section 2.3, we are particularly interested in what enables the tangibility of the togetherness of the sensory perception interaction between an “artifact-source” of an alarm and a “human-sink” of control, as a necessary condition, though not the only one, to afford the required functionality in an operational situation. Formalizing the physico–physiological interaction under investigation combines elements of multidisciplinary knowledge from integrative physiology [CHA 95] and from perception/action [BER 06] in order to specify measurable properties enabling the verification of the targeted auditory perception requirements.

The result, in section 2.4, of this interdisciplinary system-centered specification of a sensory perception interaction is an executable model that is verified within its own multidisciplinary knowledge. Its validation as a system constituent model results from a concurrent execution with all the multidisciplinary models that together compose the system architecture, aiming to respond to the situation system emulated by our experimentation platform (Figure 2.1).

In conclusion, this work contributes to the most recent recommendations “[…] to provide modelling and simulation from the very beginning of a design project” [BOY 14]. More generally, it raises awareness of the tangibility prerequisite that enables togetherness between the physical, technical and human parts of systems that leave an increasingly broad role to “digital dematerialization”. In a complementary manner, the proposed cognitive process of simplex orchestration opens up other perspectives for deployment to ensure interdisciplinary knowledge harmony, such as, for example, that of a model-based systems engineering project as well as of a “learning by doing” educational body.

2 Rather than “Man–Machine” to mark our integrative prerequisite assumption through physics.

Figure 2.1. System context of the model-based interdisciplinary specification of the targeted physico–physiological interaction of sensory perception studied [BOU 16]. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip

2.2. Situation-system-centered specification of a sensory perception interaction

“System” is currently the most popular unit used to objectify the reality of a situation that is perceived to be complex [PÉN 97], in order to require a priori, or to recover a posteriori, an overall cohesion between elements that operate together with a common purpose in a given situation. We note that most often, when “system” is mentioned, no distinction is made concerning the nature of the constituent elements, whether they are implicitly natural and human or explicitly artificial. With this in mind, the question of the conceptualization of “what is a system?” continues to be raised by a wide community of researchers and engineers, from the originating project of a “general system theory” [POU 13] and its interactions with cybernetics [WIE 48, FOR 61] up to the most recent work of the worldwide community of systems engineering [BKC 17].

It is, therefore, possible to agree on the fact that the “whole–part” {encoding ⇄ decoding} relationship of a reality [ROS 12] is not as trivial as it seems [BOA 13]. Thus, it is necessary to inquire about its outwards materiality as an integrative foundation of multidisciplinary systems engineering, respectively technical- and human-centered. This system-centered informal representation of a local reality is originally built intentionally by a “sentient being” who is aware that a
real situation requires a certain wholeness. Its formal representation must be looked into in more detail at another moment of engineering, in order to {encode ⇄ decode}, by extension, the source–sink phenomenological potential of interactions, which are not only causal and top-down (endogenous) but also non-causal and bottom-up (exogenous). The resulting situation system is a designation prerequisite to a respondent system of interest, as a precondition to its definition.

Our work is based on a pragmatic body of interdisciplinary knowledge constitutive of a system-centered architecting specification, in harmony with human-centered and technical-centered multidisciplinary knowledge. We present this process of system specification for our study context, around the targeted sensory interaction, in such a way as to focus the multidisciplinary modeling on the situation system model that constitutes the interdisciplinary visibility of a reality to architect a respondent system.

2.2.1. Multidisciplinary knowledge elements in systems engineering

We subscribe to the recent pragmatic vision of Lawson [LAW 10], who mainly raises the question of “why do Humans make systems?” when a problem or an opportunity results from the interaction of at least two elements in a perceived or targeted situation. In our case, this opportunity results from the change of paradigm targeted by the new Industry 4.0 era, relating to the digitalization of the interactivity between operation and engineering system assets. With this in mind, the recent work of the “connexion”3 project (Figure 2.2) aims to demonstrate that it is a relational continuum of information (Rosen {encoding ⇄ decoding} relations [ROS 12]) that should result from a system approach, whereas the usual “divide-and-conquer” principle of the multidisciplinary approach leads rather to knowledge and skills in silos to integrate. One of the main drawbacks is a combinatorial heterogeneity of representations and of the HMI (human–machine interface) that goes against unifying mental control patterns.

Our work focuses more precisely, at the scale factor of our case study, on certain aspects of the impact of this digitalization on the interdisciplinary process of architecting an interactive-aided control system of an industrial process. The resulting collaborative orchestration is based on multidisciplinary models that can be executed

3 Innovative architecture for command and control platforms adapted to nuclear power plants; www.cluster-connexion.fr.

together to specify, by successive refinements, the control of the situation system of interest. We first present certain elements of multidisciplinary knowledge in systems engineering, in the broad sense of {concept, theory, model, method, methodology, language, tool}, that are constituent, in a second step, of our proposition for the collaborative orchestration of a system-centered interdisciplinary specification.

Figure 2.2. Towards coupling between multidisciplinary situations of operation and engineering of a process control system [GAL 12]

2.2.1.1. Technical-centered engineering

The “Guide to the Body of Knowledge in Systems Engineering” is regularly published internationally [BKC 17] on the variety of available techniques, which have also been the subject of publications on a national scale, such as Discovering and understanding systems engineering [MÉN 12] and Engineering and architecture of multidisciplinary systems [FAI 12], written under the expertise of active members of INCOSE and AFIS4. This body of knowledge relies on certain generic principles of a systemic approach [BER 68] to apply standardized engineering processes in a recursive, iterative and concurrent manner, scheduled by project management templates.

4 International Council on Systems Engineering, www.incose.org; French equivalent: French Association for Systems Engineering, www.afis.fr.

The model-based systems engineering approach aims to contrast with the traditional document-based approach by replacing the basic project artifact of “process” with that of “model”, without, however, clearly addressing this tautological difference, since both remain homomorphic representations of a reality [LEM 95]. The perspectives that structure the modeling approach focus especially on the exploration of the solution space, in such a way as to satisfy the operational users’ needs for a required system, prior to the detailed architectural design of its components and their assembly. The allocation of an operational architecture, and of its alternatives, onto a physical architecture of constituents, according to several levels of top-down and bottom-up implementation/abstraction, is part of the industrial best practices that we do not present in this chapter (Figure 2.3). Much effort is made to ensure the ontological harmony of these multiple representations in order to reach a certain multidisciplinary interoperability of knowledge [GIO 15]. Their multidisciplinary orchestration may also be carried out with a unified language of system-oriented modeling, such as the de facto standard SysML [CLO 15], or of domain-oriented modeling, for example multiphysics [RET 15], to translate the intention of the system architect into the design.

Figure 2.3. Coupling relations between systems architecting levels. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip

Let us note that this orchestration metaphor expresses, among other things, the nonlinear reasoning [KRO 14] involved in architecting a system. This very essence of systems engineering can be constrained by a procedural organization of a project that does not correctly articulate the sequential management of the system development with the concurrent management of the operational processes of multidisciplinary engineering (Chapter 6.3, Volume 1 in [FAI 12]). Here, a different
hindrance to scientific harmony between communities comes into play, which can limit the human capacity for heuristic resolution of the problem/solution coupling [NUG 15].

2.2.1.2. Human-centered engineering

Human factors are an essential part of the engineering of a technical system, mainly in studies of operationality [VOI 18] and in experience feedback. In this sense, our first works [LIE 13] related to the nature of an “Artifact–Human”5 interaction {I} within an anthropocentric approach synthesized by the AUTOS model (Figure 2.4, middle). Thanks to experience feedback, the satisfaction of the function of control {T} of an artifact {A} by a field operator {H} in an organized situation {O} of a maintenance support system {S} is highlighted a posteriori using the color orange (Figure 2.4, top left). It should also be noted that, from a technical point of view, the multidisciplinary specification of operational security, which requires the functional and technological independence of the two constituents for closing and locking this artifact, has not necessarily ensured a priori the logical conjunction {∧} of their operationality. The proof given with the “B” method thus leads us to question both the system-centered organization of the project and the written text of procedures, which does not necessarily capture the formal building of the relative action sequences as actinomies.

Figure 2.4. Operational situations of multidisciplinary specification of sensory perception interaction. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip

5 This order expresses the physical materiality on which we base our approach.

This work focused on specifying the physical quantity of “good orange color (photons)” that the artifact must provide so that the “afforded” functionality, among others, is “correctly perceived” in order to “correctly act by detour” [BER 06] in a situation. Indeed, we consider that the study of “Artifact–Human” interactions requires an organic understanding of the conditions of transmission of symbolic properties between a technical object “source” and a human object “sink” – written {affordance} – in order to be able to specify measurable requirements of the technical source. We therefore suggest studying the physico–physiological nature of this transmission and the conditions under which the physiological processes take place on the biological substrate, based on work in perception and action physiology as well as in integrative physiology.

Beyond this result, this work reinforced the need to system-center a priori the multidisciplinary organization of engineering, which acts as architect in each of its respective domains but not always in an interdisciplinary way in relation to reality. We then explored this system dimension in the context of an extension of the problem frames approach in software engineering to socio-technical systems [HAL 05]. This approach demonstrates that the specification of a respondent system {SoI} to a system requirement {RoI} of a domain of interest {WoI} must formally satisfy three concurrent specifications (Figure 2.4, right) according to the logical entailment ⊢:

{WoI, SoI ⊢ RoI}, with I_T ∧ I_H ∧ I_I → SoI

The technical-centered specification {I_T} prescribes the control {PC} of the process dynamics {PO} of an artifact in situation {WoI}, depending on the predicate:

WoI ∧ PC ∧ PO → I_T

The human-centered specification {I_H} prescribes the human capacity {HO} to control {KC} an artifact in situation {WoI}, depending on the predicate:

WoI ∧ KC ∧ HO → I_H

The interface-centered specification {I_I} prescribes the technical capacity {IC} of an artifact to interact with a human {HO} in situation {WoI}, depending on the predicate:

WoI ∧ IC ∧ HO → I_I

It should be noted that the technical specification must contain, in the absence of being able to control them, the unpredictable interactions between reality {WoI} and the human {H}.

Let us now note the genericity of this interface problem {IC}, which we encounter, for example, with the analogical “Turn Push Lights” control buttons [CHÉ 12], whose digital substitution must preserve the affordance functionality of a situation of states reflecting a situation system in the human-centered paradigm of power plant control (Figure 2.4, bottom left). The same applies to the other senses involved in building all the mental patterns of situation, monitoring and action, such as:

– touch, to perceive the vibrations of a pump in operation, to operate manual valves, etc.;

– hearing, to perceive the “gargling” of water in a pipe, the whistling of air during ventilation operations, an alarm in the control room or in the field, etc.;

– sight, to perceive a water level in a tank, an indicator, an analog/digital display, etc.

In section 2.3, we detail the study of a socio-technical situation of “Artifact–Human” interaction involving the sound perception of an alarm by a human operator, and we underline the prerequisite of right perception of this signal, described as sensory affordance, adapted from Hartson [HAR 03] (Table 2.1).

Type of affordance | Description | Example

Cognitive | Design feature that helps users in knowing something | The characteristic tone of a sound alarm allows the operator to be warned of an incidental or accidental situation

Physical | Design feature that helps users in doing a physical action | The periodicity of the alarm is the most pertinent parameter to describe the various categories of urgency, from the alert to event confirmation [SUI 07]

Sensory | Design feature that helps users sense something | The sound power of an alarm allows the necessary but not sufficient conditions of perception of the alarm to be characterized (section 2.3)

Functional | Design feature that helps users accomplish procedures of control activity | In an incidental situation, the sound alarm is triggered to alert the control operator (who will follow a procedure of control activity to re-establish the system in a secure operational state)

Table 2.1. Types of affordance for a physico–physiological interaction of auditory sensing
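The sensory row of this table lends itself to a simple quantitative check. The sketch below tests whether an alarm's sound level at the operator's position clears the ambient noise by a margin; the 15 dB margin and the free-field attenuation model are assumptions introduced purely for illustration, not values taken from this chapter:

```python
# Illustrative audibility check for the sensory affordance of Table 2.1:
# the alarm is treated as audible only if its sound pressure level (SPL)
# at the operator's position exceeds the ambient noise by a margin.
# The 15 dB margin and the -6 dB per doubling-of-distance model are
# assumptions for illustration only.
import math

def spl_at_distance(spl_at_1m: float, distance_m: float) -> float:
    """Free-field attenuation: -6 dB per doubling of distance."""
    return spl_at_1m - 20 * math.log10(max(distance_m, 1e-6))

def is_audible(alarm_spl_1m: float, distance_m: float,
               ambient_spl: float, margin_db: float = 15.0) -> bool:
    return spl_at_distance(alarm_spl_1m, distance_m) >= ambient_spl + margin_db

# Example: a 95 dB(A)-at-1-m alarm heard 8 m away in a 70 dB(A) control room.
print(is_audible(95.0, 8.0, 70.0))  # False: the necessary condition is violated
```

This expresses exactly the “necessary but not sufficient” character of the sensory condition: a positive result guarantees nothing about perception, while a negative result rules it out.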

Our approach to the measurability of “Artifact–Human” interaction properties aims to anticipate certain human factors as early as possible by modeling and simulation (section 2.4), in order to limit, in fine, the experience feedback for the type of situations targeted. This is in line with the 2025 vision of systems engineering [INC 14] concerning the anticipation of the performances of critical systems and the mastery of their development as early as possible in the project, in order to transform virtual system models into reality.

Finally, it should be noted that human factors also relate to the organization of the system project, for which it seemed appropriate to us to better exploit the human capacity to face, by detour, the dilemma of a system architect having to allocate a system function to a human or a technical agent, or even to both (Designer’s Dilemma, in [MIL 14]), especially since the perception of knowledge by one project stakeholder with regard to another only very partially reflects the dynamics of their respective knowledge. In our opinion, this is a relevant issue for building an interdisciplinary knowledge.

2.2.2. Interdisciplinary knowledge elements in systems engineering

It is important to note that many systems engineering stakeholders do not in practice have a sufficient (direct) perception of the reality of a situation, which is nevertheless the primary phenomenological source of measurability of the requirement properties for verifying and validating a system specification (“do the right job right” [FAN 12]). The consequence of this non-visibility is a questionable system added value at the level of functional architecting (the system concept) [LEM 95], which therefore limits the added value at the level of ontological architecting (the system being), with the result of implementing systemic patches in return from operation. There is in fact confusion between the process and the result of a specification, limited most often to the single description/prescription of what is expected, failing to sufficiently translate the {encoding ⇄ decoding} of a situation system.

2.2.2.1. System-centered orchestration of multidisciplinary knowledge

We primarily focus our work on exploring the constituent problem space of a situation system, which depends on abstraction aptitudes for thinking in systems [ALL 16]. We support this heuristic pathway with the mental schema of the holonic paradigm (Figure 2.5, left), which is particularly appropriate for reassessing the uncertainty of a situation in order to explore its non-ordered domain beyond what is already known [KUR 03], such as the interactions implicitly considered to be contained by the system surroundings, but perhaps not. This holonification of a situation system, as a possible filter for all the conceptualizations of a reality, enables one to “design for the unexpected” [VAL 17] a holarchical system architecture {holon, Holon} that considers any object of interest as potentially a constituent {h} of a composite object {H}, until it becomes an elementary component of a realized solution architecture [JIN 06].

Thus, we understand the powerful modeling based on the “holon” [KOE 67] as a constituent block of the system DNA [BOA 09a] for orchestrating, in a centripetal way, an interdisciplinary specification. For example, a control operator is a human-centered part {h_human} of a control artifact {H_artifact} of an operational system, as well as a human-centered part {h_human} of a “sentient being” whole (Wilber, in Chapter 1 of [MEL 09]) {H_Human} originally requiring an evolution of a situation system. This is therefore the case for our study hypothesis, which founds the architecting togetherness on the physical tangibility of the {h_artifact–h_human} sensory interaction as a necessary but not sufficient prerequisite to keep the system whole. This binding into unity (oneness) is generalized around the “flowing object” that gives life into being to the system in a diachronic way – over time – through the stimulated interactions, as a complement to the usual synchronic way – between instants – (refer to [GAL 99] to understand the practical importance of these two architecting dimensions of requirements). The objective is to model the system dynamics as early as possible, not only the negative causality loops (balance) but also the positive causality loops (reinforcement), in order to designate what must be controlled or can be contained by an “artifact–human” whole.

The designation of a system of interest in the form of systemigrams and causal loop diagrams results in an architecture-solution draft, the wholeness of which can be ordered according to the 21 concepts of the conceptagon (Figure 2.5, right). We understand that delimiting the boundary between the perceived situation system and its broader surroundings is not trivial and results from a dynamic refinement during the building of an interdisciplinary knowledge of the targeted reality.

Figure 2.5. System-thinking for system-centering the architecture. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip

The coupling diagram proposed by [LAW 10] points out, in particular, the control loop that a respondent system has to keep in wholeness relative to a specified situation system. The interpretation that we make of it (Figure 2.6) goes further into the detail of the cognitive nature of the coupling relations between multidisciplinary and interdisciplinary engineering knowledge, which individually provide a partial response, and together a holistic response, to the requiring operational situation system. In addition, we detail the specifying nature of these system-centered coupling relations {R ⇄ M} between the requiring “problem space” and the responding “solution space”, in order to orchestrate an interdisciplinary knowledge between the various constituent multidisciplinary blocks.

Our proposed orchestration results from an integrative process of the multidisciplinary knowledge that is potentially available over the course of the refinement of the specifications of a system project. This integration dynamics results from an allocation process {C ↔ A} of the available multidisciplinary knowledge assets, which become appropriate throughout the specification refinement process. We note, for example, that the control-artifact discipline recursively works as an architect with regard to hardware-centered and software-centered multidisciplinary knowledge. With this in mind, the same goes for other constituent blocks with regard to the component human-centered and physics-centered multidisciplines.

Figure 2.6. Cognitive and specifying interpretation of the coupling diagram. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip

We argue that the organization of the system project must consequently align itself with the targeted situation system wholeness in order to orchestrate the multidisciplinary “problem–solution” alternatives that are characteristic of the system architecting rationale. Thus, we have considered that the unique human property, called simplexity, of making the most likely decision by “detour” [BER 09] when faced with complex operational situations must also be the privileged constituent of the interdisciplinary orchestration of systems engineering, benefitting from collaborative working technologies. This orchestration metaphor promotes human-in-the-loop inquiry, analysis and synthesis of a suitable system-in-the-loop solution, with the difficulty that the partition must be specified (written) in a collaborative manner over the course of the interdisciplinary knowledge building. We noted, as also demonstrated by Retho [RET 15] in multiphysics system design, that this dynamics of architecting a system also points out multidisciplinary impact factors beyond the basic ones.

As feedback from our experiments, we have also observed the need to enable multidisciplinary interactions to orchestrate themselves between disciplinary knowledge assets, to a certain extent in a “centrifugal” way not directly aligned with the “centripetal” way required by the system project. It is the system under architecting specification, and the situation system in our case, which is the interdisciplinary integrator of these multidisciplinary contributions, which can become company ontology constituents in response to
recurring situations of interest to control in the broad sense. In the following sections, a scenario exemplifies certain “simplex” requesting–responding loops of this interdisciplinary orchestration process.

2.2.2.2. Model-based system-centered specification

We have adapted to our domain the interpretation of a specification addressed in software engineering by the problem frames approach [DOB 10]. The result of the system-centered engineering process is a specification in a world of interest {WoI} if a proof is given that an implementation of this solution {SoI}:

– from a responding solution space of system-centered engineering knowledge {K};

– in the requiring problem space of operational situation-centered knowledge {k};

satisfies the requirements of interest {RoI} by successive iterations, according to the entailment relationship {WoI, SoI ⊢ RoI} [JAC 97]:

K, SoI ⊢ RoI [HAL 02]   [2.1]

By arguing that the source of the requirements of interest {RoI} is in reality {WoI}, this approach distinguishes the informal designation of a {SoI} from its formal definition by the nature of the properties of interest {PoI} contained in {RoI}. The “optative” properties explicitly express what the artifact of interest {AoI} must control, but they subordinate that to the implicit assertion that other “indicative” properties are controlled by the situation system {SSoI} itself or are contained in a non-direct manner by {AoI}. With regard, for example, to a system-centered resilience requirement, it is therefore necessary to make visible as soon as possible the phenomena of interest {PhoI} upon which the measurability of the interaction properties {PoI} is contingent, as presented in section 2.2.3. This in-depth interdisciplinary knowledge {K} of a situation system complements the original knowledge {k} of a situation for the incremental validation of the system maturity levels6. This overcomes the mere conformity of a part, and even of an assembly of parts, including one based on formal techniques [ZAY 17], if the phenomenological source of the verification properties is not sufficiently well grounded.

6 In situ system validation {⊢} at TRL (Technology Readiness Level) 5 for our experimentation, and contractual validations in silico {⊩} between domains of expertise.

The specification process of the problem frames approach was then explored by Jin [JIN 06], who revisited the meaning of a requirements specification within the holonic paradigm framework. The interest of these works is to better distinguish the designation of a system from its definition, in particular by arguing that the system knowledge of the phenomena of interest, which are source and sink of the interactions within the problem space, enables the derivation of a new specification relationship according to:

k, RoI → SoI   [2.2]

Our cognitive interpretation of these works generalizes a set of coupling predicates {→, ←} between a requiring problem space (pb) and a responding solution space (sol), according to:

– descriptive specification of problem-oriented requirements according to:

K_pb, RoI → K_sol   [2.3]

– verified specification of solution-oriented models according to:

K_sol, RoI → M_sol   [2.4]

– prescriptive specification of solution-oriented models according to:

K_pb ← M_sol, K_sol   [2.5]

– validated specification of problem-oriented models according to:

K_pb, M_sol ⊩ RoI   [2.6]

The predicates [2.3] and [2.5] highlight a major difficulty of multidisciplinary knowledge interoperability for each source–sink interaction: the knowledge perceived by each requiring or responding source is indeed only its visible (known) representation of the broader potential knowledge of the corresponding targeted sinks. In order to preserve the individual cognitive dynamics of building, by detour, a partial multidisciplinary model in request of, or in response to, a specification, we limited the interdisciplinary representation to be shared, in SysML language, to only the essentials for system-centered orchestration. The orchestration process results from refinement iterations based on the twin-peaks model [HAL 02], throughout the life cycle of a system project, between multidisciplinary bodies of knowledge, respectively interacting as the source of a problem-oriented requirements specification (predicate [2.3]) and the sink of a solution-oriented models specification (predicate [2.5]), each being verified in its
own domain (predicate [2.4]), before contractual validation (predicate [2.6]). We note that this “formal system” that founds a specification requires the solution to be validated in fine in a real situation (predicate [2.1]), from which all the requirements are issued and traced.

In section 2.4, we demonstrate the implementation of a co-modeling/co-simulation environment that enables this collaborative orchestration process for the system-centered validation of the prescriptive model specification of our case study. This type of environment aims to break away from the in-silo approach of multidisciplinary systems engineering, in order to orchestrate the dynamic exchange of knowledge and models beyond document sharing. It is all the more a structuring environment in system-centered engineering since digital technology facilitates the individual cobbling of hardware/software-in-the-loop partial solutions that can complicate system architecting.

2.2.3. Specification of a situation system of interest

We present here some elements of multidisciplinary knowledge for architecting a situation system. In response to the system architect’s request, the situation system expert – co-opted in the project team as architect of the relative multidisciplinary domain – {encodes ⇄ decodes} the targeted situation dynamics in the form of a causal loop diagram (Figure 2.7). He then refines the resulting model in a collaborative manner, with all the appropriate multidisciplinary knowledge, in order to specify a verified executable model of the targeted situation system in the form of a stock-flow diagram (Figure 2.8). Lastly, the essential elements of this model are translated into SysML language in the form of an activity diagram (Figure 2.9) for the purposes of the interdisciplinary orchestration of the system-of-interest validation.

The situation of interest emulated by our experimentation platform (Figure 2.1) must perform an emergency function of water supply. This control situation is triggered by a local audible alarm warning of a degraded operational situation in the secondary water circuit of the steam generators (SGs). It must ensure one of the three safety functions of the critical industrial process under study, requiring {RoI} to maintain the fuel cooling under all circumstances by removing the residual energy, including after the reactor is shut down [BOU 16].

2.2.3.1. Multidisciplinary knowledge elements of a situation

In a socio-technical context, we call “situation” all the moments during which interactions between humans and their environment (work, life, etc.) take place in the form of reciprocal actions (according to [ZAS 08]). These actions lead to a result subject to external requirements and conditions [GIR 01]. A situation is put into being and into service by human intention, as an entity, in terms of elements and their interactions [DEV 95], and it makes sense to its users [BOY 98, MIL 15, END 16]. Thus, in our study, field operators, trained to apply safety procedures, as well as operational engineering specialists {k}, are de facto constituents of the targeted control situation (Figure 2.6, left), due to their phenomenological understanding, among other types of expertise, of the situation complexity [FRA 14]. Our situation of interest must, in order to be operational, commit in its totality humans as well as artifactual elements (alarm and technical instrumentation installations) and physical elements such as the water flow that must be controlled. This commitment of resources in the entire targeted situation enables the delimiting of its boundaries (drawn in Figure 2.6, left), which designates it in its immediate surroundings and contextualizes it within its critical industrial process environment. Its operating procedures depend on the safety aspects of the context [GIR 01] for which it has been designed. Its commissioning depends on Technical Specifications for Operation (TSO) [APP 98], which enable but limit its actions within its operational environment [ZAS 08].

The problem frames approach and its extension to socio-technical systems demonstrated that the dynamics of a situation is made perceptible to its users through phenomena manifested by the changes of state of its elements. These phenomena afford their human, artifactual or physical nature, which de facto enables the designation of the constituents of our situation of interest. Moreover, our exploration of a logic of effects [DUC 96] and of a states-based approach [APP 98] enables us to argue that the continuum of phenomena makes them the sources and sinks of all the interactions of the targeted situation (Figure 2.6). So, for our study, we have defined as exogenous a phenomenon whose origin is in the surroundings of the targeted situation and which is the source of interactions triggering internal actions of it, or is the result of these actions – for example, the phenomenon of perception of an operator causing an action of cognition (Figure 2.7). We have defined as endogenous a phenomenon whose origin is in the situation of interest and which is the source or the sink of interactions triggering internal actions of an element of the situation – for example, the phenomenon of computation of an artifact (Figure 2.7). In the rest of this section, we denote by {PheE_H, PheE_A, PheE_P} and {PheI_H, PheI_A, PheI_P}, respectively, the exogenous and endogenous phenomena


having a human (H), artifact (A) or physical (P) origin. According to Hall and Rapanotti [HAL 05], a phenomenon of interest is described as causal when it can be controlled or contained within the targeted situation. It is described as biddable {PheE_H} when it is the cause of a problem and, even when identified, leads to non-determinism in a situation of interest, such as can be related to "human nature".

Among these multidisciplinary understanding elements, we designate {phenomena} as the unit of specification of a situation as required in its operational reality. Thus made visible, a phenomenon becomes a source of measurability of the properties {P} that are required of a situation.

2.2.3.2. Descriptive specification of the targeted control situation

These multidisciplinary knowledge elements are applied to designate the targeted control situation in response to a requirements specification {R} requested by {k} from the system architect. Since these requirements are expressed in the operational reality of the situation, we adopt a systems thinker's attitude (Figure 2.5, left) to model the phenomena which are source and sink over time of interactions between its elements and with its surroundings. By applying the system dynamics approach [FOR 94], we designate, in the form of a causal loop model {M} (Figure 2.7), the causal implications of these phenomena, whether they are of physical, artifactual or human nature (respectively, red, blue or green).

This initial {encoding ⇄ decoding} relation [ROS 12] of the targeted situation, in collaboration with {k}, enables us to model how phenomena can build an abnormal or normal operation of our situation of interest. The presence of a reinforcement causal loop (R) (Figure 2.7) in the model, made up of only (S) links, reveals the amplification of some phenomena that lead to an abnormal situation:

– the more a manifestation of a phenomenon {PheE} disturbing the phenomenon {PheE} of water circulation in the secondary circuit of the SG occurs, the more the phenomenon of emission {PheE} of an alarm sound occurs, and the more the studied sensory interaction {I} is realized;

– but because of the presence of a non-deterministic phenomenon {PheE}, the less the phenomenon of its perception {PheE} by an operator occurs, therefore the less the interaction {I} is taken into account, and consequently the less the control situation of emergency water circulation is commissioned.


So, it appears that the non-deterministic phenomenon {PheE} is at the origin of the requirements specification {R} by {k}. The normal operation of our targeted control situation is designated in {M} by a balance causal loop (B) (Figure 2.7):

– the more a manifestation of a phenomenon {PheE} disturbing the phenomenon {PheE} of water circulation in the secondary circuit of the SG occurs, the more the phenomenon of emission {PheE} of an alarm sound occurs;

– the more the studied sensory interaction {I} is realized, the more the phenomenon of its perception {PheE} by an operator occurs, and thus the more the control situation of emergency water circulation is commissioned;

– the less the manifestation of the emission phenomenon {PheE} of an alarm sound occurs, the more the control situation of emergency water circulation is operational.

The presence of both (O) and (S) links in this causal loop denotes a certain mutual balance between the various phenomena and provides an early indication of a certain operationality of the targeted control situation.

In short, this initial descriptive specification {M} of the targeted control situation must from now on be refined in order to be executable for purposes of verification in our domain of multidisciplinary knowledge {K}.
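The polarity rule used here can be stated compactly: a causal loop is reinforcing when it contains an even number of opposite-direction (O) links, and balancing otherwise. As an illustration only (the function and the link lists below are our own sketch, not part of the chapter's models), this reads in Python as:

```python
def loop_type(link_polarities):
    """Classify a causal loop from its link polarities:
    'S' (same-direction) and 'O' (opposite-direction) links.
    An even number of O links -> reinforcing (R); odd -> balancing (B)."""
    return "R" if link_polarities.count("O") % 2 == 0 else "B"

# Loops of Figure 2.7 (the link lists are illustrative placeholders)
print(loop_type(["S", "S", "S"]))   # R: amplification, abnormal situation
print(loop_type(["S", "O", "S"]))   # B: mutual balance, normal operation
```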

Figure 2.7. Causal loop diagram of the targeted control situation. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip


2.2.3.3. Prescriptive specification of the targeted control situation system

Our multidisciplinary knowledge {K} of the system dynamics approach leads us to examine the targeted control situation as a system (the targeted control situation system) [LAW 10], in a collaborative manner with the relevant multidisciplinary knowledge. The resulting specification is verified by the execution of the model {M} before being transferred in SysML language {M} to the system architect.

2.2.3.3.1. Multidisciplinary specification

The previous model {M} is refined in the form of a stock-flow diagram (Figure 2.8) of the targeted control situation system with the Stella7 tool, a modeling and simulation technology for the "systems thinker". Here, our attitude is to identify and represent the operational behaviors realized by the phenomena and the interactions occurring in a normal as well as in an abnormal situation. {M} is refined according to rules of correspondence between the causal links (O or S) previously specified and {flow-stock-flow} triplets [PON 11]. Thus, we model (Figure 2.8) the phenomena of interest as stocks (illustrated by rectangles), where their dynamics – that is, whether or not they manifest in the situation – are represented by empty or full stocks. We model the interactions that take place in the situation system as flows (illustrated by directional pipes and valves) that represent their circulation between their source and sink phenomena. Lastly, the causal implications of the changes brought about by the interactions exiting or entering their source or sink phenomena are modeled by connectors (illustrated by arrows).

Thus, the commissioning of the targeted control situation is the consequence of an accumulation in the stock of the emission phenomenon {PheE}, triggering the flow of the interaction {I} with, as a result, an accumulation in the stock of the perception phenomenon {PheE} relative to the field operators. A contrario, a "non-accumulation" in the stock of the perception phenomenon {PheE} reveals a bad flow of the interaction {I} following an accumulation in the stock of the biddable phenomenon {PheE}, the causes of which can be an inappropriate or unsuitable behavior of the emission phenomenon stock {PheE}.

Simulation (Figure 2.8, right), in discrete time, by the execution of {M} in the Stella tool, is realized by modulating the stocks {PheE} and {PheE} in a binary manner. It allows verification of the optative and indicative properties of

7 www.iseesystems.com.


interest {P}, as required by {k}. It highlights the importance of the quality of behavior of the emission phenomenon {PheE} for guaranteeing a correct "flow" of the targeted sensory interaction {I}, in order to allow the relevant field operators to perceive correctly and act correctly (see Figure 2.15).

This second {encoding ⇄ decoding} relation [ROS 12] of the targeted situation, carried out in our domain of multidisciplinary knowledge {K} but in collaboration with the other relevant domains, results in a prescriptive specification {M} of the targeted situation system. It must from now on be expressed in a language that facilitates its translation to the system architect as a response to the requirements specification and its validation by the domain of operational knowledge {k}.
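To make the binary stock-flow semantics concrete, the following minimal Python sketch mimics the discrete-time execution described above; all names, the one-step update rule and the moment at which the biddable phenomenon manifests are our assumptions, not the Stella model itself:

```python
def step(stocks, biddable_active):
    """One discrete-time step of a binary stock-flow model: stocks are 0
    (empty) or 1 (full); a flow simply copies a full source stock downstream."""
    new = dict(stocks)
    # The emission stock fills when the disturbance stock is full
    new["PheE_emission"] = stocks["PheE_disturbance"]
    # The interaction I flows from emission towards perception...
    flow = stocks["PheE_emission"]
    # ...unless the biddable phenomenon blocks it ("non-accumulation")
    new["PheE_perception"] = int(flow and not biddable_active)
    # An accumulated perception commissions the emergency control situation
    new["commissioned"] = new["PheE_perception"]
    return new

stocks = {"PheE_disturbance": 1, "PheE_emission": 0,
          "PheE_perception": 0, "commissioned": 0}
for t in range(3):
    stocks = step(stocks, biddable_active=(t == 1))
    print(t, stocks)   # commissioning is delayed when the biddable phenomenon manifests
```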

Figure 2.8. Stock-flow diagram of the targeted control situation system. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip

2.2.3.3.2. Interdisciplinary specification

In the current state of the modeling-simulation technology previously used, it is not possible to validate {M} by co-execution with all the models of the targeted system specification, as addressed in section 2.4.3.1. In order not to neglect this stage of translation of the interdisciplinary knowledge gained in this prescriptive specification, we translate it into SysML language for system-oriented orchestration. The IBM® Rational® Rhapsody® tool enables us to describe the complete behavior


of the phenomena in the form of a SysML activity diagram (Figure 2.9) that specifies the main part of {M}. Thus, the phenomena of interest are modeled as SysML actors, to which SysML action pins are associated (illustrated by squares framing an arrow) in order to designate them as sources or sinks of the interactions of interest. The interactions are represented by SysML activities to which their source and sink ActorPin phenomena are systematically associated. The dynamics of the targeted situation is represented in the form of a single diagram linking activities and actors that leads to its normal and abnormal operation. All of the operational scenarios are generated in the form of SysML sequence diagrams in order to partially validate this prescriptive specification with the system architect.

In summary, our integration in the system project led us to the prescriptive specification {M}, in terms of phenomena, interactions and causal implications, of the targeted control situation, in order to satisfy the requirements specification {R} required by {k} according to {K, M ⊩ R}. By arguing that its source is in the operational reality of the situation, this specification reveals that it is:

– explicitly the "optative" properties of the emission phenomenon {PheE} that must be verified as a source of controllability of the sensory interaction {I} targeted in this study;

– implicitly under the assertion that the "indicative" properties of the other phenomena are a source of controllability of the contingent interactions by the targeted situation itself... but this is not necessarily true.

Translated mainly into SysML language, this specification enables the system architect to orchestrate the refinement of the interdisciplinary knowledge while keeping the system completeness of this first architecture draft (triads of the conceptagon shown in Figure 2.8). The system architect must then look in greater depth at the source and sink phenomena of the physico–physiological interaction of the targeted sensory perception, in order to pass judgment on, and aid decision-making about, the feasibility of the interactivity that is targeted by the change of system-architecture technology (section 2.4.1.1). In other words, this specification is used as a reference situation system (as-is, Figures 2.7 and 2.8) to move towards the implementation of this new architecture (to-be) in the real situation.


Figure 2.9. SysML activities diagram prescribing the targeted control situation system behavior

2.3. Physiology-centered specification of a sensory perception interaction

In response to the request of the system architect, we present elements of multidisciplinary knowledge {K} that enable an {encoding ⇄ decoding}, from a physiology-centered point of view, of the understanding of the situation system. Our interaction with the architect of the situation system leads us to prescribe a specification of physico–physiological requirements related to the measurable properties to be satisfied. The result of this specification is first a mathematical model, the essentials of which are translated into SysML language in the form of a requirements diagram (Figure 2.15), and then an executable model, presented in section 2.4, to be validated at system level as a constituent of the interdisciplinary knowledge.


2.3.1. Multidisciplinary knowledge elements of a physico–physiological interaction

To address the correct perception of a sound emitted by the alarm towards a human operator, on the one hand, the mathematical theory of integrative physiology was chosen because it is built around the triplet {source, interaction, sink} [CHA 93]. On the other hand, we have relied on work related to perception/action [BER 12].

2.3.1.1. Mathematical theory of integrative physiology

The explanation of the propagation of this physical quantity through different biological structural8 units relies on the mathematical theory of integrative physiology (MTIP9) defined by Chauvet [CHA 93, CHA 95]. This theory, making up a brick of the multidisciplinary knowledge {K} of the physiology-centered engineering solution space, is based on a mathematization whose objective is to build executable models in silico10, describing physiological processes as a combination of non-symmetric and non-local "functional interactions" between structural units located in a hierarchical space (Figure 2.10(a)).

Figure 2.10. Elements of multidisciplinary knowledge of the mathematical theory of integrative physiology

In order to facilitate understanding of the model building of sound sensing by a physiology-centered engineering solution space, we describe in the rest of this

8 Structural unit: physical structure, arrangement of molecules in physical space, for example a neuron located in the brain. 9 http://www.biologie-integrative.com/. 10 Usual expression in “ergonomics” in contrast to in vivo to qualify situations of human modeling.


paragraph the theoretical basis of the MTIP – essential for our work – that the "functional interaction" provides.

The MTIP is a theoretical physiology approach that envisages the living organism in its entirety, that is, as part of a general integrated system. Its mathematization aims to define the integration of interacting biological mechanisms in order to describe the operation of the human system based on its sub-systems, while referring to the laws of physics. Effectively, for Chauvet, even if "biology cannot be reduced to physics. [...] The living organism, in spite of all its difference with non-living matter, is part of the physical universe and must naturally be subject to physical laws. It is therefore difficult to believe the biological world to be devoid of the unity that the wonderful harmony of the laws of the physical universe entices us to seek—the elusive unifying theory" [CHA 04, pp. 11 and 15].

In addition, for him, the organism – that is, all the physiological processes that take place between the various biological structures – is a continuous and finite hierarchical system of structures, as well as a combination of functional interactions between these structures. A functional interaction is considered to be the elementary atom of a physiological process. It is defined as "the action of a biological structure on another" by the intermediary of a physical entity (which may be ions, molecules, photons or, more generally, a certain physical quantity) that thus enables data to be propagated from a structural unit that is a source11 towards a structural unit that is a sink12 (Figure 2.10(a)). This interaction has several specific properties [CHA 93]:

– non-symmetry: the functional interaction acts from a "source" structural unit towards a "sink" structural unit. It represents a unidirectional action; thus, at the same level of organization, the signal will not travel back from the sink to the source;

– event causality: the cause–effect relationship is due to the existence of an event; since this event existed as a cause at a previous instant (t), the effect is going to exist at an instant (t'). This is a non-local, or even remote, effect;

– non-instantaneity: the speed of transport of an elementary function is finite;

– non-locality: an elementary function acts remotely and creates couplings between structures that are far apart. The exchanged product is transported from one

11 Source: structural unit that causes the functional interaction.
12 Sink: structural unit(s) that receives the effect (ions or molecules) emitted by the source.


place to another non-neighboring one by propagating across structural discontinuities13.

13 Structural unit that is the site of a functional process different from that existing in structural units at a higher level. Structural discontinuities are at the origin of the existence of the hierarchical organization.

With this in mind, a functional interaction expresses a mechanism of passing a product between at least two structural units, for example, between the auditory cortex and the cognitive cortex. This passing mechanism depends on both time and space. It can be mathematically represented in the theoretical framework of the MTIP by a differential equation known as a "field equation" that involves non-local field operators called S-Propagators [CHA 02]. This operator enables, if necessary, the exploration of the lower levels of the source and of the sink (Figure 2.10(b) and (c)) that are involved in the propagation of the functional interaction between them, in both the emission and reception phases, by introducing a density of connectivity according to a spatial distribution.

It should be noted, however, that the building of these executable models in the MTIP formalism requires a large amount of data relating to human anatomy, physiology and biochemistry, which are not currently all available, sometimes owing to a lack of technical possibilities in experimentation. Moreover, our previous work has shown that the scale factor required for implementing the MTIP in numerical modeling with the dedicated tool PhysioMatica™ was not necessarily attainable with regard to the available physiological data (e.g. cellular or synaptic density). On the other hand, contextualization with regard to a targeted situation system enables other, more readily available, physiological data to be taken into account in a more macroscopic manner (e.g. the threshold of sound perception). In this sense, we interpret the physico–physiological interaction of sound sensing as a physical interaction, meaning that a physical flow is propagated from a technical "source" element towards a physiological "sink" element as long as the crossed medium does not require a transmutation into biological flows.

2.3.1.2. Functional representation of a sensory interaction of perception/action

As Berthoz points out, "the Amygdala, which is an extraordinarily complex centre (Figure 2.11) [...] immediately attributes a value (danger, pleasant, positive, negative, etc.) to what is observed", while recalling that "in the prefrontal cortex [...] there are two areas, very close to each other, that are respectively


implicated, one in analysis and decisions using information that arrives from the exterior, and the other, very close by, which is implicated in the analysis of information that comes from inside the brain, the memory, the internal information" [BER 12, pp. 116–117].

It is still necessary, in certain situations, to guarantee the correct reception of information by the sensor(s) of the sensory system involved – information that is in fact given in a perceptible form before possibly becoming intelligible – in the form of the required physical quantity (light, sound, etc.), and lastly to have the correct data available and stored in memory, also from a human point of view.

Figure 2.11. Schematic representation of circuits of different sensory perceptions [ROL 06]

Thus, initially to explain this “physiology of perception and action” [BER 12] in a didactic way, we chose the thyristor model as a functional analogy to translate a first gate related to sensory perception and a second gate related to the stimulation of a certain stored and potentially available knowledge [DUP 12] (Figure 2.12(a)).


And by combining it with certain components of the MTIP, we have introduced into this mechanism a necessary but not sufficient condition, relating to the physical quantity required to ensure right perception and to the interaction between the source and the sink.

Figure 2.12. Analogy of the behavior of the functional interaction with a thyristor model

In addition, the MTIP framework describes the physiological mechanisms as a spatio-temporal arrangement of processes based on physiological laws. Thus, the behavior of the living being is explained as a set of functional interactions propagating from one structural unit to another, each becoming source or sink in turn, through a tangible continuum of a physical nature. The thyristor model transposed to a structural unit (Figure 2.12(b)) enables us to understand this sink/source change of state. Effectively, a structural unit that is potentially a "sink" is likely to be able to receive a certain required physical quantity (necessary condition) coming from a "source" structural unit. This condition is far from sufficient: the structural unit must also have a potential "source" stimulated by means of an intermediate mechanism triggered by this initial physical quantity. Once these conditions are met, the "sink" becomes a "source" by propagating a certain physical quantity (similar or not) towards one or more other sinks, or towards itself (e.g. autocrine cellular communication). Depending on the complexified nature of the structural unit and of this "sink becomes source" mechanism, the S-Propagator(s) involved need to be made explicit.

In addition, this analogy has been repeated in the interdisciplinary orchestration (Figure 2.6), in which each engineering asset potentially possesses a certain multidisciplinary knowledge particular to its specific domain (e.g. physiology). Its stimulation requires not only semantic interoperability, in SysML language for example, but also a physical one, for example a verbal exchange, in order to better perceive so as to better understand. It should be noted that this analogy reflects the intrinsic nonlinear behavior behind the reasoning for "thinking and acting as a system".
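Read computationally, the analogy amounts to two cascaded conditions; the sketch below is only our didactic transcription of it (the thresholds, the 0.8 transfer factor and all names are invented for illustration):

```python
class StructuralUnit:
    """Thyristor-like unit: a potential sink becomes a source only when
    (1) it receives a sufficient physical quantity (first gate) and
    (2) its stored potential/knowledge is stimulated (second gate)."""

    def __init__(self, gate1_threshold, has_stored_potential):
        self.gate1_threshold = gate1_threshold
        self.has_stored_potential = has_stored_potential

    def receive(self, quantity):
        gate1 = quantity >= self.gate1_threshold       # necessary condition
        gate2 = gate1 and self.has_stored_potential    # intermediate mechanism
        if gate2:
            # The sink becomes a source and propagates a quantity in turn
            # (the S-Propagator detail of the transfer is left implicit)
            return quantity * 0.8
        return None

auditory_unit = StructuralUnit(gate1_threshold=1.0, has_stored_potential=True)
print(auditory_unit.receive(1.5))  # propagates: both gates open
print(auditory_unit.receive(0.5))  # None: the first gate stays closed
```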


2.3.2. Prescriptive specification of the targeted interaction of auditory perception

We have instantiated the elements of multidisciplinary knowledge {K} described above in order to specify, in SysML language, the physico–physiological requirements {M} of the interaction of auditory perception {I}, in response to a situation system-centered requirements specification {R}, according to the predicate {K, M ⊩ R}.

2.3.2.1. Multidisciplinary physico–physiological knowledge elements

With regard to the MTIP, the functional interaction {I} of auditory perception in humans is carried by a sound wave, which is a physical quantity related to the propagation of mechanical vibrations [MÜL 12]. By instantiating Figure 2.10(b) and (c) to the particular situation of this auditory perception, we introduce the binaural dimension of human hearing, which manifests itself by the presence of two external ears and makes it possible to determine the spatial origin of the sound source (Figure 2.13(a)).

Figure 2.13. Elements of understanding of the operational interaction of targeted auditory perception. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip


For the representation of the auditory perception interaction {I} targeted by the situation system, we consider, by analogy with the thyristor model, a sensory potential "sink" and a potential "source" of perception. In the following, we focus on one of the two hearing organs at the cellular level, without having to descend further into the biological organization to prescribe the first physico–physiological requirements (Figure 2.13(b)).

Sound waves reaching the ear through the external auditory canal cause the eardrum to vibrate. These vibrations produce waves of very low amplitude in the inner ear, which stimulate the ciliated cells located at the surface of the basilar membrane. These ciliated (hair) cells are the primary hearing receptors (analogous to the photoreceptors in the eye). The oscillations of the basilar membrane cause action potentials to be emitted by these cells, which activate the fibers of the auditory nerve that innervate the cochlear nucleus of the brain stem. The ascending fibers reach the auditory cortex after relaying in the inferior colliculus and in the medial geniculate body of the thalamus (Figure 2.14(a)).

Figure 2.14. Elements of understanding that center the interaction of sound sensing in the human auditory domain [BEA 96, GAZ 00]. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip

Figure 2.14(b) represents the delimitation of the area of good human14 auditory perception as a function of the sound frequency, within a spectral range between 16 Hz (below which lies infrasound) and 20 kHz (above which lies ultrasound), and of its sound intensity expressed in decibels (dB) [GOL 09].

14 http://sound.westhost.com/articles/fadb.htm.

2.3.2.2. Interdisciplinary specification of physico–physiological requirements

From this multidisciplinary knowledge (Figure 2.14(b)), the physiology-centered engineering solution space derives a new measurable requirement {R} characterizing the targeted interaction {I} by the sound intensity of the signal, expressed in decibels, according to the inequalities shown in Figure 2.15. In this case, HearingRangesMin(w) and HearingRangesMax(w) are two mathematical functions returning, respectively, the minimum and maximum values of the sound intensity defined by the human auditory domain as a function of the sound signal frequency {w}, assuming, for reasons of reasonable simplification, that the latter is a pure sound, characterized by a single frequency [MÜL 12].
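Stated compactly (our transcription; the chapter expresses this as inequalities in the requirements diagram of Figure 2.15, and the symbol I(w) for the received intensity is ours), the requirement reads:

```latex
\mathrm{HearingRangesMin}(w) \;\le\; I(w) \;\le\; \mathrm{HearingRangesMax}(w)
```

where I(w) denotes the sound intensity, in dB, received at the operator's ear for a pure sound of frequency w.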

Figure 2.15. Diagram of physiological requirements contextualizing the interaction of targeted auditory perception

In addition, using this requirement {R}, the physiology-centered engineering solution space derives a new requirement {R} (Figure 2.15) that specifies, depending on the signal frequency {w}, the minimum duration of the sound signal to be emitted in order to guarantee "right" perception by a human "sink". Finally, in order to facilitate the "right" auditory sensing of the sound alarm, the physiology-centered engineer further recommends that the binaural sub-system (the


two ears, Figure 2.13(a)) of the operator should be aligned with the sound source – a recommendation that is anthropometric in nature – thus introducing a new requirement {R} (Figure 2.15).

This "source/sink" constituent contextualizes the interaction of auditory sensing of a pure sound (and, more generally, of a complex sound signal resulting from the superposition of pure sounds) by mathematically defining {M} the dependence of the sound intensity on the sound pressure, according to Müller and Möser [MÜL 12]:

$L_I = 10\,\log_{10}\left(\frac{I}{I_0}\right) = 20\,\log_{10}\left(\frac{P_{rms}}{P_0}\right)$   [2.7]

$P_{rms}^2 = \frac{1}{\Delta t}\int_{t}^{t+\Delta t} P^2(r,t)\,\mathrm{d}t$   [2.8]

$I = \frac{E}{4\pi r^2} = \frac{P_{rms}^2}{\rho_0\,c}$   [2.9]

with the parameters {Prms = effective (rms) pressure; P = sound pressure, which depends on the coordinates in space {r} and in time {t}; E = sound power of the sound source; c = speed of sound; ρ0 = density of air; r = source–sink distance; I0 and P0 = reference intensity and pressure}. It should be noted that the parameters {c} and {ρ0} depend on the temperature, the humidity and the atmospheric pressure in the medium of sound propagation. In addition, equation [2.9] shows that the sound intensity diminishes as a function of the distance from the source. In this sense, the sound interaction appears strongly contextualized, since it depends on parameters related to the medium of sound propagation. This is all the more true in an industrial environment (such as the control environment of an electrical power plant) where several sound sources can be superimposed. Referring to the standards in effect, such as NF S32-001 (1975), the physiology-centered engineering solution space derives a new requirement {R} specifying that the intensity of a sound alarm must be at least 10 dB higher than the ambient sound level.

Taking into account the two previous specifications, this "source" component must satisfy, among other things, the requirement {R} relating to the quantity of pressure wave required to detect a sound; the physiology-centered engineering solution space derives a new requirement {R} that constrains the technical "source" object with regard to the constraints of the operational environment, according to the law defined by Müller and Möser [MÜL 12].
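As a numerical companion to equations [2.7]–[2.9] and to the 10 dB emergence rule of NF S32-001, the following sketch (Python; the constants and function names are ours, and free-field spherical propagation is assumed) estimates the level received by the "sink" and checks audibility:

```python
import math

P0 = 2e-5      # reference pressure (Pa), threshold of hearing
RHO0 = 1.2     # density of air (kg/m^3), temperature-dependent
C = 343.0      # speed of sound (m/s), temperature-dependent

def received_level_db(sound_power_w, distance_m):
    """Sound pressure level at the sink, assuming free-field spherical
    spreading, as a simplification of [MUL 12]."""
    intensity = sound_power_w / (4 * math.pi * distance_m ** 2)   # [2.9]
    p_rms = math.sqrt(intensity * RHO0 * C)                       # from [2.9]
    return 20 * math.log10(p_rms / P0)                            # [2.7]

def alarm_audible(sound_power_w, distance_m, ambient_db):
    """NF S32-001-style rule: the alarm must exceed ambient level by >= 10 dB."""
    return received_level_db(sound_power_w, distance_m) >= ambient_db + 10

print(received_level_db(1.0, 5.0))     # a 1 W source at 5 m: about 95 dB
print(alarm_audible(1.0, 20.0, 75.0))  # False: margin not met over a 75 dB floor
```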


All these physiology-centered engineering requirements define a first model of the physico–physiological interaction of sound sensing of an alarm, and can thus be prescribed to the problem space of systems engineering in the form of a requirements diagram for validation. However, this first descriptive level of modeling is not sufficient for our heuristics of system co-specification, which requires prescribing a constitutive model capable of an overall execution with the other models.

2.4. System-centered specification of an interaction of sensory perception

The prescriptive specification {M} of the targeted control situation system enhances the system architect's knowledge to orchestrate the architecting specification of a respondent system of interest {S} towards the targeted technological evolution (Figure 2.16, right). {M} translates de facto the response that {S} must provide to control the phenomenological causes and effects of interactions between the constituent objects of interest of the bounded situation system. In other words, this specification prescribes the {S} behavior that the implemented artifact {A} will have to provide to meet the required evolution (as-is/to-be). Although this early designation is a draft, the system architect has a better knowledge of what must be aimed for in architecting {S} within its bounded surroundings, and can check quality rules [FAN 12] – especially the measurability of properties for verification purposes – on critical requirements and constraints from multiple stakeholders.

This enhanced knowledge enables the system architect to orchestrate the interdisciplinary feasibility, which takes shape in our case study as an assembly of executable multidisciplinary models that is validated in silico to satisfy {M} (Figure 2.1, yellow bus) and then in situ by networking with the control situation of our experimentation platform (Figure 2.1, gray bus). We point out the particular role of the orchestration model {M} that expresses the intention of the system architect [RET 15] to satisfy the specification {M} of the control situation system with regard to the profession, while taking into account the engineering and operation specifications.

2.4.1. System-centered architecting specification of the targeted auditory interaction

Figure 2.16 illustrates this architecting intention by applying an automation pattern (on the right) to the targeted system architecture (on the left) that aims for a change of socio-technical paradigm (Figure 2.2). The studied sensing specification


{M} of a "human-centered intelligent measurement" results from the allocation of the previous mathematical model {M} to a logical architecture (Figure 2.3), at a step of the refinement process where architecting alternatives are assessed. We note that this coupling must keep the system togetherness throughout successive system-architecting refinements (illustrated by suitable triads of the conceptagon).

Figure 2.16. Situation system-centered architecting specification of an interactive-aided control system. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip

2.4.1.1. Multidisciplinary knowledge elements in system architecting

A major system requirement, drawn from experience feedback on the increasing digitization of control rooms, involves improving the "active" monitoring of the targeted process. It becomes necessary to further aid field operators to easily and rapidly update, anywhere and anytime, a system representation of the whole process – that is to say, to "perceive right" in order to "understand right" for "acting right". With this in mind (Figure 2.16, left), the "man–machine interaction" overcomes the classic framework of the "interface" in order to facilitate multiple agents' "interactivity", not only with hybrid analog and digital constituents, but also with other tangible elements such as the flowing "matter-energy".

This architecting paradigm change for distributed automation was initiated by an important process industry-led European R&D program in the late 1980s. From the technological opportunity to unify a continuum of data communication around a real-time field bus, these works demonstrated the interest of distributing a form of "technical intelligence" as close as possible to the physical process in order to filter sensing and actuating through instrumentation feedback.

The resulting architecting pattern is an integrated control, maintenance and technical management system (CMMS) allocating intelligent actuation and measurement functions (IAMS) to distributed intelligent actuators and sensors


[PÉT 98]. In pursuing this work, advantage has been taken of digital technology by augmenting the "hardware–software" interoperability of the instrumentation towards a certain ambient "human–artifact" interactivity, in order to filter actions and observations to/from the physical process. A service-information bus that encapsulates the matter-energy flow through a data-driven channel thus aims to increase the control capacities of field operators by mirroring the whole process behavior in the best way possible. This interactivity surpasses the interface {I} by more broadly contextualizing the surrounding reality in order to better perform local decision-making. The control architecting pattern we targeted aims to designate the endogenous and exogenous phenomenological sources of interactions that the respondent system must control or contain.

2.4.1.2. Control-centered architecting specification

We note that control is one – but no more than one – element of the {communication, command, control} triad of the conceptagon (Figure 2.17, right) that distinguishes what is required externally from what is put in place to act internally. This is why an intermediary refinement first leads to a control logical architecture over the power-oriented process (Figure 2.17, left), without specification of the physico–artifactual transmutation – the instrumentation – between the physical process and the logical control, which reflects the essentials of the internal circular causality {I, I}.

The process logical model {M} is a multiphysical representation of the flow of energy-matter (here reducible to water), in order to specify not only the endogenous phenomena to be controlled but also the exogenous ones that must be contained. This specification in Modelica language can be executed with the Dymola® tool from an event-driven system orchestration in SysML language. The control logical model {M} is a computational representation of an intelligent proportional, integral and derivative corrector (i-PID) coupling an ultra-local process model with a power command PID model. Our interest in this type of command, in comparison with a classic command, is to adjust, during operation, the difference between the process under control and its ultra-local model, in compliance with the monitoring requirements of our CMMS-IAMS architecting pattern. Another advantage for a functional specification is that the implementation of this technique does not require an a priori organic (ontological) identification. This prescriptive specification in Simulink® language is executable with the Matlab® tool from an event-driven system orchestration in SysML language, although certain computational blocks must remain confidential15 "black-boxes", as is usual in industrial practice. The result of this prescriptive control artifact specification {M} of the required system of interest {R} is a set of executable multi-models that temporarily satisfy: {K, M ⊩ R}.

15 ALIEN: algebra for digital identification and estimation: http://alien-sas.com/csm-et-mfc/.

Figure 2.17. Control-centered architecting specification refinements. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip
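As an aside on the i-PID principle, the following sketch illustrates model-free control in the spirit of the ALIEN work cited in the footnote: an ultra-local model dy/dt = F + alpha*u is re-identified at every step, so no a priori process model is needed. The gains, the first-order setting and the toy plant are our assumptions, not the confidential industrial blocks:

```python
def ipid_step(y, y_prev, u_prev, y_ref, dt, alpha=1.5, kp=2.0):
    """One step of an 'intelligent' proportional corrector (iP), the simplest
    member of the i-PID family: the ultra-local model dy/dt = F + alpha*u is
    re-identified at each step instead of relying on an a priori process model."""
    dy = (y - y_prev) / dt
    f_hat = dy - alpha * u_prev                   # estimate of all unmodeled effects
    return (-f_hat - kp * (y - y_ref)) / alpha    # cancel f_hat, close on the error

# Toy first-order plant dy/dt = -y + 2u, never identified by the controller
# (alpha = 1.5 is only a rough guess at the true control effectiveness of 2)
y, u, dt = 0.0, 0.0, 0.01
for _ in range(500):
    y_new = y + dt * (-y + 2.0 * u)   # plant step, unknown to the controller
    u = ipid_step(y_new, y, u, y_ref=1.0, dt=dt)
    y = y_new
print(round(y, 3))                     # settles near the 1.0 set-point
```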

The whole system orchestration process developed by Bouffaron [BOU 16] leads, by successive refinements, to the specification, among many others, of an executable model of "intelligent measurement" {M}. The implementation pattern (Figure 2.17, right) is a constituent of a control system architecting pattern that is composed of other constituents of observation, decision and action. Each of these constituents has, in a recursive manner, a "cognitive" or computational model of representation of its surroundings with regard to the required mission, for example, "to observe". This is a holonic interpretation [BOA 09a] that aims to specify, in a recursive manner, that each {holon} performs and reports on the requested mission while keeping its internal structure operational. These bio-inspired principles of system architecting and the associated artificial modeling rules are applied, for example, in the HMS (holonic manufacturing system) architecting paradigm to design for the unexpected [VAL 17], where it is argued to be necessary to update an executable model that reflects "on the fly" the surrounding and changing world of interest {WoI}. Each of the models is intended to be interoperable, to facilitate its system coupling on a simulation bus, where the overall harmony (togetherness) is ensured by the SysML orchestration model. We give more precise details in section 2.4.3 of the system validation of the targeted measurement executable model.


2.4.2. Sensing-centered specification of the targeted auditory interaction

The allocation of the mathematical specification {M}, guided by the intelligent measurement pattern, yields a specification {M} in the form of a Matlab® Simulink® diagram enabling verification by model execution. It should be noted that this alternative "human-in-the-loop"-centered measurement must then be validated in silico, in the form of scenarios, with the system architect before system validation in situ by operational engineering, in order to ensure the targeted safety function.

2.4.2.1. Model-based prescriptive specification

The diagram of this executable specification consists of three blocks, each satisfying the architecting requirements of a logical {source/interaction/sink} partition of the functional interaction {I}.

The technical block that is the source of sound emission is characterized by the position (X_Source, Y_Source) of the alarm. The sound emitted by the technical source is characterized by its frequency (Source_Frequency_Value) and its power (Power_Value). The switch block enables the activation or deactivation of the sound emission according to the value of the variable "AlarmReq" (Figure 2.18(a)).

The physiological "sink" block is characterized by the position (X_Sink_Value, Y_Sink_Value) of the human, as well as by a sub-function "fncPerception" whose objective is to calculate the correct sensing (or not) of the sound alarm (Boolean perception), taking into account the frequency (Sound_Frequency) and the power (Physiological_Power) of the sound source, as well as the background noise (Background_Noise) (Figure 2.18(c)). In more detail, this "fncPerception" function concretizes the physiological requirement {R} by comparing the result of the calculation of the sound intensity received in the auditory canal with the upper limit (HearingRangeMax(w)) and the lower limit (HearingRangeMin(w)) of the human hearing range (Figure 2.18(d)).

The "source/sink" interaction block (Figure 2.18(b)) formalizes the propagation of the sound wave through the contextualized medium of the targeted situation. In this sense, it takes into account the speed (or celerity {c}) of sound, varying as a function of the temperature and of the density of the propagation medium (rho), as well as the distance between the technical "source" element (X_Source, Y_Source) and the human "sink" element (X_Sink, Y_Sink), to evaluate the amplitude of the source signal at the level of the sink. We particularly understand its importance for an in situ validation.
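Outside Simulink, the logic of the "fncPerception" block can be paraphrased as follows; this is our reconstruction, with the two hearing-range bounds reduced to crude illustrative stand-ins:

```python
import math

def hearing_range_min(freq_hz):
    """Crude stand-in for HearingRangeMin(w): the hearing threshold rises
    towards both ends of the 16 Hz - 20 kHz band (illustrative values)."""
    if not 16 <= freq_hz <= 20_000:
        return math.inf
    return 60.0 if freq_hz < 100 else 0.0

def hearing_range_max(freq_hz):
    return 120.0   # rough stand-in for the pain threshold (dB)

def fnc_perception(sound_frequency, physiological_power_db, background_noise_db):
    """Boolean perception: the signal must lie inside the hearing range AND
    be at least 10 dB above the background noise (section 2.3.2.2)."""
    in_range = (hearing_range_min(sound_frequency)
                <= physiological_power_db
                <= hearing_range_max(sound_frequency))
    emerges = physiological_power_db >= background_noise_db + 10.0
    return in_range and emerges

print(fnc_perception(1_000, 80.0, 60.0))   # True: audible and emergent
print(fnc_perception(25_000, 80.0, 60.0))  # False: above the auditory domain
```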


Figure 2.18. Blocks of the Simulink diagram centering the interaction of sound sensing in the human auditory range

2.4.2.2. Executable model-based verification

The resulting diagram enables the feasibility of a specification of human-centered intelligent measurement {M} to be verified by model execution according to different test scenarios.

The first test scenario consists of verifying that the sound power carried by the interaction and received by the human "sink" block decreases as a function of the distance from the technical "source" block. This verification shows that, for a constant emission power and frequency, the sound power received by the "sink" block decreases as a function of the distance between the two blocks (Figure 2.19(a)). It should also be noted that below a certain threshold (sound power < 11 W), the sound alarm is no longer detected by the human "sink" block.


Figure 2.19. Scenarios of testing “human-centered intelligent measurement”. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip

The second test scenario consists of verifying that the sensing of the sound signal by the human block depends on the human auditory domain. By observing the execution trace (Figure 2.19(b)), we note that for a sound signal frequency higher than 20 kHz, alarm sensing is no longer guaranteed. This corresponds to the expected results, since the signal is no longer included in the human auditory domain.

The last test scenario that we present consists of verifying that the background noise is taken into account in the interaction model of sound sensing. We observe in the execution trace (Figure 2.19(c)) that for a background noise more than 10 dB higher than the noise emitted by the sound source, the sensing of the alarm by the human block is no longer guaranteed. In this sense, the property of emergence of the alarm above the background noise is taken into account in the constituent interaction model.

We then present the in silico and then in situ system validation of this verified specification {M}, orchestrated in SysML language.

2.4.3. System-centered sensing specification of the targeted auditory interaction

The system validation results from an in silico and in situ orchestration process of multidisciplinary executable models (model-in-the-loop) of the targeted control


situation system (system-in-the-loop), supported by collaborative working technologies, for which we give the essential knowledge elements in an initial step. We then illustrate the implementation of one of these technologies for the system validation of the executable specification model of our case study, inferring a collaborative modeling continuum between the usual domains of technical-centered and human-centered engineering, so as to really contribute to system-centered engineering.

This technological choice takes into account our working hypothesis, which favors the execution of the specialist (multidisciplinary) models within their respective modeling environments. Beyond a system interoperability language such as SysML, which allows the specialist engineers to exchange with each other, a hypothesis of this kind requires a system-centered SysML model {M} enabling a whole "orchestral score". It is not a case of building a whole model that encapsulates all the specialist knowledge models, but of translating only the information required to statically architect their interfaces (internal block diagram) and to dynamically allocate the system functions to the specialist models (activity diagrams, such as in Figure 2.9, that initiate the essential orchestral score to be executed).

2.4.3.1. Architecting knowledge elements in executable model-based specification

This cognitive work of multidisciplinary knowledge orchestration is now performed in a simplified way thanks to the increasing maturity of new digital modeling and simulation technologies that aim to ensure a continuum of information between the various engineering domains through integrative platforms. The latter intervene both in the implementation of the system specification process (co-engineering) and in the validation of the system specification by execution of models (co-execution).

Collaborative engineering platforms now offer various levels of system integration, the choice of which depends on a trade-off that takes into account engineering, industrialization, costs, training, automatization, etc. A first level of integration is ensured by collaborative platforms that enable the multidisciplinary knowledge to be structured in a shared directory and the engineering knowledge in specific directories. This collaboration is also supported by different services, such as messaging, file sharing and scheduling, provided by these various platforms16.

A second level of integration is ensured by PLM (product life cycle management) platforms, which manage and share all the engineering and operational

16 http://www-01.ibm.com/software/fr/lotus/; http://products.office.com/en-us/sharepoint/collaboration; https://www.eroom.net/.


data of a system throughout its life cycle. These platforms17 provide capabilities for requirements and technical data management, visualization of digital mockups, configuration and change management, manufacturing process management, quality and project management, etc. In this context, we note the definition of a standard for multidisciplinary integration between various engineering tools: OSLC (open services for lifecycle collaboration) [OAS 10] for sharing engineering data.

Of these two solutions, we have retained the collaborative workspace implemented with the Quickplace tool integrated into IBM Collaboration Solutions. Indeed, the implementation of PLM platforms requires formalizing the processes and engineering data exchanged at the interfaces of the specialist engineering domains, whereas the deployment of the chosen solution requires little investment and configuration, thus facilitating rapid prototyping of the implementation of our system orchestration process between multidisciplinary knowledge domains, themselves shared between a "solution space" and a "problem space" [MOR 14].

This collaborative engineering implies the implementation of an architecting approach for verification and validation at system level, in relation to the various engineering domains required for model analysis. Thus, in the context of our case study of the physico–physiological interaction of auditory perception, we have focused on the execution of models to reach this objective of interoperation, as recommended by Boy and Narkevicius [BOY 14]. Among the techniques of model execution, we focused initially on full integration, which consists of capturing all the specialist models within the same tool in order to execute the global behavior of the system [LIE 13]. Although this technique facilitates the integration of the specialist models between themselves, it also limits the choice of tools, languages, engineering methods, etc., which is contrary to our working hypothesis. We then focused on "co-simulation", which consists of executing all the specialist models in each of their dedicated tools. These models interoperate with each other around a co-simulation bus in charge of orchestrating the exchange of data between them. In this context, the FMI (functional mockup interface) standard opens up perspectives, from an industrial point of view, for the exchange and co-simulation of models between different engineering tools [BER 14].

At the beginning of our work, the definition of the FMI standard was still at an early stage; for this reason, we focused our attention on a proprietary solution that enables

17 http://www.3ds.com/products-services/enovia/; https://jazz.net/; http://www.plm.automation.siemens.com/fr_fr/.


our system modeling tool (IBM® Rational® Rhapsody®), implementing the SysML language, to be interconnected with other engineering tools. Our choice then turned to the CosiMate® co-simulation bus18, which also has the advantage of allowing us to develop our own coupling modules. We have thus been able to design a module between the co-simulation bus and the OPC (OLE for process control) server of our experimentation platform, for validation (system-in-the-loop) of the control-command artifact part of our case study in an operational situation.

18 http://site.cosimate.com/.

2.4.3.2. Executable model-based validation

The model of human-centered intelligent measurement is orchestrated in silico (Figure 2.20), in a first step by the system architect, through the co-simulation bus for partial system validation.

Figure 2.20. Structural model of orchestration of the specification of human-centered intelligent measurement

A system orchestration model {M} completes the SysML translation of the interfacing blocks with the control models {M} of the process {M} in order to define


a structural model of the control artifact {M} [PÉT 06] (Figure 2.17, left). The control specification is partially validated – in an open loop with regard to the situation system – by the SysML translation of the block {M} that enables the control procedures to be executed in compliance with the specification of the required system according to: {K, M ⊩ R}. We note that this orchestration model also constitutes a specification for the configuration of the system co-simulation environment.

This architecting process is iterative and by nature leads us to refine the models so as to converge towards a common and consistent definition of the flows exchanged between the various specialist models. Thus, we have defined new interfaces for the operational model of the situation: {Background_Noise}, which defines the background noise linked to the operation of the experimentation platform, and {Medium_Density}, which characterizes the volumetric mass of air and consequently the speed of sound propagation {Sound_Celerity} in this environment. Other interfaces enable the definition of the order {AlarmReq} for the commissioning of the technical sound agent (control model), or emulate in the process model a physical datum for the presence or absence of a leak {Water_Presence} that describes an incidental control situation.
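CosiMate's actual API is not reproduced here; the toy bus below only illustrates the kind of variable exchange that the orchestration model configures between the process, control and measurement models (everything except the five interface names is invented):

```python
class CoSimBus:
    """Toy stand-in for a co-simulation bus: models publish and read
    shared interface variables at each orchestration step."""
    def __init__(self):
        self.signals = {}
    def publish(self, name, value):
        self.signals[name] = value
    def read(self, name, default=None):
        return self.signals.get(name, default)

bus = CoSimBus()

# Process model side (e.g. Dymola): emulate an incidental situation
bus.publish("Water_Presence", False)    # leak -> water supply missing
bus.publish("Background_Noise", 75.0)   # dB, platform operation noise
bus.publish("Medium_Density", 1.2)      # kg/m^3; conditions the celerity
bus.publish("Sound_Celerity", 343.0)    # m/s in this medium

# Control model side (e.g. Simulink): commission the technical sound agent
if not bus.read("Water_Presence"):
    bus.publish("AlarmReq", True)

# Measurement model side: reads AlarmReq and the acoustic context
print(bus.read("AlarmReq"), bus.read("Background_Noise"))
```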

Figure 2.21. Trace of execution of the scenarios of system validation in silico of the executable specification of the targeted auditory interaction. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip


This in silico system validation then enables testing some technical alternatives for an alarm according to the emulated control situation system {K, M ⊩ R}. Let us consider two scenarios (Figure 2.21) whose objective is to validate the power of the technical alarm for a given frequency (20 Hz) so that the human agent can detect the auditory alarm "correctly". In scenario 1, we can observe that, in the specified control system situation, the power of the auditory alarm is not sufficient to be perceived by the human agent (perception). In scenario 2, the power of the alarm is increased by 0.5 W, which enables the human agent to perceive the auditory signal.

The choice of alarm of the second scenario is finally validated in situ in the operational control situation emulated by our experimentation platform. For example, the sound of the water supply pumps or that of the instrumentation valves confirms in reality the validation of the precondition of auditory perception according to {K, CISPI ⊩ R}.

However, this exploratory work, complementary to that of Lieber [LIE 13], raises the question of the in silico system validation of the model with regard to all the human factors encountered in a given system situation and not taken into account in the executable specification. Other works have demonstrated that our co-simulation environment could support an iterative specification approach that includes in situ experimental designs in a complementary or alternative manner to the in silico modeling of the targeted sensory interaction. This has been made possible by the development of an "auditory interaction" model immersed in the control system situation of our experimentation platform process and orchestrated, via the co-simulation bus, by the specialist models of the control artifact. The execution of the experimental designs is then done in an operational environment that comes as close as possible to reality and in which we can vary various factors (sound power and frequency, atmospheric noise, age of individuals, position in space, etc.) in order to evaluate their influence on the correct sensing of the sound alarm. However, we note that the in silico model enables us to target, in addition to fractional experimental design techniques [TAG 87], certain relevant system interaction parameters in order to implement the right strategy to minimize the number of tests carried out while simultaneously maximizing the number of factors studied.
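As an illustration of this test-minimization strategy, a half-fraction 2^(4-1) factorial design covers four two-level factors in eight runs instead of sixteen; the factor names and level values below are placeholders, not the chapter's experimental plan:

```python
from itertools import product

# Two levels per factor; the values are illustrative only
levels = {
    "power_w":      (0.5, 1.0),
    "frequency_hz": (500, 2000),
    "ambient_db":   (60, 80),
    "distance_m":   (5, 20),
}
names = list(levels)

# Half-fraction 2^(4-1) design: the 4th factor is aliased onto the
# three-way interaction of the first three (generator D = ABC)
runs = []
for a, b, c in product((0, 1), repeat=3):
    d = a ^ b ^ c                # defining relation D = ABC in 0/1 coding
    coded = (a, b, c, d)
    runs.append({n: levels[n][x] for n, x in zip(names, coded)})

for run in runs:
    print(run)                    # 8 test configurations instead of 16
```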


system-centered specification that satisfies a critical control situation. We have insisted on the system-centered phenomenological designation of this situation [MAY 18] as the early draft of the design of a system architecture by interdisciplinary orchestration of the coupling to reality of the multidisciplinary knowledge that is usually partitioned into technical-centered and human-centered domains. We note that the targeted system architecture is a constituent – to a certain extent hidden – of the broader specified situation system, so that we argue that this phenomenological approach could be an alternative way towards system-centered architecting – the system-concept step – but this must still be proved by benchmarking at a practical scale.

We also argue that other ongoing works pursuing the physico–physiological analogy with a "thyristor pattern", as a tangibility prerequisite to perception (sensing), can contribute to system togetherness and interdisciplinary harmony between the natural, human and technical domains, as is the case for the cyberphysical paradigm.

More generally, the orchestration process based on collaborative work technologies opens up perspectives of simplex organization of a project system that partially responds to the digitalization requirement expressing the "Twin concept" of the new Industry 4.0 era. Educational feedback already demonstrates its complementary interest in the deployment of collaborative learning technologies [DIL 11], even in blended learning by doing (flipped pedagogy) and by first perceiving a reality. We also note that this system-centered orchestration of interdisciplinary knowledge requires new skills, as a supplement to the traditional systems engineering corpus and to how it is taught. {Encoding ⇄ decoding} a complex situation of interest requires the adoption of a systems thinker's attitude in order to be able to look outward as well as inward, for example as a supplement to analytical diagnostic techniques [LEM 13]. Virtualizing an interdisciplinary co-specification based on executable models requires an architecting modeler's service to master collaborative simulation technologies. Nevertheless, the resulting in silico product does not provide exemption from a tangible representation of the acquired interdisciplinary knowledge for translation to design, nor from a tangible validation in situ that complies with a TRL scale for mature results integration.

Lastly, we believe that the orchestration of the essential content of the building of interdisciplinary knowledge remains to be explored, to ensure the tangible harmony (togetherness) of all the multidisciplinary knowledge assets of a project system, in order to minimize the individual cognitive overload, for example, to meet the industrial symbiosis of Zhang [ZHA 15].


2.6. References

[ALL 16] ALLEGRO-DANIEL B., SMITH G.R., Exploring the Branches of the Systems Landscape, Les Éditions Allegro-Daniel Brigitte, 2016.
[APP 98] APPELL B., CHAMBON Y., “Procédures de conduite et interface homme-machine”, Techniques de l’ingénieur Génie nucléaire, no. BN3421, pp. 1–12, 1998.
[BEA 96] BEAR M., PARADISO M., CONNORS B., Neuroscience: Exploring the Brain, Williams & Wilkins, Baltimore, 1996.
[BER 68] BERTALANFFY L., General System Theory: Foundations, Development, Applications, Braziller, New York, 1968.
[BER 06] BERTHOZ A., PETIT J.-L., Phénoménologie et physiologie de l’action, Odile Jacob, Paris, 2006.
[BER 09] BERTHOZ A., La simplexité, Odile Jacob, Paris, 2009.
[BER 12] BERTHOZ A., “Bases neurales de la décision. Une approche de neurosciences cognitives”, Annales Médico-psychologiques, vol. 170, no. 2, pp. 115–119, 2012.
[BER 14] BERTSCH C., AHLE E., SCHULMEISTER U., “The functional mockup interface – seen from an industrial perspective”, Proceedings of the 10th International Modelica Conference, no. 96, pp. 27–33, 2014.
[BKC 17] BKCASE EDITORIAL BOARD, Guide to the Systems Engineering Body of Knowledge, 2017, available at: www.sebokwiki.org.
[BOA 08] BOARDMAN J., SAUSER B., Systems Thinking: Coping with 21st Century Problems, CRC Press, Boca Raton, 2008.
[BOA 09a] BOARDMAN J., SAUSER B., VERMA D., “In search of systems ‘DNA’”, Journal of Computers, vol. 4, no. 10, pp. 1043–1052, 2009.
[BOA 09b] BOARDMAN J., SAUSER B., JOHN L. et al., “The conceptagon: a framework for systems thinking and systems practice”, Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, pp. 3299–3304, 2009.
[BOA 13] BOARDMAN J., SAUSER B., Systemic Thinking: Building Maps for Worlds of Systems, Wiley, Hoboken, 2013.
[BOU 16] BOUFFARON F., Co-spécification système exécutable basée sur des modèles. Application à la conduite interactive d’un procédé industriel critique, PhD thesis, University of Lorraine, 2016.
[BOY 11] BOY G.A., The Handbook of Human-machine Interaction: A Human-centered Design Approach, Ashgate Publishing, Farnham, 2011.
[BOY 14] BOY G.A., NARKEVICIUS J.M., “Unifying human centered design and systems engineering for human systems integration”, in AIGUIER M., BOULANGER F., KROB D. et al. (eds), Complex Systems Design & Management, Springer, New York, pp. 151–162, 2014.


[BOY 15] BOY G.A., “On the complexity of situation awareness”, IEA Congress, Melbourne, Australia, 2015.
[CHA 93] CHAUVET G., “Hierarchical functional organization of formal biological systems: a dynamical approach. III. The concept of non-locality leads to a field theory describing the dynamics at each level of organization of the (D-FBS) sub-system”, Philosophical Transactions of the Royal Society of London B: Biological Sciences, vol. 339, no. 1290, pp. 463–481, 1993.
[CHA 95] CHAUVET G., La vie dans la matière : le rôle de l’espace en biologie, Flammarion, Paris, 1995.
[CHA 02] CHAUVET G., “On the mathematical integration of the nervous tissue based on the S-propagator formalism I: Theory”, Journal of Integrative Neuroscience, vol. 1, no. 1, pp. 31–68, 2002.
[CHA 04] CHAUVET G., The Mathematical Nature of the Living World: The Power of Integration, World Scientific, Singapore, 2004.
[CHÉ 12] CHÉRIAUX F., GALARA D., VIEL M., “Interfaces for nuclear power plant overview”, 8th International Topical Meeting on Nuclear Plant Instrumentation, Control, and Human-Machine Interface Technologies (NPIC & HMIT 2012), San Diego, USA, 2012.
[CLA 10] CLANCHE F., GOUYON D., DOBRE D. et al., “Plateforme pour la conduite interactive et sûre”, 3e Journées Démonstrateurs en Automatique, Angers, France, p. 9, 2010.
[CLO 15] CLOUTIER R., SAUSER B., BONE M. et al., “Transitioning systems thinking to model-based systems engineering: Systemigrams to SysML models”, IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 45, no. 4, pp. 662–674, 2015.
[DEV 95] DEVLIN K., Logic and Information, Cambridge University Press, Cambridge, 1995.
[DIL 11] DILLENBOURG P., Trends in orchestration, Second research & technology scouting report, STELLAR Deliverable D1.5, 2011.
[DOB 10] DOBRE D., Contribution à la modélisation d’un système interactif d’aide à la conduite d’un procédé industriel, PhD thesis, University of Lorraine, 2010.
[DUC 96] DUCROCQ A., “Le concept de sémantiel et phénomène d’émergence”, Symposium ECHO, Amiens, France, 1996.
[DUP 12] DUPONT J.-M., LIEBER R., MOREL G. et al., Spécification d’un processus technico-physiologique de perception de fermeture et verrouillage d’un capot moteur en situation de maintenance aéronautique, Airbus-CRAN-LORIA Research Report, 2012, available at: https://hal.archives-ouvertes.fr/hal-00769223.
[END 16] ENDSLEY M.R., Designing for Situation Awareness: An Approach to User-Centered Design, CRC Press, Boca Raton, 2016.
[FAI 12] FAISANDIER A., Ingénierie et Architecture des Systèmes pluridisciplinaires, Sinergy’Com Editions, Lezennes, 2012–2018.


[FAN 12] FANMUY G., FRAGA A., LLORENS J., “Requirements verification in the industry”, in HAMMAMI O., KROB D., VOIRIN J.L. (eds), Complex Systems Design & Management, Springer, Berlin, 2012.
[FOR 61] FORRESTER J.W., Industrial Dynamics, MIT Press, Boston, 1961.
[FOR 94] FORRESTER J.W., “System dynamics, systems thinking, and soft OR”, System Dynamics Review, no. 10, pp. 245–256, 1994.
[FRA 14] FRAISOPI F., “La simplexité, ou ce que les modèles n’arrivent pas à saisir”, in BERTHOZ A., PETIT J.L. (eds), Complexité-Simplexité, Collège de France, Paris, 2014.
[GAL 99] GALARA D., HENNEBICQ J.P., “Process control engineering trends”, Annual Reviews in Control, vol. 23, pp. 1–11, 1999.
[GAL 06] GALARA D., “Roadmap to master the complexity of process operation to help operators improve safety, productivity and reduce environmental impact”, Annual Reviews in Control, vol. 30, no. 2, pp. 215–222, 2006.
[GAL 12] GALARA D., GOUYON D., PÉTIN J.F. et al., “Connexion cluster: concepts and frameworks”, International Conference on Software & Systems Engineering and their Applications, Paris, France, 2012.
[GAZ 00] GAZZANIGA M., IVRY R., MANGUN G., Neurosciences cognitives : la biologie de l’esprit, De Boeck Université, Brussels, 2000.
[GIO 15] GIOVANNINI A., AUBRY A., PANETTO H. et al., “Anti-logicist framework for design-knowledge representation”, Annual Reviews in Control, vol. 39, pp. 144–157, 2015.
[GIR 01] GIRIN J., La théorie des organisations et la question du langage, vérité, justice et relations. La question du cadrage dans une situation de service à EDF, CNRS Éditions, Paris, 2001.
[GOL 09] GOLDSTEIN E., Encyclopedia of Perception, Sage, Los Angeles, 2009.
[HAL 02] HALL J., JACKSON M., LANEY R. et al., “Relating software requirements and architectures using problem frames”, IEEE Joint International Conference on Requirements Engineering, Essen, Germany, 2002.
[HAL 05] HALL J.G., RAPANOTTI L., “Problem frames for socio-technical systems”, in MATÉ J.L., SILVA A. (eds), Requirements Engineering for Socio-Technical Systems, Information Science Publishing, London, 2005.
[HAR 03] HARTSON R., “Cognitive, physical, sensory, and functional affordances in interaction design”, Behaviour & Information Technology, vol. 22, no. 5, pp. 315–338, 2003.
[INC 14] INCOSE, A World in Motion: Systems Engineering Vision 2025, INCOSE Project Team technical product, 2014.


[JAC 97] JACKSON M., “The meaning of requirements”, Annals of Software Engineering, vol. 3, no. 1, pp. 5–21, 1997.
[JIN 06] JIN Z., “Revisiting the meaning of requirements”, Journal of Computer Science and Technology, vol. 21, no. 1, pp. 32–40, 2006.
[KOE 67] KOESTLER A., The Ghost in the Machine, Macmillan, New York, 1967.
[KRO 14] KROB D., “Éléments de systémique-Architecture de systèmes”, in BERTHOZ A., PETIT J.L. (eds), Complexité-Simplexité, Collège de France, Paris, 2014.
[KUR 03] KURTZ C.F., SNOWDEN D.J., “The new dynamics of strategy: Sense-making in a complex and complicated world”, IBM Systems Journal, vol. 42, no. 3, pp. 462–483, 2003.
[KUR 06] KURAS M.L., A Multi Scale Definition of a System, Technical Report MTR 06B0000060, MITRE, 2006.
[LAW 10] LAWSON H.B., A Journey through the Systems Landscape, College Publications, London, 2010.
[LEM 95] LEMOIGNE J.L., La modélisation des systèmes complexes, Dunod, Paris, 1995.
[LEM 13] LE MORTELLEC A., CLARHAUT J., SALLEZ Y. et al., “Embedded holonic fault diagnosis of complex transportation systems”, Engineering Applications of Artificial Intelligence, vol. 26, pp. 227–240, 2013.
[LEV 17] LEVALLE R.R., NOF S.Y., “Resilience in supply networks: definition, dimensions, and levels”, Annual Reviews in Control, vol. 43, pp. 224–236, 2017.
[LIE 13] LIEBER R., Spécification d’exigences physico-physiologiques en ingénierie d’un système support de maintenance aéronautique, PhD thesis, University of Lorraine, 2013.
[MAY 18] MAYER F., “Exploring the notion of situation for responsive manufacturing systems specification issues”, IFAC-INCOM’18, Bergamo, Italy, 2018.
[MEL 09] MELLA P., The Holonic Revolution. Holons, Holarchies and Holonic Networks. The Ghost in the Production Machine, Pavia University Press, Pavia, 2009.
[MÉN 12] MÉNADIER J.P., FIORESE S., Découvrir et Comprendre l’Ingénierie Système, Cépaduès Éditions, Paris, 2012.
[MIL 14] MILLOT P., Designing Human-machine Cooperation Systems, ISTE Ltd, London, and John Wiley & Sons, Hoboken, 2014.
[MIL 15] MILLOT P., “Situation awareness: is the glass half empty or half full?”, Cognition, Technology & Work, vol. 17, no. 2, pp. 169–177, 2015.
[MOR 14] MOREL G., BOUFFARON F., NARTZ O. et al., “Vers un apprentissage itératif à l’ingénierie système basée sur des modèles”, 10e Conférence Francophone de Modélisation, Optimisation et Simulation, MOSIM’14, Nancy, France, 2014.


[MÜL 12] MÜLLER G., MÖSER M., Handbook of Engineering Acoustics, Springer, Dordrecht, 2012.
[NUG 15] NUGENT P., COLLAR E., “The hidden perils of addressing complexity with formal process – a philosophical and empirical analysis”, in BOULANGER F., KROB D., MOREL G. et al. (eds), Complex Systems Design & Management, Springer, Dordrecht, pp. 119–131, 2015.
[OAS 10] OASIS TECHNICAL COMMITTEES, OSLC core specification version 2.0, Technical Report, 2010.
[PÉN 97] PÉNALVA J., La modélisation par les systèmes en situation complexes, PhD thesis, University of Paris XI, 1997.
[PÉT 98] PÉTIN J., IUNG B., MOREL G., “Distributed intelligent actuation and measurement (IAM) system within an integrated shop-floor organisation”, Computers in Industry, vol. 37, no. 3, pp. 197–211, 1998.
[PÉT 06] PÉTIN J., MOREL G., PANETTO H., “Formal specification method for systems automation”, European Journal of Control, vol. 12, no. 2, pp. 115–130, 2006.
[PON 11] PONTO C.F., LINDER N.P., Sustainable Tomorrow: A Teachers’ Guidebook for Applying Systems Thinking to Environmental Education Curricula, Pacific Education Institute, Olympia, 2011.
[POU 13] POUVREAU D., Une histoire de la « systémologie générale » de Ludwig von Bertalanffy – Généalogie, genèse, actualisation et postérité d’un projet herméneutique, PhD thesis, EHESS, 2013.
[RET 15] RETHO F., Méthodologie collaborative d’aide à la construction de produits virtuels pour la conception d’aéronefs à propulsion électrique, PhD thesis, Supélec, 2015.
[ROL 06] ROLLS E.T., “Brain mechanisms of emotion and decision-making”, International Congress Series, vol. 1291, pp. 3–13, 2006.
[ROS 12] ROSEN R., Anticipatory Systems, Springer, New York, 2012.
[RUA 15] RUAULT J.R., Proposition d’architecture et de processus pour la résilience des systèmes : application aux systèmes critiques à longue durée de vie, PhD thesis, University of Valenciennes and Hainaut-Cambrésis, 2015.
[SUI 07] SUIED C., De l’urgence perçue au temps de réaction : application aux alarmes sonores, PhD thesis, Pierre and Marie Curie University, 2007.
[TAG 87] TAGUCHI G., System of Experimental Design: Engineering Methods to Optimize Quality and Minimize Costs, UNIPUB/Kraus International Publications, New York, 1987.
[VAL 17] VALCKENAERS P., VAN BRUSSEL H., Design for the Unexpected: From Holonic Manufacturing Systems towards a Humane Mechatronics Society, Butterworth-Heinemann, Amsterdam, 2017.


[VAN 17] VANDERHAEGEN F., “Towards increased systems resilience: new challenges based on dissonance control for human reliability in Cyber-Physical & Human Systems”, Annual Reviews in Control, vol. 44, pp. 316–322, 2017.
[VOI 18] VOIRIN J.-L., Model-based System and Architecture Engineering with the Arcadia Method, ISTE Press Ltd, London, and Elsevier Ltd, Oxford, 2018.
[WIE 48] WIENER N., Cybernetics or Control and Communication in the Animal and the Machine, MIT Press, Cambridge, 1948.
[ZAS 08] ZASK J., “Situation ou contexte ?”, Revue internationale de philosophie, no. 3, pp. 313–328, 2008.
[ZAY 17] ZAYTOON J., RIERA B., “Synthesis and implementation of logic controllers – a review”, Annual Reviews in Control, vol. 43, pp. 152–168, 2017.
[ZHA 15] ZHANG Y., ZHENG H., CHEN B. et al., “A review of industrial symbiosis research: theory and methodology”, Frontiers of Earth Science, vol. 9, no. 1, pp. 91–104, 2015.

PART 2

Cooperation and Sharing of Tasks


3

A Framework for Analysis of Shared Authority in Complex Socio-technical Systems

Chapter written by Cédric BACH and Sonja BIEDE.

3.1. Introduction

This chapter aims to present a framework for the systemic analysis of shared authority and responsibility in complex socio-technical systems. To achieve this, we use the case study of air traffic management (ATM), which can be considered a complex socio-technical system within which humans and technical systems cooperate in an interacting manner. In such systems, human operators have different roles and are integrated in different organizational or geographical cultures. Nevertheless, all share the same final objective: making air transport safe and effective.

To improve the overall performance of this socio-technical system, a new generation of air traffic management systems is currently being developed with a view to making air transport safe and operational at a global scale. One of the challenges in the design of these new systems relates to their complex, dynamic and highly interconnected nature. In addition, technical systems are destined to become highly automated so as to provide a comprehensive approach to traffic management and, at the same time, assist human operators in completing their tasks. In this regard, changing the level of automation does not necessarily mean replacing the tasks allocated to humans in current air traffic management practices, but rather modifying or eliminating certain tasks while creating new ones [SHE 83]. In this way, the automation of tasks can change the information that operators need in order to carry out their own tasks. This evolution raises the question of the utility of the information and the usability of suitable means of communication.


When it comes to designing complex socio-technical systems, the problem of automation extends beyond a simple transposition of functions between humans and machines. More generally, it leads to the definition of new roles and to profound changes in the socio-technical environment of human operators and their skills [CUM 06]. The introduction of remote towers is a recent example of this type of new operation/system. Indeed, the remote tower concept allows air traffic control services (ATS) and aerodrome flight information services (AFIS) to be provided at airfields where this type of service is not currently available, or at aerodromes where it would be too difficult or costly to install and maintain conventional air traffic control services.

In the context of the evolution of European traffic management, members of the SESAR program have developed a methodology to devise and evaluate levels of automation for air traffic management [SES 13b]. However, levels of automation cannot be defined without taking into account their link to questions of authority and responsibility. Our hypothesis is that research into authority and responsibility in complex systems requires a suitable approach, including a set of methods, tools and theoretical foundations adapted to the characteristics of complexity. Indeed, we consider that complexity is not a simple dimension on a continuum between something that would be simple at one end and complex or chaotic at the other. We consider complexity a research subject within the sciences of complexity and, as such, complexity has its own characteristics or criteria that have been brought to the forefront by these sciences. Thus, complexity can be seen as a “paradigm of complexity” [MOR 15].

Within this paradigm of complexity, the notion of agent can be used to characterize an activity that can be carried out by a human and/or a technical entity within an organization [FER 03]. The paradigm of complexity can thus help to extend the notion of activity independently of the type of agent involved. To reach an expected performance, agents can draw on information coming from their internal or external environment. They can also analyze or make decisions, and implement and monitor the consequences of their decisions, while having a multitude of cognitive and physical actions available to them. Achieving the expected performance also depends on the availability of information related to the activity of other agents and on knowledge of their possibilities of cooperation or collaboration. Lastly, the situational, societal and organizational characteristics of the agents’ environments determine the way in which actions are carried out, in addition to aspects specific to each entity, such as past experiences or individual motivation factors.

Given that human and technical agents collaborate in achieving a common performance objective [HOC 00], it is essential for human beings to be aware of their responsibility with regard to automated systems. In addition, humans must know to what extent they have control over automated systems, or to what extent automation can carry out, guide or replace their tasks in acceptable conditions of safety and performance.


Humans may also require assurance that they will not be held responsible for a decision or a dangerous action that was previously delegated to an automated system.

In recent years, several research groups have begun to study the concepts of authority and responsibility in the aeronautical field. This research has established a set of conceptual definitions and a framework involving authority and responsibility. However, to this day there is still no consensus or clear convergence between authors regarding the way in which authority and responsibility are structured with respect to each other. In the field of airborne systems, Billings [BIL 97] had already evoked the link between authority and responsibility. Boy and Grote [BOY 12] developed this approach in more depth and showed that the concept of authority implies aspects of “control” over systems as well as notions of “responsibility” and “accountability” in system operations. Boy and Grote also demonstrated that these concepts must be analyzed by considering the several human and technical agents that form part of the global organization of air traffic management. According to these authors, there are various ways of organizing “authority”: they distinguish “delegation”, “distribution”, “sharing” and “trading”. To complete this approach, Miller and Parasuraman [MIL 07] proposed distinctions between various patterns of delegation of authority.

Other entities, like the US Navy [USN 18], have also proposed distinctions between different types of authority and accountability. The US Navy identifies “legal authority”, referring to the descriptions given in legal texts and/or military code; “earned authority”, which corresponds to mechanisms of social emergence of leaders; and “moral authority”, which refers to individual conscience and to the legitimacy to act outside a legal framework (which allows resistance movements). The US Navy also describes various levels of accountability in rendering accounts to an authority, mainly via legal and/or financial accountability. Accountability is generally considered a counter-power to excesses of authority, one that can be invoked by the various authority bodies, such as military and/or civil justice within the US Navy; accountability would thus be exercised only in cases of infringement of a form of authority.

Bhattacharyya and Pritchett [BHA 14] have also proposed a predictive model to analyze the notions of “role” and “responsibility” using an approach based on various scenarios. They established a set of definitions of autonomy to describe whether an agent can carry out a task in an independent manner; within this framework, authority relates to the tasks that an agent must execute, and responsibility refers to the results of the task for which the agent can be held accountable, in the sense that an authority could ask them to report back. Lastly, the approach proposed by Straussberger et al. [STR 08] recommends complementing the scenario-based approach with an approach based on organizational models.


Due to their direct impact on human behavior, it is essential to understand how authority, responsibility and accountability are designed and disseminated within a socio-technical system, and how these concepts can be managed during the engineering process of such systems. Indeed, engineering processes can provide an overall methodology to study and manage the impact of human performance on air traffic management. Such a process exists in the SESAR human performance assessment process (HPAP) [SES 13a]. The HPAP specifies, firstly, how to define and evaluate problems and benefits related to human performance, leading to the iterative improvement of air traffic management systems by managing recommendations and requirements, and, secondly, how to apply unitary operational concepts as well as integrated applications of these concepts. However, for the moment, the HPAP does not efficiently address the problems associated with authority, responsibility and accountability, or their dissemination in new concepts of air traffic management. In fact, the main current problem is the lack of a shared understanding of the terminology and of processes capable of correctly deciphering authority, responsibility and accountability in such a way as to make them compatible with an engineering process.

The main aim of this chapter is to present a proposed framework for design and analysis that makes it possible to understand authority, responsibility and accountability in complex air traffic management systems. This design framework is user-centered and makes it possible to elicit and resolve human factors problems before new air traffic control concepts become operational. It intends to raise awareness among designers and to direct their thinking with regard to the notions of authority, responsibility and accountability, and it further promotes the difference between a systematic and a systemic approach. In addition, this framework aims to guide analyses and the impact of design across four levels: nano, micro, meso and macro. Finally, the use of this framework will be illustrated by a preliminary analysis of visual separation operations on board airplanes from the point of view of the cockpit, even though we consider that such an approach can be applied to any field of air traffic management.

3.2. From the systematic approach to the systemic approach: a different approach to sharing authority and responsibility

The current view of shared authority is often characterized as systematic, which must be understood as a synonym of methodical. For example, in the military field, the chain of command is an example of a systematic, linear and vertical (top-down) approach to authority. This classic approach to authority is found in definitions such as: “Right to order, power to impose obedience. The authority of the superior over their subordinates (hierarchy)”.


A systematic approach to authority can address aspects of authority, control or responsibility. However, it makes it difficult to describe the dynamics at play between these various aspects. An example of a systematic application of authority can be observed in the task of separating airplanes in categories of airspace that are mainly under the responsibility of an air traffic controller. In aviation, authority designates the place and/or the moment at which someone has the power to influence the course of events in the short term. Similarly, in human–computer interaction, control is understood as the capacity to influence a situation while remaining in the interactive loop, therefore being informed of the states of human–system interaction and having the means to indirectly influence the current or future state of the system. This notion of control in aviation is similar to the notion of “user control” in the field of usability engineering [BAS 95]. These notions relate to the need to know what state the system is in and to the capacity of the user to take control of the system, to the point of stopping it.

This systematic approach to authority is, however, no longer suitable when thinking about authority in systems that involve complex levels of automation. Indeed, air traffic control can be considered a complex adaptive system in which a high number of agents can interact, adapt or even learn [HOL 06]. It is therefore appropriate to approach the design of air traffic management systems through the paradigm of complexity. Cilliers [CIL 98] has proposed a set of characteristics of complex systems that are relevant to the particular characteristics of aeronautics:

– the number of elements is sufficiently high that conventional descriptions (e.g. a system of differential equations) are not only unsuitable but also counterproductive for understanding the system. In addition, the elements of the system interact in a dynamic manner, and these interactions can be physical and/or based on exchanges of information;

– these interactions are rich, meaning that every element or sub-system of the system affects, and is affected by, several other elements or sub-systems. Thus, aeronautics is generally described as a system of systems;

– these interactions are nonlinear. Small variations in inputs, such as physical interactions or stimuli, can cause significant effects or very significant changes in the outputs of the complex system. This phenomenon is often illustrated by the butterfly effect;

– interactions take place primarily, but not exclusively, between agents within a close environment, and the effects of these interactions are modulated;


– every interaction implies direct or indirect retroaction. Retroactions can vary in quality and thus be positive or negative. This is also called recurrence;

– complex systems are open, and it can be difficult or impossible to define the limits of the system;

– complex systems are far from equilibrium conditions. There must be a constant flow of energy to maintain the organization and overall structure of the system;

– complex systems have a history. They evolve, and their past is jointly responsible for their current behavior;

– the elements of the system may be unaware of the behavior of the system as a whole. They respond only to the information or physical stimuli that are made available to them locally.

Based on the consideration that air traffic management is a complex system, a systemic approach to authority, responsibility and accountability allows us to consider that these aspects affect the socio-technical system as a whole, instead of being restricted to sub-parts, in a dynamic context and in interaction with multiple factors and agents. The structure of the multi-agent framework presented here corresponds to this systemic approach, since it allows representation of the dynamic nature of tasks between human and technical agents, as well as of their organizational and societal context.

For a systemic approach, a clear distribution of the allocation of authority, responsibility and accountability is necessary, even though this allocation can take place at different levels and depends on the dynamic nature of situations. It can occur at the detailed level of a function or a task, or at an organizational or societal level. We use the concept of “action” [GOL 96] to characterize the behavior of agents that is oriented towards or by specific objectives. To determine the levels to which an action is connected, the terms nano, micro, meso and macro will be used in the rest of this chapter. The sections that follow describe how these elements are included in the context of a systemic approach to design, in other words, in the context of the framework that we propose.

3.3. A framework for the analysis and design of authority and responsibility

The framework for analysis and design that we propose summarizes a set of different projects in the SESAR program, in which questions involving the allocation of authority and responsibility are recurrent subjects of study, leading to lively debates between air traffic controllers, airspace users and aeronautical engineers at various levels of design, ranging from the construction of human–computer interactions (HCI) to the development of regulations at an international scale.


The objective of our framework for analysis and design is to summarize the main dimensions that should be taken into account when designing the allocation of authority/responsibility in future operational concepts in air traffic management. These dimensions are particularly important for the design of future concepts that involve new technical perspectives and take into account dynamic and adaptive concepts that mobilize, by construction, different forms of cooperation between humans and automation. In other words, this framework for analysis and design is an initial attempt to better address the problems of authority, responsibility and accountability during the design of future aeronautical concepts and the integration of ground and airborne elements.

A first dimension establishes a distinction between authority, responsibility and accountability based on the concept of action. A second dimension refers to the levels of actions in their environments. A third dimension relates to the type of action patterns between agents. In the next sections, we describe these different dimensions. Each element of the design framework is described and illustrated using examples in order to support the reader’s understanding.

3.3.1. Actions in a perspective of authority, responsibility and accountability

To allow adequate analysis of the impacts on human performance, a concise understanding of authority, responsibility and accountability is necessary in order to characterize the various components of interactions between humans and systems. We call on the concept of action to distinguish between these three notions, because this concept focuses on general human behavior directed by goals and targets; through it, individual and motivational factors determine concrete actions. We do not rely solely on the concepts of task and activity [LEP 83]. Indeed, by definition, these concepts associate the objectives of tasks with the means assigned to accomplish them, but they do not provide a framework for analyzing overall behavior that captures the dynamic and individual motivations behind the actions effectively carried out by humans. For more details about the aspects that impact human actions, interested readers may refer to more specialized literature, such as Gollwitzer and Bargh [GOL 96].

Actions take place in a certain context and are determined by the characteristics of the agents and of their environment [GOL 96]. In a temporal dimension, actions can be considered from the perspective of planned actions, the process envisaged for their execution, and their completion with respect to the objective. In a similar manner, this temporal organization can also be transferred to the notions of authority, responsibility and accountability: an action can be planned with an objective in mind, an action can be carried out in order to achieve an objective, and the result of the action is verified with respect to the initial objective.


Actions can be described at a very detailed level or in an aggregated manner, and they have been introduced into the design of automation by different authors [HAR 03]. The action model proposed by the SESAR guidelines for automation provides a standardized framework for analyzing actions in this way by describing information acquisition [I_AC], information analysis [I_AN], action selection and decision [AS_D], and action implementation [A_I]. For purposes of simplification, we refer hereafter only to a “cycle of action” without dividing it up further.

In the framework for analysis and design that we propose, the concepts of responsibility, accountability and authority are clearly linked to the concept of action. With this in mind, the descriptions of the concepts below should be understood as a prerequisite to the use of the analysis and design framework. Their objective is to allow a common understanding and to establish a clear distinction between these concepts in the context of air traffic management.

Responsibility is generally understood as an “obligation for an operator to carry out a task that has been explicitly assigned to them in order to guarantee security of operations” [SKY 13]. Consequently, responsibility refers to aspects where actions have been planned a priori. When actions are planned for a specific agent, this is generally formalized in the form of the tasks expected in the description of a role or of a job. These definitions may come from regulatory bodies and/or airline companies. For example, one agent could be responsible for the overall safety of a flight (ordinarily this responsibility falls to the pilot-in-command, as described in the “rules of the air”), another for the tactical prediction of conflicts in the air (this responsibility generally falls to the traffic alert and collision avoidance system (TCAS) from the onboard standpoint, and to the short-term conflict alert (STCA) system from the point of view of air traffic control), and another for making reliable weather data available (from the onboard point of view, this responsibility falls to the airplane’s weather radar). In the aeronautical context, the emergence of complex systems requires a clear allocation of responsibilities between the various agents involved. We can consider that not all planned actions are formalized, but that knowledge of planned actions can be shared implicitly. Even though responsibility is often defined from a regulatory standpoint, organizations can additionally specify responsibilities concerning specific organizational objectives.

“Supervisory control” refers to the responsibility of monitoring the results of a complex process that involves one or several agents [SHE 92]. Supervisory control is closely linked to the concept of “user control” that we encounter in the field of HCI [BIL 97]. This responsibility should be accompanied by means of controlling, or of short-circuiting/stopping, a system under supervision in the event of its failure. For example, the supervisory control of an air traffic controller can refer to their responsibility to avoid a traffic conflict, whereas the supervisory control of a pilot can refer to their responsibility to make their airplane leave on time or to prevent any delays postponing the departure time.


Accountability, by contrast, concerns the action that is effectively carried out and its results, such as the obligation to demonstrate the accomplishment of a task to another agent (a federal administration, public authorities, employees or even clients) in response to an action; it is expressed as the justification for an action or a decision that is made. Accountability is therefore characterized as a situation a posteriori to actions already carried out. Accountability is located at a level where agents call themselves into question in order to justify the elements that led them to carry out actions and decisions in a specific manner. Justifications of this kind can be related to a reporting system. Accountability can also be envisaged within a system of rights, which will not be developed further in this chapter. Responsibility and accountability should thus be considered conditions a priori and a posteriori, respectively.

Authority is mostly related to the power to act and is thus associated with the conditions for the completion of the actions permitted by design and determined by their dynamics and their contexts. At the same time, the actions that are really carried out are impacted by the specifications of technical systems, which can grant them a certain level of autonomy. As already identified by Boy and Grote [BOY 12], authority is related to the notion of control. Managing an action on the basis of authority often requires decision-making. For example, a pilot knows their responsibility (the tasks they are supposed to accomplish), as well as when they have control, and therefore the responsibility and the possibility of taking back control from automation in order to carry out actions in acceptable conditions of safety and performance. The power to act also depends on the skills of the agent, their effectiveness, their expertise and their knowledge; however, it may also be directed by the legitimacy of an agent to carry out an action. The dynamics of situations can also lead an agent to take the power to carry out an action for which no responsibility has been previously defined. Such situations can lead to a notion of “emerging authority”.

In summary, while responsibility is a formalization of the actions expected of agents or organizations a priori, before the execution of actions, accountability can be seen as a formalization of the impacts of actions concretely brought to the attention of other agents or organizations a posteriori. Authority must be understood as part of the process of executing the actions themselves, but the way in which actions are carried out will be determined by responsibility a priori and accountability a posteriori, and the coherence between these concepts, all related to action, must be guaranteed during the design of new air traffic management solutions.
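As a minimal sketch of how these distinctions could be operationalized in an analysis tool, the following Python fragment ties responsibility to an expected action declared a priori, authority to the stages of the cycle of action actually executed, and accountability to the agent entitled to ask for justification a posteriori. The class and field names and the example values are our assumptions, not part of the SESAR framework.

```python
from dataclasses import dataclass, field
from enum import Enum

class Stage(Enum):
    """Cycle of action, after the four stages of the SESAR action model."""
    I_AC = "information acquisition"
    I_AN = "information analysis"
    AS_D = "action selection and decision"
    A_I = "action implementation"

@dataclass
class Action:
    goal: str                # the objective the action is planned against
    responsible: str         # a priori: the agent expected to act
    accountable_to: str      # a posteriori: who may ask for justification
    executed_by: dict = field(default_factory=dict)  # Stage -> agent holding authority

def authority_transfers(action: Action) -> list:
    """List the stages where the agent holding authority differs from the agent
    responsible a priori: not errors in themselves, but exactly the points whose
    coherence must be checked during design."""
    return [(stage, agent) for stage, agent in action.executed_by.items()
            if agent != action.responsible]

# Illustrative values only: tactical conflict avoidance seen from the cockpit.
avoid_conflict = Action(
    goal="avoid traffic conflict",
    responsible="pilot",
    accountable_to="airline and judicial authorities",
    executed_by={Stage.I_AC: "TCAS", Stage.I_AN: "TCAS",
                 Stage.AS_D: "TCAS", Stage.A_I: "pilot"},
)
print(authority_transfers(avoid_conflict))  # TCAS holds authority over three stages
```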


The coherence between responsibility, authority and accountability must be examined in particular detail if new automation is envisaged in future concepts. Indeed, in this case, the execution of actions could have an impact on the patterns that define the allocation of authority, responsibility and accountability. Figure 3.1 shows the relations between these concepts from the point of view of the agent. These relations can be characterized by anticipation and retroaction loops.

3.3.2. Levels of authority and responsibility

Since agents in complex systems carry out actions in collaboration and cooperation with other entities, it is important to describe the organization of these entities in more detail in order to understand the overall impact of their actions. Actions within a complex system can be carried out by human or technical agents intervening at equivalent spatial, temporal and social levels. These can be characterized on a scale of four levels, ranging from the nano level, the most detailed, to the macro level, the highest. This differentiation of complex systems into levels is quite common in various disciplines such as economics, sociology and ecology. In line with the ecological model of human development introduced by Bronfenbrenner [BRO 79], the micro level covers the types of activities and roles in a specific context that can be considered only at the individual scale. The meso level relates to actions in which an individual is engaged for a certain duration (e.g. the duration of a commercial airplane flight). The macro level involves a set of cultural and social values that exert a strong influence on individual behaviors. An agent can therefore be connected to these various levels at the same time. Depending on the level involved, an action will have different effects, and consequently the description of authority, responsibility and accountability is carried out via characteristics that correspond to the level considered. The important thing to know, when we create new concepts, is at what level responsibility is defined and to whom the agent will be accountable. The following sections propose instantiations of the properties of the four levels that we consider in the field of aeronautics.


The macro level corresponds to large social groups such as nations, international institutions, cultures or societies (e.g. Eastern and Western societies). The macro level typically prescribes the responsibilities of the socio-technical components of aviation as a whole. The aeronautical authorities (e.g. military authorities, civil authorities, the FAA, EASA) generally finalize the aeronautical rules, and major actors in aeronautics, such as the aeronautical industry, accident investigation offices, standardization groups or the International Civil Aviation Organization (ICAO), define the high-level principles of operation such as the separations between airplanes, the organization of airspace or free-route airspace. The rules defined at this level have an impact on the actions of agents either directly (by application of the rules or regulations) or indirectly (by application of the rules defined at the meso level). At the macro level, it is possible to identify problems of definition and allocation of the actions to be carried out between agents at this level, which are entities in themselves. The action time at the macro level is long and involves many contributors whose decisions and actions can have an indirect impact on human activities, technical systems, organizations and the environment (e.g. the reduction of carbon dioxide emissions, air traffic capacity).

The meso level refers to social organizations of intermediate size, which can be structured as organizations of groups of agents that execute the high-level rules defined at the macro level or that share common activities. These organizations (e.g. airlines, industrial collaborations, associations of pilots or of air traffic controllers) define the global missions that structure activity. The meso level mainly refers to the management of the planning and execution phases of assignments and commercial flights. Responsibilities at this level involve an entire sector, a flight, an assignment or many other types of work. The assignments defined at this level have a direct impact on the actions of the agents in charge of executing the activity. Problems related to both the allocation and the completion of tasks can be identified at this level. The timescale at this level follows the same logic and is based on the duration of an assignment or a flight. Environmental considerations are also based on the scale of an assignment and are mainly supported by forecasts (e.g. air traffic density, weather conditions).

The micro level refers to unitary agents, which can be human or technical. The events produced at this level relate only to the execution phase, not the planning phase. An example of an agent at this level is an alerting system that can provide information and specific guidance suited to a local context. This level concerns the allocation of authority between humans and machines, as is the case, for example, with the TCAS. At the micro level, we can consider the properties of agents such as their skills, their strengths and weaknesses, and their authorities and responsibilities to carry out actions. At this level, the timescale is tactical (e.g. less than 10 minutes of anticipation) and environmental considerations are based on detection by sensors.

Lastly, the nano level refers to specific properties (e.g. concrete decisions, elementary actions such as pressing a button) or specific components (such as a system function) that produce actions. At the nano level, the allocation of tasks between agents can also be envisaged. At this level, roles correspond to sets of actions that are linked to each other and that can be executed by one and the same agent holding several roles. Effectively, an agent can play different roles depending on the societal context in which they are involved. The timescale at this level is immediate, as is that of the environment.
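A minimal encoding of the four levels and their characteristic properties could look as follows; it is illustrative only, and the attribute names and example entries are assumptions condensed from the descriptions above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Level:
    name: str       # one of the four levels of the framework
    actors: str     # who typically acts at this level
    timescale: str  # characteristic action time
    example: str    # an aeronautical instantiation

LEVELS = [
    Level("macro", "nations, authorities, ICAO", "long (regulatory)",
          "separation rules, airspace organization"),
    Level("meso", "airlines, professional associations", "an assignment or a flight",
          "flight planning and execution"),
    Level("micro", "unitary human or technical agents", "tactical (< 10 minutes)",
          "a TCAS alert"),
    Level("nano", "elementary decisions or functions", "immediate",
          "pressing a button"),
]

for level in LEVELS:
    print(f"{level.name:5} | {level.timescale:26} | {level.example}")
```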


Figure 3.1 shows how the various levels are related to actions, based on an example of the distribution of authority, which can be instantiated case by case by means of these levels. These various distributions of authority can evolve over time. The example of the Überlingen accident (2002) reflects how the organization of responsibility in relation to the TCAS (traffic alert and collision avoidance system) was carried out. This technical agent triggers the display of instructions to the pilot, who has the authority to execute a tactical avoidance maneuver. Before the Überlingen accident, there was no harmonization between the Russian and German procedures at the macro level: one procedure gave the authority to make a decision to the TCAS agent, the other to the air traffic controller. This lack of harmonization led the pilots and the controller, at the nano level, to follow different, uncoordinated instructions during the execution of the action, which led to the mid-flight collision of the two airplanes. This means that an event at the nano level can lead to a modification at the macro level, which will in turn have a positive retroaction at the nano level. Figure 3.2 shows the dynamic interaction between the levels, based on the case of the evolution of shared authority related to the TCAS following the Überlingen accident.

Another example relates to the events of September 11, 2001, which drastically changed the cockpit access procedures during the execution phase of a commercial flight. After these events, at the macro level, the regulatory bodies (the FAA and EASA) asked airlines to install cockpit access capable of resisting intrusions for a period of 20 minutes (the average time required to reach an airport). Nevertheless, these regulatory bodies delegated authority to the airlines, at the meso level, for the implementation of suitable technical measures, without specifying any solutions to adhere to. This allowed airline companies to manage this new constraint with respect to their operational contexts. Companies then delegated responsibility to pilots, at the micro level, to manage access to the cockpit on a case-by-case basis. At the nano level, the technical solutions put in place can vary depending on the company (e.g. command controls or entry codes), which has an impact on the allocation of authority to agents (e.g. depending on the position of a command control in the cockpit, the cabin crew will or will not be able to access the cockpit). In this case, the management of the risk of intrusion into the cockpit was carried out in a systematic manner, by vertical delegation of authority. Even though this approach may be considered very effective, it does not adapt easily to the diversity of operational situations. Taking into account the variety of agents and situations, alternative solutions could be imagined that would have a different impact on the allocations of authority and responsibility at the micro and nano levels.

[Figure 3.1 is reproduced here only by its legend. It shows three panels, Responsibility (expected actions), Authority (actions carried out) and Accountability (consequences), each spanning the nano, micro, meso and macro levels along a time axis, with four numbered interactions between them:
1) interaction between the allocation of responsibility and the action: responsibility characterizes the expected action and transforms it into an intention for an action;
2) interaction between the action carried out and the responsibility: authority determines the power to carry out the action; the action carried out refers back to the responsibility for the action;
3) interaction between the actions carried out and the accountability: accountability is related to the actions carried out and their consequences;
4) interaction between the accountability and the responsibility: accountability is established in relation to the responsibility.]

Figure 3.1. Relations between authority, responsibility and accountability


[Figure 3.2 is reproduced here only by its legend. It shows three successive panels, Event, Enquiry and Adaptation, each spanning the nano, micro, meso and macro levels along a history axis. The agents involved (A, B, C) are the TCAS, the controller and the pilots, and three numbered interactions are distributed over time:
1) interactions during the event: to obtain correct actions, the descriptions of responsibility within a level (e.g. across different cultures and national regulations) and/or between several levels should be compatible; if these descriptions are not clear, they lead to unclear actions;
2) interaction between the result of the action during the event and its consequences studied during the enquiry: accountability for the results of the action at the nano level can change the attribution of responsibility at the macro level;
3) interaction between the results of the enquiry and the adaptation: the establishment of new demands for the allocation of responsibility at the macro level can change the results of the action at the nano level.]

Figure 3.2. Dynamic relations between the notions of authority, responsibility and accountability following an event, across the different levels

3.3.3. Patterns of actions in relation to authority and responsibility

Based on the attributed authority/responsibility, the execution of actions between agents can be described by different patterns, as already proposed by Boy and Grote [BOY 12] and detailed more widely in the following sections.


In most cases, we could consider that an agent is accountable if they are responsible. However, this basic hypothesis may be called into question during the design of a new allocation of authority; in that case, incoherencies may emerge when the pattern of allocation of responsibility is not compatible with the pattern of allocation of authority and accountability. To simplify the explanation of the principle of allocation patterns, we develop them here with the emphasis on authority, because this allows a better illustration of the dynamics of the patterns via concrete examples and connections to responsibility and accountability. However, we can postulate that equivalent patterns, based on the same principles, can be established for responsibility and accountability, and can serve as a basis for analyzing the coherence between these concepts during design. Nevertheless, to date this has not been detailed.

3.3.3.1. Authority sharing

Authority sharing involves a task context in which agents must reach a common objective. In this situation, at least two agents make separate decisions but are focused on achieving the same objective, and each agent controls their own actions and is responsible for the consequences of their decision-making. Although they have independent actions to carry out, these actions should be synchronized in such a way as to achieve the common objective. To this end, mutual situation awareness is essential. The behavior of these agents involves the joint use of resources. An example of a resource could be the trajectory of an airplane: agents act cooperatively on this trajectory because they pursue the same objective, and this objective will be reached by sharing a common representation of the current situation. In order to cooperate, the agents should be able to share a common reference framework [HOC 01] that provides appropriate situation awareness.

This pattern of authority is related to joint actions and distributed cooperation. Cooperation is goal-directed, since it aims to reach a common objective. It involves the distribution of tasks and sub-tasks among the various agents. However, this allocation pattern can be modified in line with modifications to the environment. Cooperation requires preparation, in order to provide each agent with an agreed representation of the objective to be achieved and of the manner in which it is to be reached. The elaboration of this reference framework takes place essentially through communication; it allows operators to synchronize themselves at a cognitive level, attributing to each agent representations of the objectives to be reached and of the way in which they are to be reached. The agents then synchronize themselves in their actions and coordinate the actions to be carried out.


Difficulties can appear when agents must share a common reference framework. Indeed, these agents may not have the same understanding and/or interpretation of the situation or of the state of the systems. The common reference framework is directly linked to the concept of mediation, because it is a means of sharing a common and appropriate understanding of the current situation so as to be capable of cooperating. As a result, the common reference framework is central to authority sharing, because each agent needs to be able to understand what the other says and does.

In a context of authority sharing, there is a goal/objective shared by all the agents involved: a common goal. In order to achieve this common goal, each agent has particular tasks (whether actions or decision-making). Since there is an interdependence between the agents, and therefore between the tasks, the various agents involved must cooperate to achieve the overall objective. Furthermore, each of their actions/decisions must imperatively be coordinated with the others; in other words, these agents must be capable of synchronizing themselves. Since each agent involved in achieving the common objective must carry out the tasks that are assigned to them or for which they are responsible, the responsibility of each actor is engaged in one part of the overall action. In the context of an event that puts the safety of operations at risk, each agent would become accountable and would need to be able to justify and explain why they acted in one way or another and what led them to make a particular decision.

When authority is shared between human and technical agents, the presence of appropriate guidance representing the internal state of the systems and the progression of the process becomes compulsory at the micro/nano level. This guidance helps the agents to synchronize their actions in a context in which the tasks are interdependent. It also contributes to constructing a common reference framework and appropriate situation awareness.
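As a toy illustration of the role of the common reference framework, the sketch below lets two agents act on a shared resource (here, a trajectory) only when their pictures of the situation agree; a divergent picture blocks the action and signals that cognitive resynchronization is needed. All names and state values are hypothetical.

```python
class SharedResource:
    """A resource under shared authority, e.g. an aircraft trajectory."""
    def __init__(self, state: str):
        self.state = state

class Agent:
    def __init__(self, name: str, picture: str):
        self.name = name
        self.picture = picture  # this agent's representation of the situation

    def act(self, resource: SharedResource, new_state: str, partner: "Agent"):
        # Cooperation presupposes an agreed common reference framework [HOC 01]:
        # both agents' pictures of the situation must match before acting.
        if self.picture != partner.picture:
            raise RuntimeError(
                f"{self.name} and {partner.name} lack a common reference "
                "framework: resynchronize before acting on the shared resource")
        resource.state = new_state
        self.picture = partner.picture = new_state  # mutual awareness update

trajectory = SharedResource("cruise FL350")
controller = Agent("controller", "cruise FL350")
pilot = Agent("pilot", "cruise FL350")
controller.act(trajectory, "descend FL310", pilot)  # succeeds: pictures agree
pilot.picture = "hold FL310"                        # the pictures now diverge
# controller.act(trajectory, "descend FL270", pilot)  # would raise RuntimeError
```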


be allocated to an identified agent. To do this, the tasks and cognitive functions must imperatively be described precisely. At this stage, it makes sense to look at the impact of automation on the global interactions, coordination and synchronization in the overall socio-technical system.

To design the distribution of authority, it is first useful to carry out a precise analysis of the tasks and activities in order to consider all the tasks carried out by human operators in a normal situation. This analysis must take into account not only context-sensitive tasks and abnormal situations, but also the variability of the situations that the agents have to manage. To this end, in the design phase, it is important to precisely define cases of use that are representative of the variability of contexts and tasks. Moreover, since the distribution of authority involves automation, it can be thought about locally, but must be coordinated between the various agents and therefore between the attributed functions. Although the tasks and the authority are distributed, these tasks are included in a work flow. If certain tasks or functions are automated in an uncorrelated manner, problems can therefore be expected to occur in the work flow due to the interdependence of tasks. Since the results of a specific task can be used as an input to another, it is necessary to think about automation in a holistic manner. The most significant factor in a distribution of authority is to correctly take into account the relations between the various functions, as well as the functions themselves. Consequently, the automated functions must be defined jointly (a simplified sketch at the end of this section illustrates this point).

Allocation of functions is a process that is complex to address because, once the initial organization of work is transformed into a new organization, several new functions are going to emerge, which will need to be discovered and taken into account in the automation process itself. Effectively, if automation is considered to be a multi-agent process in which certain functions that were previously executed by humans are transferred to machines, then new functions devoted to humans will inevitably appear, emerging naturally from new types of interaction. The essential question is how to make these new emerging functions explicit. To achieve this, methods dedicated to this question must be developed, as it is difficult to address at present.

As for the pattern of authority itself, the distribution of authority can only be determined once the allocation of functions has been completed. In this case, each agent will have a task that corresponds to a particular function that has been allocated to it and for which they will be responsible. Consequently, this agent will have the obligation to carry out this task correctly. At the same time, they will have the obligation to demonstrate completion of their tasks, which covers their accountability.

For example, in an aeronautical context, there can be failures of certain agents in the general flow of work. Let us look at the example of a strike movement by the ground agents responsible for refueling airplanes with kerosene. Generally, these agents are part of airport operations and are not directly associated with a particular airline. Nevertheless, these agents are essential for an airplane to leave on time, and if they cease their work, then the airplanes run the risk of departing late. However, the regulations of commercial aviation (at a macro level) require airlines to inform their passengers of late departures for which they are believed to be responsible. In other words, airlines are accountable, with regard to their clients, for information about a delay that is the consequence of the non-completion of an action for which the airline is responsible. In these cases, we observe a systematic/direct approach to the relationship between responsibility, authority and accountability: in such a situation of striking ground agents causing delays, the airline cannot be held responsible, and therefore accountable, for information given to its clients. Nevertheless, a systemic pattern of distribution of authority would in this case bring to the forefront a decorrelation between responsibility and accountability, because even though the airline is not at the origin of the airplane's delay, it nevertheless remains responsible for the wellbeing of its clients and consequently accountable for information about the reasons for the delay. This example shows how an inadequate distribution of authority, responsibility and accountability can lead to situations that do not make sense from a global point of view and that may appear absurd from the point of view of certain agents, as is the case for the passengers of the airline whose airplane is late for reasons that, from their point of view, are unjustified. In return, this has a negative impact on the airline's brand image.
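To make the interdependence argument concrete: if the allocation of functions is known together with the data flow between functions, uncoordinated automation can be detected mechanically at design time. The following sketch is a deliberately simplified illustration (the function names, agent names and the coordination criterion are assumptions introduced here, not elements of the chapter's framework):

# Who performs each function after automation (agent names are illustrative).
allocation = {
    "monitor traffic": "automation",
    "plan maneuver": "automation",
    "execute maneuver": "pilot",
}

# Data flow between functions: (producer, consumer) pairs in the work flow.
dependencies = [
    ("monitor traffic", "plan maneuver"),
    ("plan maneuver", "execute maneuver"),
]

# Flag every hand-over point between different agents: each one needs an
# explicit coordination/synchronization mechanism, otherwise problems can be
# expected in the work flow.
for producer, consumer in dependencies:
    if allocation[producer] != allocation[consumer]:
        print(f"coordination needed: {producer!r} ({allocation[producer]}) "
              f"-> {consumer!r} ({allocation[consumer]})")

Every hand-over point flagged in this way marks a place where a coordination or synchronization mechanism must be designed explicitly.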


3.3.3.3. Authority delegation

This pattern of authority evokes a situation in which an agent transfers its authority to another agent to carry out an action. This supposes that the other agent possesses the appropriate skills, expertise and knowledge to execute the action demanded of them. It is also important, in this case, that the delegated action is clearly defined and that the two agents have a shared representation of the prerequisites for carrying out the action and of its consequences and, in certain cases, of the process involved in carrying out the action.


Moreover, the agent initiating the delegation of authority may or may not maintain responsibility for the execution of the delegated action.

Delegation implies the notion of a contract that must be managed by the delegating agent. The delegated agent executes the terms of the contract on behalf of the delegating agent, and the latter must verify that the contract is executed correctly. In the end, the delegating agent always remains accountable for the performance of the delegated agent. In the case where the delegated agent does not respect the execution of the contract, the delegating agent nevertheless remains responsible and accountable for the non-execution of the action. In this latter case, only authority is delegated.

When an action is too complicated to carry out, or surpasses the capabilities of the first agent who carries it out, part of this action or task can be delegated to a second agent in order to better carry out the action as a whole. Each time an agent delegates an action or task, they lose direct control over the execution of this action: they lose authority over it. Nevertheless, they remain responsible for the delegated actions and, to maintain a form of supervisory control, the delegating agent has to engage in a delegated-task management activity. Thus, the agent who delegates must imperatively manage the coordination between what the delegated agents do and their own activity, in such a way as to ensure the overall performance.

To date, air traffic controllers have full authority to send orders to airplanes in order to provide tactical instructions and clearances. An exception to this authority was introduced when TCAS were installed on board airplanes, delegating to the TCAS agent the authority to provide tactical instructions to pilots. This delegation took place in order to avoid confusion during decision-making processes. When two airplanes enter into conflict, the TCAS units of the two airplanes each provide information on the surrounding traffic and advice for resolving the conflict. For example, one TCAS will ask its pilot to climb while the other will ask its pilot to descend; these orders are coordinated. When a TCAS gives an order to the pilot, the latter must carry it out to avoid a collision with the other airplane by losing or gaining altitude. In this case, the decision-making needed to resolve the conflict has been delegated to the TCAS, which must indicate to the pilot the procedure to follow; however, the pilot retains authority over the execution of the avoidance maneuver. Responsibility has also been modified with the integration of TCAS: the TCAS is now responsible for providing “acquisition of information” and a correct “analysis of information”, as well as a suitable and coordinated “decision and action selection”. Pilots remain responsible for correctly carrying out the action based on the decision made by the TCAS agent. Figure 3.3 presents an example of how this delegation of authority can be characterized.


[Figure: three parallel task completion processes – TCAS, controller and pilot – each comprising “acquisition of information”, “analysis of information”, “selection of decision & action” and “implementation of action”, annotated with authority (A) and responsibility (R); delegation arrows link the processes]

Figure 3.3. Example of characterization of the delegation of authority in the case of conflict resolution by TCAS
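The allocation summarized in Figure 3.3 lends itself to a simple machine-readable encoding for design-time checks. The sketch below is a minimal, hypothetical illustration; the cells marked reflect one possible reading of the figure and of the text above, and nothing here is prescribed by the chapter:

from dataclasses import dataclass

STAGES = ["acquisition of information", "analysis of information",
          "selection of decision & action", "implementation of action"]

@dataclass
class Allocation:
    """Authority (A) and responsibility (R) flags for one agent at one stage."""
    authority: bool = False
    responsibility: bool = False

# One possible encoding of Figure 3.3: the TCAS holds A and R for the selection
# of decision and action, while the pilot retains A and R for implementing the
# avoidance maneuver.
allocation = {
    ("TCAS", "acquisition of information"): Allocation(responsibility=True),
    ("TCAS", "analysis of information"): Allocation(responsibility=True),
    ("TCAS", "selection of decision & action"): Allocation(authority=True, responsibility=True),
    ("pilot", "implementation of action"): Allocation(authority=True, responsibility=True),
}

def holders(stage: str, flag: str) -> list:
    """Agents holding a given flag ('authority' or 'responsibility') at a stage."""
    return [agent for (agent, s), a in allocation.items()
            if s == stage and getattr(a, flag)]

# Design-time coherence check: every stage should have a responsible agent.
for stage in STAGES:
    if not holders(stage, "responsibility"):
        print(f"Warning: no agent is responsible for {stage!r}")

A representation of this kind makes the kind of incoherence discussed at the beginning of this section – a responsibility pattern incompatible with the authority pattern – detectable by inspection.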

3.3.3.4. Contractualization of authority

Authority trading (or contractualization of authority) corresponds to the situation where two or several agents establish and steer contracts that define the conditions of the authority patterns described in the previous sections. Authority trading thus frames the conditions under which authority is exercised over time, and this exercise of authority can be reallocated in real time as a function of the operational context. Such contractualization implies planning the exercise of authority, which increases the predictability of situations; it also implies adaptability in the resolution of problems in real time.

Contractualization of authority can be managed at a macro and meso level within organizations, in such a way as to increase the predictability of operations, but it can also be managed locally (at a meso and micro level) in order to allow adaptation to local specificities in space and in time. Management of contractualization at the macro and micro levels can therefore sometimes become contradictory; hence, contractualization of authority must be guided by principles and criteria such as safety and performance.

This contractualization can be established with the issue of a flight plan and/or a business trajectory in the framework of 4D operations. In this case, airline operations establish a flight plan that is provided to the central unit of air traffic flow management, which transfers this flight plan to the relevant control authorities, who confirm its impact and ensure that it is included in the overall management


of planned air traffic. During preparation for the flight, the pilots insert the approved flight plan into the onboard computer (Flight Management System – FMS) and check that this flight plan remains suitable given the planned and/or known environmental context. In this way, the entire organization is involved in this planning, and the flight plan is made in such a way as to improve predictability, environmental impact, commercial profitability and the smoothness of traffic flow.

From a tactical point of view, the flight plan remains a reference framework for evaluating the separations to be applied between airplanes and for planning maneuvers between airplanes in order to prevent collision conflicts. If an unexpected event occurs during the flight execution phase, such as a weather phenomenon on the airplane's trajectory, a contractualization between the crew and air traffic control has to take place, requiring additional communication tasks in order to finalize the best avoidance strategy. For their part, the air traffic controllers must evaluate the impact of this strategy on the surrounding traffic and on their own flight plans. Thus, the initial conditions of the authority contractualization can be reviewed while the flight is being executed.
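As a rough illustration of the time-bounded, renegotiable nature of such contracts, the following sketch (all names and fields are hypothetical assumptions introduced here, not part of the chapter's framework) represents an authority contract with a validity window and a renegotiation operation:

from dataclasses import dataclass

@dataclass
class AuthorityContract:
    """Hypothetical time-bounded contract fixing who exercises authority over a resource."""
    resource: str          # e.g. a flight plan segment
    holder: str            # agent currently exercising authority
    pattern: str           # "sharing", "distribution", "delegation" or "trading"
    valid_from: float      # start of validity window (e.g. minutes into the flight)
    valid_until: float     # end of validity window

    def active(self, t: float) -> bool:
        return self.valid_from <= t < self.valid_until

    def renegotiate(self, new_holder: str, t: float) -> "AuthorityContract":
        """Close this contract at time t and open a new one, e.g. when weather
        forces the crew and air traffic control to agree on a new strategy."""
        self.valid_until = min(self.valid_until, t)
        return AuthorityContract(self.resource, new_holder, self.pattern,
                                 t, float("inf"))

# Example: the approved flight plan gives the crew authority over a segment...
contract = AuthorityContract("trajectory segment A-B", "crew", "trading", 0.0, 120.0)
# ...until an unexpected weather cell triggers a renegotiation at t = 40 min.
contract = contract.renegotiate("crew+ATC", 40.0)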

3.3.4. Dynamic relations between the dimensions of the analysis framework

This section describes the dynamic relations between the dimensions of the framework for analysis and design. The first dimension of the analysis framework characterizes the way in which responsibility, authority and accountability are related to the way in which the action is carried out. The second dimension illustrates how these three components are spread across four levels that designate the social environment of the agents. The macro level distributes the prescriptions of responsibility and authority, and defines the roles of the agents, their profiles (capabilities, level of reliability, level of training, etc.) and the agents that must interact with each other. The meso level concerns the overall management of a mission in which the pilots, air traffic controllers, flight supervision centers (OCC) and automation are involved. The meso level allows authority to emerge as a function of the context and under supervised control; at this level, responsibility and authority can become an adjustment variable or an adaptive factor that takes the variability of contexts and operational environments into account more correctly. The micro level relates to the responsibility and authority allocated to an agent who is able to give direct orders to an airplane. The nano level refers only to the selected actions. The last dimension of the analysis framework considers that responsibility and authority follow various interaction templates between the various agents involved in the overall action.


The essential aspects to consider in a systemic approach to authority are the dynamic relations between the different elements of each of the dimensions described above, due to the temporal and situational evolution of the active agents and environments. Similar actions may be associated with different characteristics of responsibility/authority/accountability, at different macro/meso/micro/nano levels, or follow different interaction templates as a function of the context and of the operational environment. Nevertheless, the dynamics involved, although adaptable, present definite characteristics that can be the subject of analysis and guided design (a schematic illustration follows below).

The definitions of responsibility and accountability must be relative to the action carried out. For this reason, the variability of this action must be well understood. Methods from the field of ergonomics and human factors, such as cognitive modeling, task/activity modeling and organizational modeling, allow us to understand and better master the variability of actions in organizations. However, it is also important to understand at what level (macro or micro) a certain authority and responsibility apply. Lastly, the most efficient interaction template among sharing/distribution/delegation/trading has to be identified and formalized in order for the performance expectation to be achieved. Moreover, communication and cooperation between the various agents should be maintained, supported by interfaces (logical or even HCI) and by means between and within the various levels. In order to avoid the appearance of problems related to human factors during operations, synchronization between these various elements should be ensured when new operational concepts are introduced. This would allow incoherencies or differences between these various elements, in relation to the final action of the agents, to be avoided. The main design challenge resides in articulating the understanding of the actions currently carried out, linked to the action cycle, with the authority allocated over them and the definition of the associated responsibility and accountability.
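To make these dimensions concrete, each action could be characterized as a record along the three dimensions of the framework. The sketch below is purely illustrative (field names and example values are assumptions introduced here; the example anticipates the visual separation case study of section 3.4):

from dataclasses import dataclass

LEVELS = ("macro", "meso", "micro", "nano")
PATTERNS = ("sharing", "distribution", "delegation", "trading")

@dataclass
class ActionCharacterization:
    """One action characterized along the three dimensions of the framework."""
    action: str
    level: str            # societal/organizational level of the exercise of authority
    pattern: str          # interaction template between the agents involved
    authority: str        # agent exercising authority over the action
    responsibility: str   # agent answerable for correct execution
    accountability: str   # agent who must justify/explain the action afterwards

    def __post_init__(self):
        assert self.level in LEVELS and self.pattern in PATTERNS

# The same action can be characterized differently depending on the context:
separation_accepted = ActionCharacterization(
    "maintain visual separation", "micro", "delegation",
    authority="pilot", responsibility="pilot", accountability="pilot")
separation_refused = ActionCharacterization(
    "maintain separation", "micro", "distribution",
    authority="controller", responsibility="controller", accountability="controller")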

3.4. Management of wake turbulence in visual separation: a preliminary case study

To better understand how this analysis framework can concretely help to design new concepts from the point of view of authority and responsibility, we analyze a case study in this section. This case study concerns the management of the risk associated with wake turbulence during visual separation on approach, from a pilot's point of view.


In the United States, visual separations between airplanes on final approach are a daily activity that allows airports to increase their capacity to absorb air traffic. These separations consist of maintaining a visual separation from the preceding airplane (called the target airplane) by using direct visual information from that airplane. This type of operation has a significant impact on the responsibility of each agent: when the crew of an airplane agrees to maintain a visual separation, the responsibility for this separation is delegated to the crew and is no longer within the remit of the approach air traffic controller.

Visual separation allows the separation between two airplanes following each other during an approach to be reduced with respect to the minimum separation of standard operations (in other words, the wake turbulence separation categories). Visual separations on approach are not, for the moment, supported by any onboard system; they are supported only by the trained members of the airplane's crew. In other words, the pilot's task consists of analyzing the distance separating them from the preceding airplane, taking into account the contextual information that can guarantee that their own airplane does not encounter wake turbulence from the preceding airplane and, of course, preventing any risk of collision with this airplane.

Figure 3.4. Application of the framework for analysis of responsibility and authority to a concept of assisted visual separation


Figure 3.4 gives an overview of a concept of visual separation assisted by an onboard system, using the framework for the analysis of authority and responsibility. For each level and action, responsibility [R] and authority [A] are identified. Figure 3.4 shows all the relationships in the concept as it could be set up, without considering the specific details of execution or those related to the mission.

3.4.1. At the nano level

At the nano level, the detailed actions are analyzed during the execution phase, since one or several agents can be required to carry out actions towards the same objective. As a result, the most relevant notion at this level is not responsibility, because responsibility is allocated to an agent defined at other levels; it is above all authority that is relevant here. Indeed, the authority to decide on the most suitable distance to maintain with respect to the target airplane is given to the pilot. At this level, it is possible to identify a potential negative impact on human performance in carrying out this action, because today there are no tools supporting this task. One can ask whether a human and/or a machine should analyze and decide on the distance and the speed adjustment to maintain as a function of the behavior of the target airplane. A situation of this kind would imply a delegation of authority by the pilot to an onboard system. Figure 3.4 shows ADS-B as a potential means that could be used in the design of a tool of this kind.

3.4.2. At the micro level

In controlled airspaces, the responsibility for separations between airplanes is allocated to air traffic controllers. However, once a visual separation is accepted by a crew, the pilot becomes responsible for this separation. The pilot has the authority to accept or not accept the task of visual separation; the separation is not under their responsibility because they are not obliged to accept it, if we trust the operational descriptions. They become responsible only from the moment when they have explicitly accepted the visual separation. As a result, the actions at the nano level, where the pilot analyzes the distance to the preceding airplane, must be considered in relation to the authority at the micro level.

From the point of view of the authority pattern, the situation can be characterized as a delegation of the responsibility for the separation from the controller to the pilot. Indeed, the delegation pattern must be accompanied by a communication that allows


explicit delegation of responsibility. In order to reinforce this transfer, an evolution of the phraseology and/or recourse to a dedicated HCI could take place at the meso level. Indeed, in the analysis framework that we use, we can consider that the formalization of the delegation pattern must be decided at the meso level, and that the effective delegation will take place during execution of the maneuver, directly between the controller and the pilot, at the micro level. If a tool to help carry out this task is introduced, the question of authority between the human and the system should be re-examined as a function of the local and organizational context in which the operation is carried out.

Nevertheless, even though the pilot is responsible for the separation and the controller no longer is, the controller maintains an interest in this separation as part of a supervision task (the controller keeps an eye on it): if something goes wrong with the airplane under visual separation, even though it is no longer under the controller's direct responsibility, there can be an impact on actions in the area that remains under the controller's responsibility and, consequently, their accountability comes into play.

3.4.3. At the meso level

At the meso level, the responsibility between the actors and the social entities can, for example, be related to the way in which airlines define their directives for pilots as a function of their mission as a whole. Directives of this kind could relate to the management of assignments and to recommendations to maintain a larger separation than that judged acceptable by the pilot; the airlines could even give advice about visual separations. Indeed, at this level, airlines can have the authority to take action to install additional safety measures, because airlines are responsible for putting in place a suitable system for managing the safety of their fleet in the air.

3.4.4. At the macro level

Lastly, at the macro level, institutions define laws, for example regarding the separation to be maintained from the preceding airplane as a function of weight categories. Institutions thus have a considerable impact on the accountability of actors. International institutions, such as the ICAO, have the authority to introduce new regulations, but it is down to national authorities to transform them into concrete actions. In addition, different rules can apply depending on the European or North American context. For example, pilots in


the United States and in Europe will apply different procedures for managing a potential go-around during visual separation. Along the same lines, the industrial sector, within the framework of research on aids to visual separation, is responsible for proposing technical solutions and also has the authority to evaluate the overall efficiency of the solutions it proposes.

3.5. Conclusion

This chapter has presented a global framework for the analysis and systemic design of authority/responsibility/accountability in new concepts of air traffic control. This analysis framework is described across four different societal/organizational levels. The approach can be integrated into the existing engineering and human performance processes that have been established to construct the evolutions of European air traffic control, such as the human performance assessment process.

The application of this approach requires integrated research studies dedicated to the relationships between authority, responsibility and accountability, which are currently often researched separately and from a systematic perspective. In order to examine in more detail the templates of authority/responsibility/accountability in future concepts, more advanced studies are required, in particular to detail the systemic relations between the patterns and the agents involved across the various levels. Methods for visualizing these relationships could in particular be developed. Guides could also be written to allow the application of this analysis framework by designers of air traffic control. Indeed, it is only through this type of approach that shortcomings can be discovered, anticipated and resolved during the design or evolution of future air operations.

3.6. References

[BAS 95] BASTIEN J.M.C., SCAPIN D.L., “Evaluating a user interface with ergonomic criteria”, International Journal of Human-Computer Interaction, vol. 7, pp. 105–121, 1995.

[BHA 14] BHATTACHARYYA R.P., PRITCHETT A.R., “A computational study of autonomy and authority in air traffic control”, Digital Avionics Systems Conference (DASC), IEEE/AIAA 33rd, Colorado Springs, October 2014.

[BIL 97] BILLINGS C., Aviation Automation: The Search for a Human-centered Approach, Lawrence Erlbaum Associates, New Jersey, 1997.


[BOY 12] BOY G.A., GROTE G., “Authority issue in organisation automation”, in BOY G.A. (ed.), Handbook of Human-Machine Interaction: A Human-centered Design Approach, Ashgate, Farnham, 2012.

[BRO 79] BRONFENBRENNER U., The Ecology of Human Development: Experiments by Nature and Design, Harvard University Press, Cambridge, MA, 1979.

[CIL 98] CILLIERS P., Complexity and Postmodernism: Understanding Complex Systems, Routledge, London, 1998.

[CUM 06] CUMMINGS M.L., “Automation and accountability in decision support system interface design”, Journal of Technology Studies, vol. 32, pp. 23–31, 2006.

[FER 03] FERBER J., GUTKNECHT O., MICHEL F., “From agents to organisations: an organisational view of multi agent systems”, Agent-Oriented Software Engineering, LNCS 2935, 2003.

[GOL 96] GOLLWITZER P.M., BARGH J.A. (eds), The Psychology of Action: Linking Cognition and Motivation to Behavior, Guilford Press, New York, 1996.

[HAR 03] HARRISON M.D., JOHNSON P.D., WRIGHT P.C., “Relating the automation of functions in multiagent control systems to a system engineering representation”, in HOLLNAGEL E. (ed.), Handbook of Cognitive Task Design, CRC Press, Boca Raton, 2003.

[HOC 00] HOC J.M., “From human-machine interaction to human-machine cooperation”, Ergonomics, vol. 43, no. 7, pp. 833–843, 2000.

[HOC 01] HOC J.M., “Towards a cognitive approach to human-machine cooperation in dynamic situations”, International Journal of Human-Computer Studies, vol. 54, no. 4, pp. 509–540, 2001.

[HOL 06] HOLLAND J.H., “Studying complex adaptive systems”, Journal of Systems Science and Complexity, vol. 19, no. 1, pp. 1–8, 2006.

[LEP 83] LEPLAT J., HOC J.-M., “Tâche et activité dans l’analyse psychologique des situations”, Cahiers de Psychologie Cognitive, vol. 3, pp. 49–63, 1983.

[MIL 07] MILLER C., PARASURAMAN R., “Designing for flexible interaction between humans and automation: delegation interfaces for supervisory control”, Human Factors, vol. 49, no. 1, pp. 57–75, 2007.

[MOR 15] MORIN E., Introduction à la pensée complexe, Le Seuil, Paris, 2015.

[SES 13a] SESAR 16.4.1 (D10), HP assessment process for projects in V1, V2 and V3, SESAR Joint Undertaking, Brussels, 2013.

[SES 13b] SESAR 16.05.01 (D04), Guidance Material for HP Automation Support, SESAR Joint Undertaking, Brussels, 2013.

[SHE 83] SHERIDAN T.B., VAMOS T., AIDA S., “Adapting automation to man, culture and society”, Automatica, vol. 19, no. 6, pp. 605–612, 1983.


[SHE 92] SHERIDAN T.B., Telerobotics, Automation, and Human Supervisory Control, MIT Press, Cambridge, MA, 1992.

[SKY 13] SKYbrary, “Safety accountabilities and responsibilities”, October 2013, available at: www.skybrary.aero/index.php/Safety_Accountabilities_and_Responsibilities.

[STR 08] STRAUSSBERGER S., LANTES J.-Y., MULLER G. et al., “A socio-cognitive descriptive modeling approach to represent authority distribution in ATM”, 7th EUROCONTROL Innovative ATM Research Workshop and Exhibition, Brussels, Belgium, December 2008.

[USN 18] US NAVY, Student Guide of the Center for Naval Leadership, April 2018, available at: http://navybmr.com/study%20material/WCS_Leadership.pdf.

4 The Design of an Interface According to Principles of Transparency

Chapter written by Raïssa POKAM MEGUIA, Serge DEBERNARD, Christine CHAUVIN and Sabine LANGLOIS.

4.1. Introduction

From assembly operations in industry to the transport domains, automation is causing profound changes to the paradigm of task performance. Technical agents are now more and more involved in completing tasks, and their role can vary from simply accompanying the human operator in completing specific tasks to the total and autonomous completion of other tasks. In recent years, driving an automobile has also been significantly modified by this automation, which can be partial or total [NAT 14]. Several prototypes of automated cars have already been tested in the United States and in Japan, as well as in Europe [MEY 14, TRI 14]. Promises about autonomous driving abound: improved road safety, reduced traffic congestion, more comfort for the human agent and improved mobility in a context of demographic change [MEY 14].

While the development of autonomous cars has been accelerated by technological advances in recent years, the “human factors” aspect must nevertheless be taken into account. Indeed, in her article “Ironies of automation”, Bainbridge [BAI 83] underlines what can be perceived as a sword of Damocles: a high degree of automation is seen as desirable because it would mitigate human failures but, at the same time, we ask human agents to be able to take back control in order to manage difficult or unforeseen situations for which the automatisms have not been designed. The “irony” resides in the fact that the more a system is automated, the more critical the contribution of the human operator becomes! Other problems, pointed out by authors such as Parasuraman, Sheridan and Wickens [PAR 00], can also occur. In manual mode, the human agent is in charge of the entire driving task: he/she perceives the driving



environment and controls the actuators of his/her vehicle. With automation, the driving task is no longer provided only by the human agent: the technical agent (or controller) participates partially or totally in the completion of the task, depending on its degree of automation. In cars that are totally autonomous, or autonomous under particular conditions, the human agent is then able to carry out other tasks that are not related to driving (Non-Driving Related Tasks, or NDRT).

Abandonment of the driving task, as allowed by automation, may have no major consequences in certain cases, but may also lead to fatal accidents in others. This was the case, for example, on May 7, 2016, when an accident involving a Tesla and a heavy goods vehicle caused the death of the driver [SOL 16]. On board the Tesla Model S, the Autopilot system, which provides semi-autonomous functions, was in fact activated. This accident occurred at 119 km/h on a motorway in Florida. As specified by Bryan Thomas, spokesperson for the National Highway Traffic Safety Administration (NHTSA), “the enquiry has found no faults with the software” [NAT 14]. The human agent, sitting in the driving seat, is believed to have been distracted for unknown reasons.

The “human factors” aspect therefore plays an important role in the design of autonomous cars, not only to guarantee the safety of the human agent, but also to guarantee the acceptance of these cars by the public. This is the reason why this aspect is taken into account by most automobile manufacturers around the world, such as Volvo, Mercedes and BMW. In France, the Renault group is not to be outdone. It is interested in particular in eyes off/hands off systems, in which the human agent no longer necessarily needs to look at the road or keep their hands on the steering wheel; supervision by the human agent is therefore no longer required. In order to design interfaces that guarantee the level of safety desired by the group for eyes off/hands off systems, Renault deploys projects both internally within the group and externally. This is the case, for example, for the localization–augmented reality (LAR) project of which the research presented in this chapter is a part. This project involves designing interfaces that make it easier to understand the operation of the system and that encourage the (re)construction of the human operator’s situational awareness when taking back manual control in an SAE level 4 car. To do this, we have suggested principles of transparency as a means of directing the design, on the basis of Lyons’ models [LYO 13]. In order to evaluate the relevance of these principles, we have carried out experiments in a static driving simulator.

In this chapter, we will first set out definitions of some notions, focusing in particular on the notions of situational awareness and transparency introduced above. Then, we will present the approach that we have used not only to define the principles of transparency, but also to specify the content of the interfaces.


4.2. State of the art

In this section, we seek to clarify the main terms used in our research: situational awareness, regarding the understanding that the human agent must have of their driving environment, and transparency, regarding the understanding that the human agent must have of the controller’s operation.

4.2.1. Situational awareness

The situational awareness (SA) of an individual, as defined by Endsley [END 95], is made up of three levels:

– level 1: perception of the elements in the environment. At this level, the information is not interpreted; it is simply received in a raw, unprocessed format. This level contains information about the states, attributes and dynamics of the relevant elements of the environment;

– level 2: understanding of the current situation. This level intervenes after perception, as soon as the data can be integrated into existing frameworks. The information processing that understanding implies consists of matching the characteristics of the ongoing situation with frameworks in memory that represent prototype situations. This stage of understanding is important for apprehending the significance of the elements perceived at the previous level, by means of an organized image of the situation in relation to the objectives;

– level 3: projection of future states. This level is associated with the ability to anticipate, that is, to project the states of the elements perceived and understood into the near future. The accuracy of the prediction is strongly correlated with the accuracy of the two preceding levels. This is the level at which the future state of the perceived situation is forecast.

In a general manner, the research literature mentions two main states of SA:

– it can be insufficient: in this case, human agents do not manage to construct the exact and complete representation of their environment that is necessary to achieve the objectives assigned to them. This can have consequences for the performance obtained;

– it can be sufficient: SA is described as sufficient when a human agent perceives and correctly interprets the information in the environment in which they find themselves. They can thus react appropriately to any unforeseen event, to limit or cancel any damage that this event may bring about. This is the optimal state.


Ensuring sufficient SA for the human agent is one of the major challenges that “design for SA” (design for situational awareness, Endsley [END 16]) seeks to tackle. This is also the challenge that we wish to take on in our work. In addition to SA, the other objective selected is the transparency of the controller.

4.2.2. Transparency

The Larousse dictionary proposes five definitions of the word “transparent”:

– definition 1: describes a body that transmits light by refraction and through which objects can be seen with clarity;

– definition 2: describes a material that allows light to pass through it;

– definition 3: describes a cloth, paper or skin that is thin enough to be seen through;

– definition 4: whose reasons or meaning are easy to guess;

– definition 5: whose operation is clear and is not hidden from public opinion.

These definitions allow us to identify two broad approaches. In approach 1, definitions 1, 2 and 3 indicate that an object or system is transparent if it is possible to see through it. In approach 2, relative to definitions 4 and 5, an object or system is transparent if we easily understand its operation. Even though these two approaches appear to be similar, they are functionally very different [OSO 14]. While approach 1 concerns form, approach 2 concerns substance, and it is the latter that we have favored in our research. Following this approach, transparent systems communicate information that allows how they function and/or what they do to be understood.

In the research literature on the interaction between a human agent and an automated system, several authors define the transparency of a system according to this characteristic. For example, Cring and Lenfestey [CRI 09] have suggested that transparency refers to the perceived predictability and the understanding that the human agent has of a specific behavior of the automation. Kim and Hinds [KIM 06], for their part, specify that if human agents do not understand the system’s operational logic, normal and even routine behaviors of the system can then be perceived as errors. However, even though a high level of transparency (revealing several pieces of information about the operation of the controller) is in general


preferred1 by human agents [SAN 14], too great a quantity of information can lead to information overload and thus affect levels 1 and 2 of the SA.

Some authors have worked on the transparency of the controller in the context of automobile driving. For example, to ensure a successful takeover, Eriksson and Stanton [ERI 15] have proposed that Grice’s [GRI 75] four maxims should be used to increase the transparency of autonomous cars. These maxims include the following:

– the quantity maxim, which stipulates that no unnecessary information must be added to the interface;

– the quality maxim, which stipulates that all the information sent by the automated system must be true;

– the relational maxim, which stipulates that all the information must be contextualized and pertinent with respect to the task that the controller is in the process of carrying out;

– the manner maxim, which stipulates that ambiguity must be avoided and that the information must be presented in a structured and brief manner.

Eriksson and Stanton [ERI 15] thus suggest that using these maxims in the interface design of a highly automated car will produce in the human agent a good mental representation of the controller’s operation, and will thus facilitate the transition phase between the automatic mode and the manual mode. Naujoks, Forster, Wiedmann and Neukum [NAU 16] looked at the transparency of highly automated cars with respect to communication about future maneuvers. Their results showed that the vocal modality made it easier to extract information relevant for understanding the controller’s intentions.

Although research of this kind remains limited, models of transparency proposed by certain authors do exist and can be applied to automobile driving. Among these models are, for example, the model of Situation Awareness-based Agent Transparency and Lyons’ models; the latter are the ones that we have used (see Figure 4.1). Lyons looked at the question of transparency from two angles: the transparency of the robot for the human and the transparency of the human for the robot, where the robot here refers to an automated system. The transparency of the robot refers to the set of information that the robot must communicate to the human [LYO 13].

1. Preference is an emotive factor subject to a degree of confidence and related to the human agent [HOF 15].


[Figure: the human agent (driver) and the technical agent (vehicle), each with their objectives; the technical agent’s output toward the driver is structured by the intention model, the task model, the analytical model and the environment model, while the driver side is covered by the human agent model and the teamwork model]

Figure 4.1. Lyons’ models according to Debernard et al. [DEB 16]

This transparency is based on various models: the intention model, the task model, the analytical model and the environment model.

– The intention model: here, the intention refers to the general objective for which the robot was designed, in other words its overall objective. At this stage, two questions arise: what is the robot’s “raison d’être”? Macroscopically, what functions is it able to carry out? To distinguish “the intention in action” from “the intention in a general sense” defined during the design phase, it appeared more judicious to rename this model the “general objective model” (of the robot); in the following, we will use this term in order to avoid any ambiguity with an intended maneuver.

– The task model: Lyons specifies that the human agent analyzes the robot’s actions in line with a specific cognitive schema2. The task model therefore aims to provide details to help the human agent establish this cognitive schema. In particular, it must contain information that makes it easier to understand a given task: information regarding the robot’s objective at a given instant in time, its progress with respect to this objective, its progress relative to the tasks that it can carry out, and knowledge of the errors that could occur while the task is being carried out. These various pieces of information allow a shared representation to be established, between the robot and the human agent, of the actions that must be accomplished for a given task. Communication of the robot’s intention in relation to the objective that it wants to achieve allows the human agent to know the state of advancement of the task as well as the reason why the robot carries out a given action or adopts a given behavior. Lyons suggests that communicating this information improves the human agent’s representation of the robot and also helps to improve surveillance of the robot.

2. A cognitive schema refers to a structure contained in the long-term memory. It needs to be differentiated from the mental representation, which is a structure that is available in the short-term memory and is constructed on the basis of available and relevant schemata in a given situation.

– The analytical model: Lyons specifies that to achieve the objective assigned to it during its design, a robot must acquire and analyze a large quantity of data. Given the complexity of this information, the human agent may have difficulty understanding how the robot makes its decisions. The analytical model therefore aims to communicate to the human agent the analytical principles used by the robot during decision-making. This is very useful, in particular in complex situations in which the level of uncertainty is high.

– The environment model: in difficult and potentially hostile conditions in which time constraints are high (such as military situations), it is essential that robotic systems operate with a dynamic that is in sync with their environment. The robot must communicate to the human agent the understanding that it has of topographical variations, meteorological conditions, threats and time constraints in a particular environment. This type of information indicates to the human agent exactly what the robot perceives, which improves the SA that the human agent has of the environment. Moreover, knowing that the robot knows the environmental conditions helps the human agent calibrate the confidence that can be attributed to it.

In addition to the transparency of the robot for the human agent, the transparency of the human agent for the robot must be taken into account. This transparency refers to the set of information that the robot can integrate concerning the state of the human agent, as well as the information shared between the two decision-makers to manage and understand the sharing of work [LYO 13]. This transparency is based on two models.

– The teamwork model: the robot and the human agent form a team in which each must be able to clearly identify the objectives assigned to them. In this model, which we have named the “cooperation model”, the automation must indicate to the human agent the tasks for which it is responsible, the tasks for which the human agent is responsible, and at what level of autonomy it operates. This information allows the human agent to predict the actions of the automation.

– The human agent model: this is the model that allows the state of the human agent to be analyzed. Thanks to this model, the robot may be capable of


diagnosing the level of stress, cognitive overload, etc. of the human agent. If the robot identifies cognitive overload in the human agent, Lyons suggests that it respond with an increase in the degree of automation for as long as this state is observed, particularly when critical tasks are being carried out.

Although the structuring of Lyons’ model into sub-models is valuable, the author does not specify which of all this information is the most important. Indeed, to cooperate, the agents must understand and share a set of information that constitutes the common frame of reference [DEB 09]. This information comes from exchanges at the “information gathering”, “information analysis”, “decision-making” and “action implementation” levels [PAR 00]. Moreover, Lyons’ models do not specify in which situations particular information must be presented: in all situations, or only in abnormal situations? In other words, they do not deal with the question of information prioritization, considering, for example, the visual load or the context. Taking into account the four functions of Parasuraman and his colleagues [PAR 00] seems important in order to answer these questions and to complete the structuring proposed by Lyons’ models.

4.3. Design of a transparent HCI for autonomous vehicles

4.3.1. Presentation of the approach

Since our research has been carried out with a view to cooperation between human agents and autonomous vehicles, we have paid particular attention to establishing a common frame of reference between the two agents. This is why we have transposed Lyons’ models [LYO 13] into the driving domain, using the functions proposed by Parasuraman et al. [PAR 00]. This has enabled several principles of transparency to be distinguished and connected with each of these functions. In order to obtain solid information to display, we identified the information requirements of the human agent in manual mode thanks to the first two stages of cognitive work analysis. These requirements were then structured according to Michon’s three levels of control [MIC 85] – strategic, tactical and operational – taking the various display possibilities into account. Once the question of information content was dealt with, creative sessions took place to decide how this information could be presented in augmented reality. In order to direct the display of information as a function of the various available displays and the driving context, display rules – also known as “supervision rules” – were defined for the human–computer interaction (HCI) supervisor (a simplified sketch of such rules is given below). Lastly, an experimental validation was designed in order to validate certain principles, since the time available was too limited to validate them all. In the following section, we will present the way in which the principles of transparency have been defined.
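As announced above, here is a deliberately toy sketch of what supervision rules could look like. The rule set, the context fields and the 0.7 threshold are entirely illustrative assumptions, not the LAR project’s actual rules; the tags C1, T1, A1, E1 and H1 refer to the principles defined in the next section:

from dataclasses import dataclass
from typing import List

@dataclass
class DrivingContext:
    mode: str              # "autonomous" or "manual"
    maneuver: str          # e.g. "lane_change_left", "stay_in_lane"
    visual_load: float     # estimated driver visual load, 0.0-1.0
    takeover_pending: bool

def supervision_rules(ctx: DrivingContext) -> List[str]:
    """Toy supervision rules deciding which information items to display."""
    display = ["active_mode_indicator"]               # C1: the mode is always visible
    if ctx.mode == "autonomous":
        display.append("current_maneuver")            # T1: the ongoing action
        if ctx.maneuver.startswith("lane_change"):
            display.append("maneuver_rationale")      # A1: how the maneuver is carried out
        if ctx.visual_load < 0.7:                     # H1: adapt to the driver's state
            display.append("perceived_environment")   # E1: what the car perceives
    if ctx.takeover_pending:
        display.append("takeover_request")
    return display

print(supervision_rules(DrivingContext("autonomous", "lane_change_left", 0.4, False)))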


4.3.2. Definition of the principles of transparency

The school of thought advocating human–computer cooperation considers that the human agent and the technical agent must operate as a team, that is, cooperate to achieve the best possible performance while facilitating each other’s work. A common frame of reference must therefore be maintained and/or constructed between these agents, and this frame of reference can be supported by an HCI, which corresponds to what is known as a common work space (CWS). This CWS can be composed of several attributes, such as [DEB 06]:

– the formulation of information arising from information acquisition activities;

– the formulation of problems arising from diagnosis activities;

– the formulation of strategies arising from schematic decision-making activities;

– the formulation of solutions arising from precise decision-making activities;

– the formulation of instructions arising from solution implementation activities.

In Lyons’ models, there is no clearly explained correlation between each sub-model and the four functions of information processing of Parasuraman et al. [PAR 00], or the attributes of the CWS just recalled. Thus, in our work, direct connections will be established between the principles of transparency arising from Lyons’ models and these functions, in order to supply this space. Moreover, to ensure the complete consistency of the approach and to take into account our research topic of autonomous driving, we will, where possible, establish relationships between each principle and Michon’s model [MIC 85], which describes automobile driving using three levels: strategic, tactical and operational.

In order to have an analysis matrix, we have therefore paired the functions of Parasuraman et al. [PAR 00] with Michon’s levels [MIC 85]. Table 4.1 presents this matrix, which we have named the IPLC-based matrix (Information Processing and Level of Control-based matrix).


      | I.T | I.A | D.M | A.I
  S   |     |     |     |
  T   |     |     |     |
  O   |     |     |     |

Table 4.1. IPLC-based matrix

In Table 4.1, the letters S, T and O refer respectively to the strategic, tactical and operational levels. The abbreviations I.T, I.A, D.M and A.I refer respectively to the functions “information gathering”, “information analysis”, “decision-making” and “action implementation”. On the basis of Lyons’ models, we have defined 12 principles for the LAR project, in which the main functions of the controller are to carry out lane changes and to adjust the vehicle’s speed and distance as a function of the speed of, and distance between, the autonomous vehicle and the surrounding vehicles. For each principle, we have established, where possible, the connection with the IPLC matrix by putting a 1 in the corresponding box. These principles are represented in Figure 4.2.

[Figure: Lyons’ models from Figure 4.1 with the transparency principles attached to each sub-model: PO1–PO3 (general objective model), PT1–PT3 (task model), PA1–PA2 (analytical model), PE1–PE2 (environment model) and PC1–PC3 (cooperation model)]

Figure 4.2. The transparency principles associated with Lyons’ models
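Since each principle is characterized by the IPLC cells it marks, the matrix itself is straightforward to encode for coverage analysis. The sketch below is an assumption introduced here (not an artifact of the LAR project), anticipating the T1 and A1 principles presented in the following sections:

FUNCTIONS = ("I.T", "I.A", "D.M", "A.I")   # the four information processing functions
LEVELS = ("S", "T", "O")                   # strategic, tactical, operational

def iplc_matrix(marked_cells):
    """Build an IPLC matrix: a dict mapping (level, function) -> 0/1."""
    return {(lvl, fn): int((lvl, fn) in marked_cells)
            for lvl in LEVELS for fn in FUNCTIONS}

# Tables 4.2 and 4.3 as data, following our reading of those tables: T1 marks
# information analysis and action implementation at the operational level,
# A1 marks decision-making at the tactical and operational levels.
T1 = iplc_matrix({("O", "I.A"), ("O", "A.I")})
A1 = iplc_matrix({("T", "D.M"), ("O", "D.M")})

def coverage(*matrices):
    """Cells covered by at least one principle - useful to spot gaps at design time."""
    return {cell for m in matrices for cell, v in m.items() if v}

print(sorted(coverage(T1, A1)))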

Although the principles we are going to describe are general and are correlated with Michon’s three levels of control, no particular attention was paid to the strategic level because it is outside the field of investigation of the LAR project. For each of Lyons’ sub-models, we will give an example of a defined principle.


4.3.2.1. Principle from the general objective model

As we have previously mentioned, this model refers to the objective for which the robot was designed, in other words its overall objective. In this model, it is a question of communicating to the human agent the “system’s raison d’être”. In autonomous driving, the essential information for the human agent is what the autonomous car is able to do and in what context. In the context of the LAR project, the following elements have been specified to characterize the autonomous vehicle under consideration:

– automation at level 4 is a restricted level of automation;

– the car allows the human agent to delegate all driving functions to the controller. Since these functions are critical (in terms of safety), the autonomous mode can only be engaged in certain traffic and environmental conditions. In the context of the LAR project, the car can be in autonomous mode if the road markings are clearly visible and the atmospheric conditions are good (no heavy rain or snow);

– the human agent can count on the controller, on the one hand, to monitor changes in the conditions of use of the autonomous mode and, on the other hand, to hand back control (return to manual mode);

– the human agent must be available to take back control occasionally, knowing that a comfortable transition time will be granted before the return to manual mode (between five and seven seconds [GOL 14, LOU 15]);

– the car is designed to carry out, in total safety, the driving tasks and maneuvers that arise during the autonomous mode.

It is essential that the human agent knows precisely all these conditions of use before being able to engage the autonomous mode (see the sketch below). Similarly, if the human agent is not informed that the autonomous car is able to carry out lane changes in total safety, he/she could be surprised by this kind of maneuver initiated by the controller and could subsequently deactivate the autonomous mode. In a general manner, we have therefore defined the O1 principle, which stipulates that: “The driver must know what the maximum degree of autonomy of the car is, as well as the conditions of use of this level of autonomy. These conditions must be clearly identifiable so that the driver can easily make the link between these and the activation of the autonomous mode”. This principle does not have any elements in the IPLC matrix.
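As announced, a minimal sketch of how the conditions of use could be checked and surfaced to the driver; the condition flags are limited to the two conditions named above, and the function names are hypothetical:

from dataclasses import dataclass

@dataclass
class OperatingConditions:
    lane_markings_visible: bool
    heavy_rain: bool
    snow: bool

def autonomous_mode_available(c: OperatingConditions) -> bool:
    """Guard reflecting the conditions of use listed above: clearly visible
    road markings and no adverse weather."""
    return c.lane_markings_visible and not (c.heavy_rain or c.snow)

# The O1 principle requires these conditions to be identifiable by the driver,
# e.g. by displaying which condition currently blocks the autonomous mode.
def blocking_reasons(c: OperatingConditions) -> list:
    reasons = []
    if not c.lane_markings_visible:
        reasons.append("road markings not sufficiently visible")
    if c.heavy_rain:
        reasons.append("heavy rain")
    if c.snow:
        reasons.append("snow")
    return reasons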


4.3.2.2. Principle from the task model The task model contains information about the robot’s objective at a given moment in time. In concrete terms, in autonomous car driving, the human agent must know at a given instant if the vehicle is in the process of carrying out a lane change (to the left or to the right) or if it is continuing in the same lane. This dimension of an ongoing action relates to the operational level from the driving activity model proposed by Michon [MIC 85] and the “action implementation” function of the model by Parasuraman et al. [PAR 00]. This maneuver will have to take place in compliance with formal rules of driving. In order to find out if a rule is being followed, it is first necessary to find out what rule must be followed. For example, the human agent can know that the autonomous car has exceeded a maximum speed of 90 km/h, only if it is known that the maximum speed is in fact 90 km/h. In other words, the “information analysis” function at an operational level is necessary. As a result, we have proposed the T1 principle which states: “In the established autonomous driving mode, the driver must be informed that the controller controls the car in compliance with the applicable rules and good practices of driving (predictability of behavior of the car). Moreover, the driver must be capable of detecting actions (change of lane or staying in lane, change in speed) that the car is carrying out and understanding them”. The T1 principle therefore leads to presentation to the driver of the necessary information that corresponds to the operation/information analysis level and to the operational/implementation level of the action in the IPLC matrix (see Table 4.2). For elements that relate to the operational/implementation level of the action, the driver must know that lateral and longitudinal controls are correctly provided for. These controls constitute the basic requirements of any car and therefore of autonomous cars in particular. Since lateral and longitudinal control is continuous, it seemed to be relevant only to present the changes decided on by the controller with respect to the stable situation “stay in lane and maintain speed”. Thus, at each instant in time, one of the nine following situations determined by the controller will be presented to the driver: – continue in lane and accelerate; – continue in lane and decelerate; – continue in lane and maintain speed; – change of lane to the right and accelerate; – change of lane to the right and decelerate;


– change of lane to the right and maintain speed;

– change of lane to the left and accelerate;

– change of lane to the left and decelerate;

– change of lane to the left and maintain speed.

        I.T    I.A    D.M    A.I
S       –      –      –      –
T       –      –      –      –
O       –      1      –      1

Table 4.2. IPLC matrix for the T1 principle (I.T: information gathering; I.A: information analysis; D.M: decision-making; A.I: action implementation; S: strategic; T: tactical; O: operational)
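To make this concrete, the nine situations can be generated as the Cartesian product of three lane actions and three speed actions. The following Python sketch is purely illustrative: the identifiers (LaneAction, hmi_label, etc.) are ours and do not come from the LAR software.

```python
from enum import Enum
from itertools import product

class LaneAction(Enum):
    STAY = "continue in lane"
    LEFT = "change of lane to the left"
    RIGHT = "change of lane to the right"

class SpeedAction(Enum):
    ACCELERATE = "accelerate"
    DECELERATE = "decelerate"
    MAINTAIN = "maintain speed"

# The T1 display state is one of the 3 x 3 = 9 combinations listed above.
ALL_STATES = list(product(LaneAction, SpeedAction))
assert len(ALL_STATES) == 9

def hmi_label(lane: LaneAction, speed: SpeedAction) -> str:
    """Build the label presented to the driver for the current maneuver."""
    return f"{lane.value} and {speed.value}"

print(hmi_label(LaneAction.RIGHT, SpeedAction.MAINTAIN))
# -> change of lane to the right and maintain speed
```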

4.3.2.3. Principle from the analytical model

The analytical model aims to communicate to the human agent the "reasoning mechanisms" used by the robot to make its decisions. The autonomous system must be in a position to explain its decision-making strategies if the human agent asks for them. In car driving, the human agent must know how a lane change is to take place and, in particular, know the constraints that are taken into account, such as the other vehicles in the traffic. As a result, the A1 principle stipulates that: "In the established autonomous mode, the driver must know how each maneuver is carried out". The information to display to the driver that corresponds to the A1 principle is related to "decision-making" in the IPLC matrix, on both the tactical and operational levels (see Table 4.3). The A1 principle echoes that of Billings [BIL 96], which stipulates that "the autonomous system that controls air traffic must carry out tasks in a manner that is comprehensible to controllers".

        I.T    I.A    D.M    A.I
S       –      –      –      –
T       –      –      1      –
O       –      –      1      –

Table 4.3. IPLC matrix for the A1 principle
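For an operational reading of the matrix, the cells instantiated by each principle can be stored in a small data structure and rendered back as 0/1 rows. This is a hedged sketch that merely transcribes Tables 4.2 and 4.3 and the "no IPLC element" statements; all identifiers are hypothetical.

```python
# Rows: S (strategic), T (tactical), O (operational). Columns follow
# Parasuraman et al. [PAR 00]: IT (information gathering), IA (information
# analysis), DM (decision-making), AI (action implementation).
LEVELS = ("S", "T", "O")
FUNCTIONS = ("IT", "IA", "DM", "AI")

# Cells instantiated by each principle, transcribed from Tables 4.2 and 4.3;
# O1, C1 and H1 have no elements in the matrix.
PRINCIPLE_CELLS = {
    "T1": {("O", "IA"), ("O", "AI")},
    "A1": {("T", "DM"), ("O", "DM")},
    "O1": set(), "C1": set(), "H1": set(),
}

def iplc_matrix(principles):
    """Render the IPLC cells covered by a set of principles as 0/1 rows."""
    covered = set().union(*(PRINCIPLE_CELLS[p] for p in principles))
    return {lvl: [int((lvl, fn) in covered) for fn in FUNCTIONS]
            for lvl in LEVELS}

print(iplc_matrix({"T1", "A1"}))
# {'S': [0, 0, 0, 0], 'T': [0, 0, 1, 0], 'O': [0, 1, 1, 1]}
```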


4.3.2.4. Principle from the environment model

In the environment model, the robot must communicate to the human agent the understanding it has of the site topography, the weather conditions and the time constraints in a particular environment. In an autonomous car, the sensors constitute the primary interface with the external environment. They detect the dynamic or static entities present in that environment. The sensors provide the input data that will cause specific behaviors of the autonomous car, which the human agent must understand. Access to this information allows the driver to verify that the sensors function correctly and that he/she shares the vehicle's representation of the environment. The E1 principle indicates that: "In the established autonomous mode, the driver must have sufficient perception of what the autonomous car perceives in order to carry out his/her analyses and make his/her decisions. It must be ensured that the autonomous vehicle has the necessary information to make its decisions".

4.3.2.5. Principle from the cooperation model

Lyons stipulates that each agent must clearly identify the tasks that are assigned to it. In an SAE level 4 car, two driving modes are possible: the autonomous mode, during which the controller is in charge of driving, and the manual mode, during which the human agent takes care of the various controls required. It is therefore critical for the human agent to know which mode is active. Indeed, mode confusion can lead to a takeover under poor conditions. In the best case, the takeover happens late and can cause an incident; in the worst case, mode confusion could cause an accident. We have therefore laid out the C1 principle, which states: "The driver must know which mode is activated at each moment in time in order to avoid any mode confusion". There are no elements in the IPLC matrix that are able to express this principle.

4.3.2.6. Principle from the human agent model

The human agent model suggests that the robot carries out an analysis of the human agent from the emotional, cognitive and even physical points of view. The robot would then be able to assess the human's state of stress, cognitive overload or any other state. We have therefore deduced the H1 principle, which states that "the controller must monitor the state of the driver and understand it, in order to adapt the information displayed if necessary".


There are no elements in the IPLC matrix that are able to express this principle. In addition, in the LAR project, there is no driver monitoring that would allow us to observe and analyze the human agent. Consequently, the fact that the driver is ready to take over is ascertained mainly by the fact that he/she has correctly placed his/her hands on the steering wheel and his/her foot on the pedal. Ideally, adjusting and prioritizing the information depending on the position of the human agent's gaze or their workload would lead to an adaptive interface.

These principles of transparency applied to autonomous driving do not specify the data and the tangible information to be represented on the HCI to make the controller comprehensible. To define this information, we have resorted to cognitive work analysis.

4.3.3. Cognitive work analysis

Cognitive work analysis (CWA) is a methodology that is typically used to design interfaces described as ecological. These interfaces aim to make the constraints and the possibilities of action in a given work domain visible, and to facilitate low-cost cognitive control based on automatisms (skill-based) or on rules (rule-based) rather than on knowledge (knowledge-based). Their primary objective is to help operators face new situations in complex socio-technical systems [VIC 02]. Their development relies on three stages of the CWA: work domain analysis, task analysis and competencies analysis. Ecological interfaces were initially designed to support process control activities (nuclear, petrochemical) or driving. More recently, they have also been developed to facilitate the supervision of autonomous systems: unmanned underwater vehicles [KIL 14] as well as autonomous vehicles [REV 18]. In the context of the supervision of an autonomous vehicle, these interfaces have different aims depending on the phase they address: the takeover phase or the autonomous driving phase. In the first case (documented by Revell et al. [REV 18]), they are designed to make it easier to manage an unexpected situation, whereas in the second case (documented by Kilgore and Voshell [KIL 14]), they have the primary objective of making the operation of the system easier to understand, and thus align with the objective of transparency. Our work comes closer to the study by Kilgore and Voshell, since it relates to the autonomous driving phase. For the takeover phase to be carried out correctly, the human agent must have a correct mental representation of the environment and of the behavior of the autonomous agent. In the following sections, we present the results of the two analyses that have been carried out: work domain analysis and control task analysis.


4.3.3.1. Work domain analysis

Work domain analysis was proposed by Rasmussen [RAS 86]. Owing to the formative nature of cognitive work analysis, work domain analysis focuses on the constraints related to the safety and performance of a complex system rather than considering specific scenarios [NAI 01, NAI 13]. These constraints can be represented by an abstraction hierarchy (AH) comprising five levels, which are, in decreasing order of abstraction: domain purpose, values and priority measures, functions related to purposes (or general functions), functions related to objects (or physical functions) and physical objects. The links between the levels of the abstraction hierarchy express "means–ends" relationships, also known as "how–why" relationships. Effectively, the connections between a target function and the lower levels of abstraction indicate how that function is operationalized. Inversely, the links between a target function and the higher levels of abstraction indicate why that function exists.

4.3.3.1.1. Abstraction hierarchy

An abstraction hierarchy has already been developed for the field of car driving, and several research publications have looked at this topic (see, for example, [SAL 07]). Concerning our issue, we present in Figure 4.3 an extract of the analyses made using several articles from the literature, such as [JEN 08], interviews with LAR project collaborators and several iterations, the first of which are available in [POK 15a, POK 15b]. In Figure 4.3, the information is distributed over several levels (a minimal graph sketch is given after Figure 4.3):

– domain purpose: the system's "raison d'être". In the case of the driving task, the identified purposes are: road transport that is effective, comfortable and completely safe;

– values and priority measures: these correspond to the principles, priorities and values to follow in order to achieve the domain purposes. In our case, this means complying with the defined itinerary, the separation distances between vehicles and the speed limits, and avoiding collisions between vehicles;

– functions related to purposes: also known as general functions, these represent all the main functions that the system must carry out. We have identified five functions, including, for example, "monitoring vehicle maneuvers in the driving environment";

– functions related to objects: these functions, also known as physical functions, support the general functions. For example, the function "detect and recognize speed panels" has been highlighted;


– physical objects: these are the sensors, mechanical parts and/or algorithms that support the physical functions, for example, navigation systems.

4.3.3.1.2. Defining the information requirements for driving using work domain analysis

We have modeled the abstraction hierarchy (AH) for the driving domain independently of the agent that carries it out, whether a technical or human agent. As Li and Burns [LI 17] have specified, a function in the AH can be the responsibility of the technical agent, the human agent or both (shared control). In other words, the AH can be a basis for the allocation of functions. In their research, Li and Burns used the AH in the field of financial transactions in order to highlight two function allocation scenarios, one representing low automation and the other high automation [LI 17]. These authors have also specified that displaying the functions related to objects (physical functions) on an interface limits losses of situational awareness in the human agent. This result is of particular interest because it allows us to identify the potential information that will then lead to the definition of the information to be presented effectively in order to maintain or establish sufficient situational awareness in the human agent. This identification is carried out from the physical functions determined in the abstraction hierarchy. The speed panels and the action of changing lanes constitute examples of potential information.

Work domain analysis has allowed us to define the general information requirements of the driving task, without concentrating on particular situations. However, given that our research pays particular attention to the lane change maneuver, it is necessary to identify the information requirements relating to this maneuver specifically. Therefore, we have analyzed this maneuver using control task analysis (ConTA), in order to identify the information-processing activities performed by the human agent. We present this analysis in the following section.

Figure 4.3. Extract of the work domain analysis. The blue lines show some of the relationships that exist between the boxes from one level to the next
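The "means–ends" links of the abstraction hierarchy lend themselves to a simple graph representation in which the "how" question descends one level and the "why" question ascends one. Below is a minimal sketch restricted to one branch of Figure 4.3; the node labels are abbreviated and partly assumed.

```python
# One branch of the abstraction hierarchy as "means-ends" links:
# each key is realized (the "how") by the nodes it maps to.
AH_HOW = {
    # domain purpose -> values and priority measures
    "safe, comfortable and effective road transport":
        ["respect speed limits", "avoid collisions"],
    # value -> general function
    "respect speed limits":
        ["monitoring vehicle maneuvers in the driving environment"],
    # general function -> physical function
    "monitoring vehicle maneuvers in the driving environment":
        ["detect and recognize speed panels"],
    # physical function -> physical objects
    "detect and recognize speed panels":
        ["camera", "navigation system"],
}

def how(node):
    """Descend the hierarchy: how is this node operationalized?"""
    return AH_HOW.get(node, [])

def why(node):
    """Ascend the hierarchy: why does this node exist?"""
    return [parent for parent, children in AH_HOW.items() if node in children]

print(how("detect and recognize speed panels"))  # -> physical objects
print(why("detect and recognize speed panels"))  # -> the general function served
```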


Figure 4.4. Rasmussen’s double scale for changes of lane


4.3.3.2. Analysis of the control task

Analysis of the control task identifies the needs associated with the known operational modes of a system. One of the tools used in ConTA is the "decision ladder", which presents the various stages of diagnosis and decision-making activities in terms of information-processing steps and intermediate states of knowledge. The stages do not necessarily follow one another linearly: an agent (human or technical) can take shortcuts. For example, the perception of a signal can imply an immediate corrective action without going through the intermediate stages of information analysis. Consequently, certain stages can be short-circuited. In our research, we have developed a control task analysis of the lane change as it is carried out by a human agent. However, we should remember that in the LAR project, the car undertakes the lane change in an entirely autonomous manner. Our approach is therefore based on the fundamental hypothesis that the human agent will need the technical agent to send him/her information similar to that which he/she would have required if he/she had personally initiated the action.

4.3.3.2.1. The decision ladder

The methodology is similar to that previously described. In effect, we focused on the lane change maneuver by consulting several articles, in particular [OLS 03, LEE 04, NAR 08]. The analysis was completed thanks to interviews with Renault employees. The various stages of construction of the "decision ladder" were carried out using the Rasmussen terminology [RAS 86]. In the context of our research, we have focused on high-speed roads, in accordance with the specifications of the LAR project. Figure 4.4 presents an extract of our results. In Figure 4.4, the circles that represent the outputs in terms of information are as follows:

– the objective: in a lane change, the objective is to go from a lane of origin to a destination lane (change of lane) in a safe and efficient manner, taking account of the time windows and navigational constraints;

– the alert: these are questions to which positive answers will bring about a lane change maneuver. An identified alert is: "is it necessary to go onto a motorway?";

– the set of observations: we have identified nine types of information, among which is "what is the longitudinal and lateral position of the ego-vehicle?";


– the state of the system: we have identified six states, including: "what are the intentions of the neighboring vehicles?";

– the options: four questions can be asked, among which: "is it possible to make a lane change while complying with the speed limits?";

– the chosen objective: in our case, since the objective is unique, it is necessarily the one chosen. It is therefore a case of going from an original lane to a destination lane (change of lane) in a safe and efficient manner, taking into account the time windows and navigational constraints;

– the target state: we have identified four target states, including: "the lane change must be carried out in compliance with the speed limits";

– the task: in our case, the task to be carried out is related to the trajectory that the vehicle must follow;

– the procedure: we have defined five successive actions/controls, which include: "activating the indicator to signal the beginning of the lane change to other users".

4.3.3.2.2. Defining the information requirements for the lane change using control task analysis

In order to relate the results of our analysis to the common work space that must be established between the human agent and the controller, we have divided the Rasmussen "decision ladder" into four regions that correspond to the four information-processing functions that can be automated [PAR 00]: "information gathering", "information analysis", "decision-making" and "action implementation". This approach has already been used by Li and Burns [LI 17]:

– the alert and the set of observations come from the "information gathering" function;

– the state of the system arises from the "information analysis" function;

– the options, the chosen objective and the target state come from the "decision-making" function;

– the task and the procedure come from the "action implementation" function.

The information requirements are derived directly from this "decision ladder". For example:

– for the "information gathering" function, the identified alert "is it necessary to return to the preferential lane?" imposes communication of this alert to the human operator;

– for the "information analysis" function, the state of the system "what are the intentions of the other vehicles?" requires these intentions to be communicated;


– for the "decision-making" function, the decision "the lane change must be rapid" requires communication of the decision to make a quick lane change;

– for the "action implementation" function, the action "stabilizing the ego-vehicle in the destination lane by decelerating" requires this deceleration to be communicated.

Thus, we have extracted the information requirements function by function and recorded them. This analysis (ConTA), coupled with the previous one, allows the majority of the information requirements for the interface of the autonomous car to be defined. We say a majority because the information requirements regarding cooperation between the controller and the human agent do not appear in this analysis. The CWA defines these requirements in later stages, in particular in the SOCA (social organization and cooperation analysis). In the context of the LAR project, the allocation of functions was set from the beginning: the controller carries out all the maneuvers in autonomous mode, and the human agent must take over to control the vehicle in manual mode. Owing to this, the information requirements that arise from controller–human agent cooperation can be directly defined on the basis of the LAR project specifications and the principles of transparency that arise from the transparency model. In the following sections, we present the experimental protocol implemented to validate these principles.

4.4. Experimental protocol

4.4.1. Interfaces

The objective of our work is to validate or invalidate the principles of transparency defined in the context of the use of an autonomous car at level 4 of the SAE taxonomy and, consequently, to evaluate the optimal level of transparency for an HUD interface specific to the LAR project. Therefore, we have decided to activate or deactivate the display of the information associated with these principles according to the four information-processing functions of Parasuraman et al. [PAR 00]: "information gathering", "information analysis", "decision-making" and "action implementation". A total of 16 interfaces can thus be defined. However, we have only focused on five of the 16 interfaces, for two main reasons:

– time constraints: since the human resources related to the LAR project are limited and time is limited, it was not possible to develop each of these 16 HCIs;


– financial constraints: having 16 HCIs to evaluate requires a large number of participants to be available in order to validate them.

Table 4.4 sets out the functions present in each of the HCIs that have been selected and developed. In presents only basic information; Iw adds information related to information gathering and action implementation; Ii is like Iw with additional information related to situation analysis; It presents information related to information gathering, information analysis and decision-making; Is presents all the information.

                          In    Iw    Ii    It    Is
Information gathering     0     1     1     1     1
Information analysis      0     0     1     1     1
Decision-making           0     0     0     1     1
Action implementation     0     1     1     0     1

Table 4.4. Specifications of the five HCIs according to the functions of [PAR 00]. "1" means that the information relating to the function is present; "0" means that it is absent
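Table 4.4 can be read as a binary specification: each HCI is a vector over the four functions of [PAR 00]. The sketch below encodes the table and filters which information items a given HCI would display; the item labels and function tags are illustrative assumptions, not the project's actual data model.

```python
# Table 4.4: which of the four functions of [PAR 00] each HCI instantiates,
# in the order (gathering, analysis, decision, implementation).
HCI_SPEC = {
    "In": (0, 0, 0, 0),
    "Iw": (1, 0, 0, 1),
    "Ii": (1, 1, 0, 1),
    "It": (1, 1, 1, 0),
    "Is": (1, 1, 1, 1),
}
FUNCTIONS = ("gathering", "analysis", "decision", "implementation")

def displayed(hci, items):
    """Keep only the items whose function is shown on this HCI.
    `items` is a list of (label, function) pairs."""
    active = {fn for fn, bit in zip(FUNCTIONS, HCI_SPEC[hci]) if bit}
    return [label for label, fn in items if fn in active]

items = [("police vehicle approaching from behind", "analysis"),
         ("planned lane change to the left", "decision")]
print(displayed("Iw", items))  # [] - neither function is shown on Iw
print(displayed("Is", items))  # both items are shown on Is
```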

As shown in Table 4.4, In does not contain any information derived from the principles of transparency in relation to the functions defined by Parasuraman et al. [PAR 00]. However, it is not devoid of information. Indeed, we have identified the basic information that makes up this interface and that makes it the reference interface. Identified on the basis of analyses of the interfaces of recent vehicles in circulation and of certain autonomous car prototypes, this information includes in particular:

– the instantaneous speed of the autonomous vehicle;

– the speed limit defined for the section in which the autonomous car is located;

– an indication that the autonomous mode is active;

– the duration of the autonomous mode before entering a zone that requires a takeover;

– navigation: this is not as dynamic as a GPS. It provides an overview, in the form of a map, of the autonomous car's journey, with an indication of the local route to be taken in the event of a motorway split or exit;


– the blind spot: this indicates that another vehicle is present outside of the autonomous car's field of vision.

The evaluations of the HCIs are based on the differences observed during data analysis. Thus:

– the difference between Iw and Ii allows the added value of the information from the "information analysis" function to be evaluated;

– the difference between Is and Ii allows the added value of the information from the "decision-making" function to be evaluated;

– the difference between Is and It allows the added value of the information from the "action implementation" function to be evaluated.

The various symbols and annotations used for these interfaces were defined after several creativity sessions. Some of them are presented in [POK 16].

4.4.2. Hypotheses

During manual driving, human agents construct a representation of the situation (this is situational awareness) on the basis of elements relating to their perception, their understanding and the projection they make of it. This representation must be relevant, meaning that it must allow them to acquire the right information at the right moment. In the context of the autonomous car, the HCIs that we have defined all aim to facilitate perception and, for some of them, also the understanding of the situation and the projection of its state into the near future, such as the fact, for example, that the autonomous car is going to take an exit. Therefore, we make the hypothesis that the level of transparency of the HCIs has an impact on the SA of the drivers (hypothesis 1). Moreover, we also make the hypothesis that the least transparent HCI (In) will not be the most appreciated by the human agents and that the latter will prefer interfaces that clearly show the actions that the controller has carried out or is going to carry out (hypothesis 2).

4.4.3. Participants

A total of 45 people, distributed as shown in Table 4.5, took part in the experiment, which took place in the DrSIHMI3 simulator. The average age of the 23 women was 43 years, with a standard deviation of 9.23; the average age of the men was 43.5 years, with a standard deviation of 9.53. On average, the participants had held a driving license for 22.4 years, with a standard deviation of 10.33.

3 Driving Simulator for Human Machine Interface.


Sex       Age group (in years)    Number of people
Female    25–44                   13
Female    45–65                   10
Male      25–44                   11
Male      45–65                   11

Table 4.5. Composition of the participants according to the criteria "sex" and "age"

For each participant, the test lasted approximately two hours, of which 40 minutes was actual driving time. In a few cases, the two hours were exceeded, in particular when a participant spent more time in the debriefing interviews carried out after the tests. In any case, we set the maximum duration of the test at two hours and thirty minutes at the start of the study in order to avoid any tiredness. This limit was complied with.

4.4.4. Equipment

The study was carried out in the simulator of the Technological Research Institute SystemX (IRT SystemX), the DrSIHMI (Figure 4.5). This static simulator is made up of the following elements:

– a car driving seat: this includes a steering wheel, the accelerator, brake and clutch pedals, a gear stick, a button box, a non-AR HUD, an AR HUD and a remote display:

- the button box allows the driver to indicate the degree of discomfort felt during a situation, with buttons 1 and 2,

- the non-AR HUD: given that the DrSIHMI does not have a physical dashboard, this display, with dimensions 15° × 4°, allows classic information to be displayed, such as the speed of the ego-car,

- the AR HUD: with dimensions 15° × 5°, this display allows virtual information to be projected into the driving environment for the perception of augmented-reality elements,

- the remote screen contains information about the strategic level. This screen presents an image of the journey for the autonomous vehicle to follow, as well as the distance to the arrival point;

– a curved projection screen with an opening of 180°.


Figure 4.5. DrSIHMI

With this equipment, and in order to carry out the experiments, several scenarios were developed.

4.4.5. Driving scenarios

A driving scenario specifies all the events that can occur in the environment: in particular, traffic vehicle maneuvers, modifications of the road topology (e.g. bend or straight road) or weather conditions. In the context of our work, the scenarios were developed by specific technical resources on the basis of specifications that we provided, using the SCANeR software. Developed by OKTAL SA, SCANeR models scenarios graphically using "condition–action" pairs ("if…, then…") in the driving environment [REY 00]. The driving scenarios are designed as a succession of driving scenes in a given environment or terrain. Figure 4.6 shows a bird's eye view of the terrain used. We have defined several scenes: ordinary scenes, such as a lane change of the autonomous vehicle due to a motorway split, and less ordinary scenes, such as overtaking two heavy goods vehicles combined with the arrival of a police vehicle.


Figure 4.6. Terrain used during the simulation. The journey in orange is the one that is effectively taken by the autonomous car in the simulation. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip
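The "condition–action" logic with which SCANeR scripts scenario events can be illustrated with a simplified event engine. This is not the SCANeR API: the Event class, the world-state dictionary and the 1,200 m trigger are hypothetical, intended only to show the "if…, then…" mechanism.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    condition: Callable[[dict], bool]  # the "if ..." predicate on the world state
    action: Callable[[dict], None]     # the "... then" effect on the scenario
    fired: bool = False                # each event triggers at most once

def step(events, state):
    """Evaluate all pending condition-action pairs for one simulation tick."""
    for ev in events:
        if not ev.fired and ev.condition(state):
            ev.action(state)
            ev.fired = True

# Example: a police vehicle appears once the ego-car has driven 1,200 m,
# e.g. while it is overtaking two heavy goods vehicles.
state = {"ego_s": 0.0, "police_spawned": False}
events = [Event(condition=lambda s: s["ego_s"] > 1200.0,
                action=lambda s: s.update(police_spawned=True))]

for _ in range(5):          # five ticks of 400 m each
    state["ego_s"] += 400.0
    step(events, state)
print(state["police_spawned"])  # True once the ego-car has passed s = 1200 m
```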

Before the beginning of each test, it was explained to each participant that the behavior of the autonomous car might not be homogeneous throughout the scenario. Effectively, we specified that the autonomous vehicle might accelerate and brake abruptly, and that it might, in certain circumstances, reduce the safety distances. This presentation phase, carried out on a computer, mainly aimed to comply with the principles arising from the general objective model4. These principles aim to make the human agent understand what the autonomous car is and what it can do. Varying the relative order of the scenes allowed four different scenarios to be defined:

– scenario 0: this lasted three minutes and thirty seconds and constituted the learning scenario;

– scenarios a, b and c: with an average duration of nine minutes, these similar scenarios (identical driving scenes presented in a different order) made up the scenarios for each drive. They were associated with one of the five HCIs and presented in a different order to each participant.

HCI/scenario    Sa    Sb    Sc
In              15    15    15
Iw              8     8     8
Ii              8     7     7
Is              7     7     7
It              8     8     7

Table 4.6. Number of participants per HCI/scenario

4 This model reminds us of the system objective.


The numbers of participants for each interface and for each scenario are presented in Table 4.6. During each of these scenarios, various measures were carried out.

4.4.6. Measured variables

In the context of these experiments, we collected data about the cognitive activities of the participants (information gathering and situational awareness) on the one hand, and about elements relating to the user experience (satisfaction) on the other hand. Table 4.7 presents all the variables studied, as well as their possible values (for discrete quantitative variables) or their modalities (for qualitative variables).

Type of variable                 Label      Description                  Values or modalities
Provoked independent variables   F          Freeze                       1 (presence); 0 (absence)
                                 DN         Driving number               1, 2 or 3
                                 I          Interface                    In, Iw, Ii, Is and It
                                 S          Driving scenario             a, b, c
Dependent variables              G1 to G6   Answers to SAGAT questions   1 (correct); 0 (false)
                                 FI         First interface              In, Iw, Ii, Is and It
                                 SI         Second interface             In, Iw, Ii, Is and It
                                 TI         Third interface              In, Iw, Ii, Is and It

Table 4.7. Variables used in the experimental process

The dependent variables were collected at different times:

– the answers to questionnaire Q1 regarding situational awareness were collected only during the first drive (DN 1), on one of the following two scenes: the first corresponds to overtaking a train of vehicles, one of which changes lane; the other corresponds to overtaking two heavy goods vehicles with the arrival of a police vehicle. These answers aim to evaluate situational awareness;

– the ranking of the HCIs into first, second and third positions was carried out at the end of the third experimental condition. This ranking aims to evaluate the acceptability of the HCIs.

In the experiment, we manipulated four independent variables or "factors": the driving number (DN), the interface (I), the freeze situation (F) and the driving scenario (S). Moreover, there were control factors (or independent variables) from questionnaire Q0 about driving habits, such as the age of the participants (A) and their gender (G).

4.4.7. Statistical approach

Given that several variables were measured, they were all considered by means of a multivariate approach, instead of carrying out several univariate analyses. In the family of multivariate analysis tools, classification and grouping techniques are well known [JOH 07]. In the context of our research work, we opted for such techniques because they present the advantage of "easily" showing the relations between the variables and the effects of each variable [BEN 92]. The presence of both quantitative and qualitative variables led us to use multiple correspondence analysis (MCA) in place of the principal component analysis (PCA) that is generally used [BEN 92]. Although MCA is mostly used for the analysis of qualitative variables, in particular for multiple-choice questionnaires or survey data [BEN 92, CES 14, BIL 16], it is important to note that, with quantitative variables coded in a fuzzy way, it can also reveal nonlinear relations. Consequently, the models it uses are more complex than those of PCA [BEN 92, LOS 14]. For a first analysis, the fuzzification model (FM) is used with arithmetical means and value intervals. The FM is less sensitive to extreme values (values that are sometimes erroneous), because the adjusted average is less sensitive to abnormal values than the arithmetical average. Before going further, we note the following considerations:

– consideration 1: to evaluate the impact of the factors, a summarizing operation is necessary. This summary requirement leads to a lower loss of information with local windowing than without [LOS 01, SCH 15];

– consideration 2: other scale-change models exist, but this one offers good results (see [LOS 14] for comparative studies);

– consideration 3: each histogram and its corresponding window were carefully verified.

Applications of MCA in the field of transport exist [CHA 13, CES 14, BIL 16], with qualitative variables [SCH 15] and with fuzzified quantitative variables. The independent variables that we selected are as follows: the participant (P), the driving number (DN), the interface (I), the driving scenario (S) and the freeze situation (F). The data collected during each experimental condition that we present in this chapter comes, for each participant, from questionnaire G on SA (G1–G6) and from the ranking of the HCIs into first, second and third positions (respectively FI, SI and TI). There were only 45 datasets for the measures of situational awareness (G), because these were only collected during the first drive (DN1).


Concerning FI (HCI ranked in first position), SI (HCI ranked in second position) and TI (HCI ranked in third position), these rankings were only provided at the end of the third drive (DN3).
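As an illustration of the fuzzy coding step used before the MCA, here is a minimal sketch of a three-category, piecewise-linear membership coding of a quantitative variable. The 25–65 window (the study's age range) and the triangular shape are our assumptions for illustration; the actual membership function design follows [LOS 14].

```python
import numpy as np

def fuzzy_code(x, low, high):
    """Code a quantitative value into three fuzzy categories (low/mid/high)
    with piecewise-linear memberships; the three degrees sum to 1."""
    x = float(np.clip(x, low, high))
    mid = (low + high) / 2.0
    if x <= mid:
        m_low = (mid - x) / (mid - low)
        return np.array([m_low, 1.0 - m_low, 0.0])
    m_high = (x - mid) / (high - mid)
    return np.array([0.0, 1.0 - m_high, m_high])

# e.g. coding participants' ages on a 25-65 window
for age in (25, 43, 65):
    print(age, fuzzy_code(age, 25, 65).round(2))
# 25 -> [1.0, 0.0, 0.0]; 43 -> [0.1, 0.9, 0.0]; 65 -> [0.0, 0.0, 1.0]
```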

In the following sections, we present the results obtained from this experiment.

4.5. Results and discussions

4.5.1. Situational awareness

Situational awareness was evaluated during the first drive. During this drive, questionnaire G was administered when a freeze occurred (see Table 4.8). The questions asked related to six items: the vehicle's action; the vehicle's dynamic; the vehicle's speed; the vehicle's future action; the number of vehicles in front of the autonomous vehicle; and the number of vehicles behind the autonomous vehicle.

Level of the SA          Questions
Level 1: perception      G1. What action is your car taking? (a) Continuing in lane; (b) Change of lane to the left; (c) Change of lane to the right
                         G2. What dynamic is your car undergoing? (a) Acceleration; (b) Deceleration; (c) Constant speed
                         G5. How many cars are there in front of the autonomous vehicle? 0 / 1 / 2 / 3 / 4
                         G6. How many cars are behind the autonomous vehicle? 0 / 1 / 2 / 3 / 4
Level 2: comprehension   G3. Is the vehicle below the legal speed limit? Yes / No
Level 3: projection      G4. What is the future action of your vehicle? (a) Continuing in lane; (b) Change of lane to the left; (c) Change of lane to the right

Table 4.8. Questionnaire G on situational awareness

The response to each of these questions (G1–G6) was coded using two modalities, 0 and 1, corresponding respectively to a false answer and a correct answer.
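Given this 0/1 coding, per-level SA scores follow directly from the grouping of the questions in Table 4.8. A minimal sketch (the answer record shown is hypothetical):

```python
# Grouping of the questionnaire G items by SA level (Table 4.8).
SA_LEVELS = {
    "perception":    ("G1", "G2", "G5", "G6"),
    "comprehension": ("G3",),
    "projection":    ("G4",),
}

def sa_scores(answers):
    """Per-level proportion of correct answers; `answers` maps Gi -> 0 or 1."""
    return {level: sum(answers[q] for q in qs) / len(qs)
            for level, qs in SA_LEVELS.items()}

print(sa_scores({"G1": 1, "G2": 0, "G3": 1, "G4": 1, "G5": 1, "G6": 0}))
# {'perception': 0.5, 'comprehension': 1.0, 'projection': 1.0}
```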


4.5.1.1. Results

In Figure 4.7, we present hypothesis tests evaluating the impact of the factors I (HCI), S (scenario), F (freeze scene) and G (gender) on the answers to questionnaire G relating to situational awareness, in an inferential context. Each line corresponds to one of the questions in questionnaire G and each column corresponds to one of the factors. The cyan bars show the histogram for each of the 30 factor/variable pairs and the bar graphs show the p values.

4.5.1.2. Discussion

We observed a significant effect of variable I on G2 (dynamic of the vehicle). For this question, the number of correct answers was very low with In and highest with Iw. In other words, the HCI that presents elements relating to information gathering and action implementation allows the participants to perceive the dynamic of the autonomous vehicle better than one that does not present any of these elements. One of the elements specific to Iw that highlights the dynamic of the autonomous vehicle is the table of nine boxes. This tends towards a validation of the T1 transparency principle, which stipulates in part that "the driver must be capable of detecting actions (change of lane, continuation in lane, change of speed) that the car is in the process of carrying out, and understanding them".

Similarly, there is a significant effect of I on G6 (number of vehicles behind the autonomous vehicle). For this question, the number of correct answers was very low with Is and highest with Iw. In other words, with the HCI that displays information gathering and action implementation, the participants perceived the number of vehicles behind the autonomous vehicle better than with the HCI that presents not only the results of information gathering and information analysis but also the results of decision-making. This result is all the more surprising given that, with Is, explanations were displayed about the presence of a police vehicle or of a vehicle arriving quickly from behind. We hypothesize that this HCI, which presents numerous pieces of information, caused an information overload. Effectively, as Bawden and Robinson have noted [BAW 09], a large quantity of information on an interface can cause a reader not to read it all.

These two results tend towards the validation of hypothesis 1, according to which the transparency of the HCIs causes differences in the representation of the situation that the agents construct. In summary, from the point of view of cognitive activities, Iw (the HCI that shows the information relating to information gathering and action implementation) leads to the best results in terms of situational awareness.


Figure 4.7. Hypothesis tests to evaluate the impact of the factors I, S, F and G concerning questionnaire G in an inferential context


4.5.2. Satisfaction of the participants

The participants carried out three drives and therefore saw three HCIs. At the end of the three drives, the following question was asked of them, by way of conclusion: "in order of preference, which are the HCIs that best allowed you to understand the system? In first position? In second position?". For example, a participant X who had seen In, Iw and Ii could choose to rank Ii in first position, Iw in second position and In in third position.

4.5.2.1. Results

These data differ from the previous ones for two reasons:

– reason 1: the values correspond to relative evaluations, whereas the previous values correspond to absolute values;

– reason 2: the 45 participants did not all evaluate the same interfaces, with the exception of the In interface.

Therefore, a specific procedure had to be used. The six graphs in Figure 4.8 clearly show that the In interface is the least appreciated by the participants, in comparison with the four others.

Figure 4.8. Ranking of the I interfaces by each participant. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip


For a more general comparison, Figure 4.9 shows, for each interface, the ratio between the number of times it was ranked in first position and the total number of times it was tested. For example, for the In interface, these two numbers are 4 and 45 respectively, which gives a ratio of 8.9%. Figure 4.9 shows that the Is interface is slightly better appreciated than the It interface.

Figure 4.9. Ranking of the interfaces in first position

4.5.2.2. Discussion

The participants' preference went least often to the In interface, which tends to validate hypothesis 2, according to which the opaque HCI would be the least appreciated by the human agents. In addition, the Is and It interfaces received the most votes. On the one hand, this result agrees with the results of Sanders, Wixon and Schafer [SAN 14], who found that a high level of transparency is preferred by individuals. It also corroborates a study by Swearingen and Sinha [SWE 02], which suggests that, in general, people prefer interfaces that are perceived as transparent. On the other hand, given that Is and It were the only HCIs to incorporate decision-making elements, this leads us to believe that this function brings added value to the transparency of an HCI. The T2 and T3 principles have been validated. They stipulate respectively that "in the established autonomous mode, the driver must be capable of perceiving the intention of the autonomous car (the maneuver that it is going to do) and understand why this


maneuver is going to be carried out" and that "in the established autonomous mode, the driver must be informed of any maneuver that can interrupt another maneuver in progress (change of plan)".

4.6. Conclusion

In this chapter, we have presented an interface design approach for a car with a high level of automation: an SAE level 4 car that can be in autonomous mode or in manual mode. In the first mode, the lateral and longitudinal control of the car are entirely managed by the controller, whereas in the other, the human agent takes care of these two controls. In order to design an interface that makes the controller comprehensible to the human driver and that maintains or rebuilds his/her understanding of the situation before the return to manual mode, we used the notion of transparency through Lyons' models. From these models, we formulated 12 principles of transparency [LYO 13]. In order to instantiate each of these principles and define the information to be displayed to the driver, the first two stages of cognitive work analysis were implemented. We then defined five interfaces in order to distinguish the contribution of each piece of information according to the information-processing functions of Parasuraman et al. [PAR 00] and to demonstrate the interest of the principles. These interfaces were tested in a driving simulator on a sample of 45 people in order to validate the principles.

The hypotheses that we put forward related to the impact of transparency on situational awareness and on user satisfaction. Contrary to hypothesis 1, the scores related to situational awareness did not increase with additional information. In general, the interfaces with the fewest instantiated functions resulted in the best answers from the users. Concerning user satisfaction, the interfaces on which all the functions were represented were the most appreciated, which corroborates hypothesis 2. There thus appears to be a contradiction between the two measurements taken: between the cognitive activities, on the one hand, and the user experience, on the other hand. Indeed, the display of all the principles is appreciated by the users even though they do not all contribute to improving their situational awareness.

Taking these results into account, several research perspectives come to our attention. In particular, it would be interesting to redo the experiment with the human agent carrying out a secondary task, and to evaluate the situational awareness for each interface under that condition. This would allow us to see in particular


whether the results that we have obtained are corroborated or not. In addition, more questions could be integrated into the situational awareness questionnaire in order to develop it further and to obtain a more precise representation of the user's situational awareness. In the medium term, it appears interesting, integrating all the previous possibilities, to introduce a takeover phase in order to evaluate the impact of the principles of transparency on the quality of the transition between the autonomous mode and the manual mode.

4.7. Acknowledgments

This work was made possible by funding from the French government in the context of the PIA program (Programme pour les investissements pour l'avenir – Program for Future Investments) at the SystemX Institute of Research and Technology.

4.8. References

[BAI 83] BAINBRIDGE L., “Ironies of automation”, Automatica, vol. 19, no. 6, pp. 775–779, 1983.
[BAW 09] BAWDEN D., ROBINSON L., “The dark side of information: overload, anxiety and other paradoxes and pathologies”, Journal of Information Science, vol. 35, no. 2, pp. 180–191, 2009.
[BEN 92] BENZECRI J.P., “Validité des échelles d’évaluation en psychologie et en psychiatrie et corrélations psychosociales”, Les cahiers de l’analyse des données, vol. 17, no. 1, pp. 55–86, 1992.
[BIL 96] BILLINGS C.E., Human-centered aviation automation: Principles and guidelines, NASA technical memorandum, no. 110381, 1996.
[BIL 16] BILLOT-GRASSET A., AMOROS E., HOURS M., “How cyclist behavior affects bicycle accident configurations?”, Transportation Research Part F: Traffic Psychology and Behaviour, vol. 41, pp. 261–276, 2016.
[CES 14] CESTAC J., PARAN F., DELHOMME P., “Drive as I say, not as I drive: Influence of injunctive and descriptive norms on speeding intentions among young drivers”, Transportation Research Part F: Traffic Psychology and Behaviour, vol. 23, pp. 44–56, 2014.
[CHA 13] CHAUVIN C., LARDJANE S., MOREL G. et al., “Human and organisational factors in maritime accidents: Analysis of collisions at sea using the HFACS”, Accident Analysis & Prevention, vol. 59, pp. 26–37, 2013.


[CRI 09] CRING E.A., LENFESTEY A.G., Architecting human operator trust in automation to improve system effectiveness in multiple unmanned aerial vehicles (UAV) control, PhD thesis, Air Force Institute of Technology, Ohio, United States, 2009.
[DEB 06] DEBERNARD S., Coopération homme-machine et répartition dynamique des tâches – Application au contrôle de trafic aérien, HDR, Université de Valenciennes, France, 2006.
[DEB 09] DEBERNARD S., GUIOST B., POULAIN T. et al., “Integrating human factors in the design of intelligent systems: An example in air traffic control”, International Journal of Intelligent Systems Technologies and Applications, vol. 7, no. 2, pp. 205–226, 2009.
[DEB 16] DEBERNARD S., CHAUVIN C., POKAM R. et al., “Designing human–machine interface for autonomous vehicles”, IFAC-PapersOnLine, vol. 49, no. 19, pp. 609–614, 2016.
[END 95] ENDSLEY M.R., “Toward a theory of situation awareness in dynamic systems”, Human Factors, vol. 37, no. 1, pp. 32–64, 1995.
[END 16] ENDSLEY M.R., Designing for Situation Awareness: An Approach to User-centered Design, CRC Press, Boca Raton, United States, 2016.
[ERI 15] ERIKSSON A., STANTON N.A., “When communication breaks down or what was that? – The importance of communication for successful coordination in complex systems”, Procedia Manufacturing, vol. 3, pp. 2418–2425, 2015.
[GOL 14] GOLD C., LORENZ L., BENGLER K., “Influence of automated brake application on take-over situations in highly automated driving scenarios”, Proceedings of the FISITA 2014 World Automotive Congress, Maastricht, The Netherlands, 2014.
[GRI 75] GRICE H.P., “Logic and conversation”, Syntax and Semantics, vol. 3, pp. 41–58, 1975.
[HOF 15] HOFF K.A., BASHIR M., “Trust in automation: Integrating empirical evidence on factors that influence trust”, Human Factors, vol. 57, no. 3, pp. 407–434, 2015.
[JEN 08] JENKINS D.P., STANTON N.A., SALMON P.M. et al., “Using cognitive work analysis to explore activity allocation within military domains”, Ergonomics, vol. 51, no. 6, pp. 798–815, 2008.
[JOH 07] JOHNSON R.A., WICHERN D.W., Applied Multivariate Statistical Analysis, 6th ed., Pearson Prentice Hall, Upper Saddle River, United States, 2007.
[KIL 14] KILGORE R., VOSHELL M., “Increasing the transparency of unmanned systems: Applications of ecological interface design”, International Conference on Virtual, Augmented and Mixed Reality, Heraklion, Greece, 2014.
[KIM 06] KIM T., HINDS P., “Who should I blame? Effects of autonomy and transparency on attributions in human–robot interaction”, The 15th IEEE International Symposium on Robot and Human Interactive Communication, Columbia, United States, 2006.


[LEE 04] LEE S.E., OLSEN E.C., WIERWILLE W.W. et al., A comprehensive examination of naturalistic lane-changes, Report, National Highway Traffic Safety Administration, Washington, United States, March 2004.
[LI 17] LI Y., BURNS C.M., “Modeling automation with cognitive work analysis to support human–automation coordination”, Journal of Cognitive Engineering and Decision Making, vol. 11, no. 4, pp. 299–322, 2017.
[LOS 01] LOSLEVER P., “Obtaining information from time data statistical analysis in human component system studies (I). Methods and performances”, Information Sciences, vol. 132, nos 1–4, pp. 133–156, 2001.
[LOS 14] LOSLEVER P., “Membership function design for multifactorial multivariate data characterizing and coding in human component system studies”, IEEE Transactions on Fuzzy Systems, vol. 22, no. 4, pp. 904–918, 2014.
[LOU 15] LOUW T., MERAT N., JAMSON H., “Engaging with highly automated driving: to be or not to be in the loop?”, 8th International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design, Salt Lake City, United States, 2015.
[LYO 13] LYONS J.B., “Being transparent about transparency: A model for human-robot interaction. Trust and autonomous systems”, AAAI Spring Symposium, Stanford, United States, 2013.
[MEY 14] MEYER G., DEIX S., “Research and innovation for automated driving in Germany and Europe”, in MEYER G., BEIKER S. (eds), Road Vehicle Automation, pp. 71–81, Springer, New York, United States, 2014.
[MIC 85] MICHON J.A., “A critical view of driver behavior models: What do we know, what should we do?”, in EVANS L., SCHWING R.C. (eds), Human Behavior and Traffic Safety, pp. 485–524, Springer, Boston, United States, 1985.
[NAI 01] NAIKAR N., SANDERSON P.M., “Evaluating design proposals for complex systems with work domain analysis”, Human Factors, vol. 43, no. 4, pp. 529–542, 2001.
[NAI 13] NAIKAR N., Work Domain Analysis: Concepts, Guidelines, and Cases, CRC Press, Boca Raton, United States, 2013.
[NAR 08] NARANJO J.E., GONZALEZ C., GARCIA R. et al., “Lane-change fuzzy control in autonomous vehicles for the overtaking maneuver”, IEEE Transactions on Intelligent Transportation Systems, vol. 9, no. 3, pp. 438–450, 2008.
[NAT 14] NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION (NHTSA), Human factors evaluation of level 2 and level 3 automated driving concepts: Past research, state of automation technology, and emerging system concepts, Report no. DOT HS 812 043, Washington, United States, 2014.
[NAU 16] NAUJOKS F., FORSTER Y., WIEDEMANN K. et al., “Speech improves human-automation cooperation in automated driving”, Workshop Automotive HMI – Mensch und Computer, Aachen, Germany, 2016.


[OLS 03] OLSEN E.C.B., Modeling slow lead vehicle lane changing, PhD thesis, Virginia Tech, Blacksburg, United States, 2003.
[OSO 14] OSOSKY S., SANDERS T., JENTSCH F. et al., “Determinants of system transparency and its influence on trust in and reliance on unmanned robotic systems”, Proceedings of International Society for Optics and Photonics, vol. 9084, 2014.
[PAR 00] PARASURAMAN R., SHERIDAN T.B., WICKENS C.D., “A model for types and levels of human interaction with automation”, IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, vol. 30, no. 3, pp. 286–297, 2000.
[POK 15a] POKAM R., CHAUVIN C., DEBERNARD S. et al., “Towards autonomous driving: An augmented reality interface design for lane change”, FAST-zero’15: 3rd International Symposium on Future Active Safety Technology Toward Zero Traffic Accidents, Gothenburg, Sweden, 2015.
[POK 15b] POKAM R., CHAUVIN C., DEBERNARD S. et al., “Augmented reality interface design for autonomous driving”, 12th International Conference on Informatics in Control, Automation and Robotics (ICINCO), pp. 22–33, Colmar, France, 2015.
[POK 16] POKAM R., “Vers des logiques et des concepts de représentations : Une séance de créativité”, Congrès ErgoIA : 15ème édition sur l’Ergonomie et l’Informatique Avancée, Biarritz, France, 2016.
[RAS 86] RASMUSSEN J., Information Processing and Human–Machine Interaction: An Approach to Cognitive Engineering, Elsevier, New York, United States, 1986.
[REV 18] REVELL K., LANGDON P., BRADLEY M. et al., “User Centered Ecological Interface Design (UCEID): A novel method applied to the problem of safe and user-friendly interaction between drivers and autonomous vehicles”, Intelligent Human Systems Integration, pp. 495–501, Springer, New York, United States, 2018.
[REY 00] REYMOND G., HEIDET A., CANRY M. et al., “Validation of Renault’s dynamic simulator for adaptive cruise control experiments”, Proceedings of the Driving Simulator Conference (DSC00), pp. 181–191, 2000.
[SAL 07] SALMON P.M., REGAN M., LENNÉ M.G. et al., “Work domain analysis and intelligent transport systems: Implications for vehicle design”, International Journal of Vehicle Design, vol. 45, no. 3, pp. 426–448, 2007.
[SAN 14] SANDERS T.L., WIXON T., SCHAFER K.E. et al., “The influence of modality and transparency on trust in human–robot interaction”, IEEE International Inter-disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), 2014.
[SCH 15] SCHIRO J., LOSLEVER P., GABRIELLI F. et al., “Inter and intra-individual differences in steering wheel hand positions during a simulated driving task”, Ergonomics, vol. 58, no. 3, pp. 394–410, 2015.


[SOL 16] SOLOMON B., “GM invests $500 million in Lyft for self-driving car race with Uber, Tesla and Google”, Forbes, available at: https://www.forbes.com/sites/briansolomon/2016/01/04/gm-invests-500-million-in-lyft-for-self-driving-car-race-with-uber-tesla-and-google/, 4 January 2016.
[SWE 02] SWEARINGEN K., SINHA R., “Interaction design for recommender systems”, Symposium on Designing Interactive Systems, pp. 312–334, 2002.
[TRI 14] TRIMBLE T.E., BISHOP R., MORGAN J.F. et al., Human factors evaluation of level 2 and level 3 automated driving concepts: Past research, state of automation technology, and emerging system concepts, Report no. DOT HS 812 043, National Highway Traffic Safety Administration, Washington, United States, 2014.
[VIC 02] VICENTE K.J., “Ecological interface design: Progress and challenges”, Human Factors, vol. 44, no. 1, pp. 62–78, 2002.
[VYG 12] VYGOTSKY L.S., Thought and Language, MIT Press, Cambridge, United States, 2012.

PART 3

System Reliability


5 Exteroceptive Fault-tolerant Control for Autonomous and Safe Driving

5.1. Introduction

In recent decades, the pace at which technological advances become available has sharply accelerated. In the last 30 years, the personal computer, the Internet, mobile networks and many other innovations have transformed human lives. Beyond science fiction-like visions, we have searched for substitutes for repetitive, physically challenging tasks, replacing them with machines or robots reaching beyond human capabilities, or with computer programs under the supervision of human beings. Among these machines, the automobile has revolutionized the daily transport of millions of people, and the number of cars has grown steadily since the democratization of the vehicle. This explosion in the number of cars has given rise to many problems, such as traffic jams, air pollution due to greenhouse gases and soil pollution associated with liquid and solid discharges (motor oils and heavy metals).

Accidents remain one of the biggest road-related problems. The 2015 annual report of the French road safety observatory (Observatoire national interministériel de la sécurité routière – ONISR) [ONS 16] revealed that the number of deaths related to road traffic accidents had increased by 1.7% in comparison with the previous year, producing a total of 3,616 fatalities. Even though this number is lower than in the 1980s, it is still unacceptable. At the global scale, more than 1.25 million people die in road accidents every year. The question then concerns the main causes of these accidents and invites us to identify the obstacles we should tackle in order to reduce the number of road accidents. In fact, the technical analysis of accidents [ONS 16] revealed that the driver still represents the main cause of accidents: speed is involved in 32% of accidents

Chapter written by Mohamed Riad BOUKHARI, Ahmed CHAIBET, Moussa BOUKHNIFER and Sébastien GLASER.

154

Automation Challenges of Socio-technical Systems

Speed is a factor (main or secondary) in 32% of accidents, driving under the influence of alcohol causes 21% of accidents and illegal drugs cause 9% of accidents. Lack of attention, dangerous overtaking, sleepiness, lane changing and sickness accounted for 31% of the causes of accidents. Vehicle factors accounted for only 1% of accidents.

Identified causes in a fatal accident – Percentage
Speed – 32%
Alcohol – 21%
Priority not given – 13%
Other causes – 12%
Illegal drugs – 9%
Unknown cause – 9%
Lack of attention – 7%
Dangerous overtaking – 4%
Sickness – 3%
Sleepiness or fatigue – 3%
Contraflow driving – 2%
Lane changing – 2%
Obstacles – 2%
Vehicle factors – 1%
Telephone use – 1%
No safe distance from previous vehicle – 1%
Total – 122%

Table 5.1. Main causes of accidents in France in 2015 (source: ONISR) – the total represents 122% since accidents with multiple causes were counted under each cause

The causes of accidents were not the same for all age groups. In fact, novices and seniors mostly made mistakes related to not giving priority. On the other hand, alcohol and speed were the causes of the majority of accidents among younger people. The rates of accident causality per age group are shown on the histogram in Figure 5.1.


Figure 5.1. Statistics on causes of accidents by age group (source: ONISR). For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip

An accident is rarely triggered by a single cause: it is actually a conjunction of several causes, of which speed is often the aggravating factor. For example, driving under the influence of illegal drugs may induce high speed and lack of attention, followed by dangerous overtaking. That being said, the human factor was still the main cause of the majority of road traffic accidents in France in 2015. In fact, humans have to perform a variety of actions while driving: they have to perceive the environment, analyze and understand it, adopt and outline a driving strategy and carry it out through actuators. Tiredness, distraction, miscalculation, unconsciousness, etc. may provoke mistakes in each of these actions and eventually lead to accidents. Beyond road safety, fuel consumption is also a concern for the human driver: Barth and Boriboonsomsin [BAR 09] showed that adopting an eco-driving system could help save between 10% and 20% of fuel. This economic process could also help reduce the emission of carbon dioxide and nitrogen oxides into the atmosphere and thereby reduce atmospheric pollution [BOU 10, HAJ 10]. In order to respond to these challenges, researchers and engineers have introduced various advanced driver-assistance systems (ADAS). This may explain the design of active safety systems, such as the collision avoidance system [HAL 01] and the lane departure warning system [JUN 04], in order to avoid accidents, or intelligent speed monitoring systems, with the aim of optimizing fuel consumption [BAR 08, AKH 16] or reducing risk on road curves [GLA 10, GAL 13]. However, active safety systems struggle to achieve the expected results, because they are limited to specific cases, and the interaction of these systems with the driver is still problematic and ambiguous. In fact, several recent incidents have been caused


by a misunderstanding between human and machine, or at least by the latter's limitations. Such technical limitations basically concern navigation systems, which may display non-existent sections of road: this is what happened to an American driver who, using the Waze app, found himself crossing a frozen lake before the ice broke [AFP 18].

Figure 5.2. Vehicle automation levels (source: Argus)

Automated driving seems to provide a global solution for the problems described above. Ideally, for an automated vehicle, every task involving perception, localization, planning and control should be executed by sensors, algorithms and actuators [JO 14], only allowing the driver to choose the destination to be reached. Based on this principle, several autonomous vehicle projects worldwide are going through the research phase, such as Google's GoogleCar/Waymo. Others are interested in specific operation areas, such as Renault's or General Motors' prototypes. The goal is to achieve a high level of automation (i.e. level 4 or 5 according to the definition by SAE International [SAE 16], as shown in Figure 5.2), considering the driver as a fallback solution during failure scenarios or when operating within specific environments (level 4), or in every possible situation (level 5), by providing autonomous monitoring of the vehicle's environment, of the performance of its sensors and algorithms, and of the action to be taken in order to keep the vehicle in a safe state. The task of automation seems difficult to accomplish, given the wide range of road scenarios to consider, and the vehicle should be able to react to any damage, especially to failures in its own means of perception. Diagnosis is therefore an essential step for detecting and isolating sensor faults as soon as these


appear, in order to enable stable and safe driving, enhanced by fault-tolerant control. In this chapter, we will introduce a fault-tolerant control architecture based on voting algorithms so as to diagnose the faults of exteroceptive sensors. This chapter is organized as follows: a formulation of the problem is illustrated in section 5.2; section 5.3 is devoted to the architecture of fault-tolerant control; voting algorithms are detailed in section 5.4; section 5.5 shows the results of digital simulation and a conclusion is reached in section 5.6.

5.2. Formulation of the problem

Given the diversity of situations and problems that a vehicle may encounter, this chapter will focus on the problem of regulating the inter-vehicle distance. The real difficulty in measuring the inter-vehicle distance lies in the inability of any single technology to operate properly in all working modes. The use of several sensor technologies helps to overcome each sensor's limitations by merging data. However, data fusion may be corrupted if the sensors have hardware or software faults, which may lead to erroneous measurements. Therefore, it is necessary to manage the reliability of these measurements and to ensure fault tolerance, in order to guarantee safe operation in autonomous mode [MAR 09].

This topic was partly addressed in [MAR 09], where the authors suggested a simple switching strategy to choose between three control loops, each consisting of a given sensor (radar, lidar or camera), a state observer and a controller. The switching mechanism chooses the control loop that minimizes a quadratic criterion, from those that remain within a robust, fault-free invariant set. However, this method has the disadvantage of being restrictive in the case of longitudinal tracking, since it does not take into account the orientation of the front vehicle. In [REA 15, REA 16], a multi-sensor fusion architecture was introduced, minimizing the influence of sensor faults. This technique was based on a fusion structure controlled by an SVM (Support Vector Machine) module for detecting faults. The fault detection block compared the divergences between the outputs of:

– two local fusion structures, each using a Velodyne sensor and a vision sensor;

– a master fusion structure.

The Velodyne sensor is considered as a reference sensor in both local fusion structures, and the data fusion is based on the weights generated by the SVM block. The purpose of this technique is to ensure that, were an obstacle to be detected, it would be possible to measure the distance between the obstacle and the autonomous vehicle. The disadvantage of this method lies in the fact that it employs cascade detectors, which may increase calculation time.


In this context, the purpose of our contribution is to suggest an approach based on voting algorithms so as to reduce the impact of sensor faults and to ensure safe autonomous driving. Voting has been widely used to approach fault tolerance problems and has been broadly applied in the field of engineering, thanks to the ease of its design and implementation, in areas such as graphology [ONA 16], medical monitoring [GAL 15], electric vehicles [BOU 13, RAI 16, SHA 17] and aeronautics [KAS 14].

5.3. Fault-tolerant control architecture

In order to ensure its reliability, the autonomous vehicle is equipped with three sensor technologies that may be used for longitudinal control: a long-range radar, a lidar and a Mobileye smart camera, which calculates the distance. Each of these sensors has specific limitations and does not work at its best under certain circumstances. For example, as the lidar is based on sending and receiving light waves, it can produce aberrant detections in bad weather, such as in the presence of fog, heavy rain or dust particles. In addition, the lidar cannot detect color or contrast and does not guarantee a good identification of the nature of the objects. The radar, on the other hand, may not detect the vehicle ahead, due to the difference in speed between the leading and following vehicles, and may also be unable to detect motorcyclists or other vehicles ahead when they are outside the center of the lane. Finally, the smart camera may not work thoroughly if the lens is partially or completely obstructed, and it does not guarantee a 100% rate of detection of the vehicle ahead; weather conditions greatly influence its recognition and response capabilities.

As mentioned earlier, each sensor has limitations that mainly depend on the technology employed and its operating area, and each sensor may become faulty due to a software or hardware failure, leading to erroneous distance measurements. In order to handle these malfunctions, fault-tolerant control should manage all possible scenarios and keep autonomous driving stable and safe, as shown in Figure 5.3. Voting algorithms are intended to diagnose the faulty sensor by making a comparison between healthy signals, with the aim of ensuring stable and safe driving.


Figure 5.3. Fault-tolerant control architecture, based on voting algorithms

The monitoring speed is set by the reference speed generating block. The latter is deduced from the safety distance relation [TOU 08]:

d_s = h V + d_0   [5.1]

Then, from equation [5.1], we can deduce the reference speed as follows:

V_ref = (d_inter − d_0) / h   [5.2]

where d_0 represents the distance to be respected at a stop, d_inter the measured inter-vehicle distance and h the inter-vehicle time (usually between one and three seconds).

5.3.1. Vehicle dynamics modeling

Vehicle dynamics is known for being very complex and strongly interconnected. In order to illustrate the dynamics governing the behavior of a vehicle, it is necessary to resort to a number of simplifying assumptions, so as to reduce the model's complexity [HED 97, BOU 17b]:

– only one fault per sensor will be considered;

– the road will be supposed to be flat (with no slopes, no inclination);

– the lateral dynamics of the vehicle will not be taken into consideration;

– rolling, yawing and pitching movements will not be taken into account.


Considering the assumptions made, the longitudinal dynamics of the vehicle can be expressed as follows:

M V̇(t) = Σ_{i=1..4} F_xi(t) − F_a(t)
J̄_f ω̇_i(t) = T_m(t) − T_bi(t) − r F_xi(t) − f_ri(t), i = 1, 2
J̄_r ω̇_i(t) = −T_bi(t) − r F_xi(t) − f_ri(t), i = 3, 4   [5.3]

When adopting a bicycle model [HED 97, BOU 17a], the front and rear quantities are lumped as follows:

F_xf = F_x1 + F_x2, F_xr = F_x3 + F_x4, T_bf = T_b1 + T_b2, T_br = T_b3 + T_b4,
f_rf = f_r1 + f_r2, f_rr = f_r3 + f_r4, ω_f = ω_1 = ω_2, ω_r = ω_3 = ω_4, and J̄ = J̄_f + J̄_r.

We obtain the following equation:

J̄_f ω̇_f(t) = T_m(t) − T_bf(t) − r F_xf(t) − f_rf(t)
J̄_r ω̇_r(t) = −T_br(t) − r F_xr(t) − f_rr(t)   [5.4]

By substituting F_xf and F_xr in equation [5.3], we get:

M V̇(t) = (1/r) [T_m(t) − T_bf(t) − T_br(t) − f_rf(t) − f_rr(t)] − F_a(t) − (J̄/r) ω̇(t)   [5.5]

Figure 5.4. Longitudinal dynamics of a vehicle

Notation – Definition – Unit
M – Mass of the vehicle – kg
V – Vehicle speed – m/s
F_xi – Wheel–ground contact force of the i-th wheel – N
F_a – Aerodynamic force – N
J̄_f – Overall inertia of the front axle – kg.m²
J_fi – Inertia of the i-th front wheel – kg.m²
ω̇_i – Angular acceleration of the i-th wheel – rad/s²
T_m – Engine coupler – N.m
r – Wheel radius – m
f_ri – Rolling friction of the i-th wheel – N.m
T_bi – Braking torque of the i-th wheel – N.m
C_rf / C_rr – Rolling resistance: front vs. rear axle – –
J_ri – Inertia of the i-th rear wheel – kg.m²
J̄_r – Overall inertia of the rear axle – kg.m²

Table 5.2. Vehicle parameters

Also, let us assume that the wheels do not slip throughout the maneuver. This assumption can be interpreted by the following equation:

λ = (r ω − V) / max(r ω, V) = 0

which leads to V = r ω, and then ω̇ = V̇ / r. Replacing ω̇ in equation [5.5], we obtain:

(M + J̄/r²) V̇(t) = (1/r) [T_m(t) − T_bf(t) − T_br(t) − f_rf(t) − f_rr(t)] − F_a(t)   [5.6]

We then adopt the following notation:

T_b(t) = T_bf(t) + T_br(t), f_r(t) = f_rf(t) + f_rr(t)

Equation [5.6] becomes:

M̄ V̇(t) = (1/r) T(t) − (1/r) f_r(t) − F_a(t)   [5.7]

where M̄ = (M + J̄/r²), T(t) = T_m(t) − T_b(t) and F_a(t) = c_a V²(t); c_a as well as c_r are aerodynamic coefficients, and T represents the braking and accelerating torque. In order to overcome the delay problem related to the dynamics of braking and accelerating, we will assume that the accelerating/braking pair is governed by a first-order equation, written as follows:

Ṫ(t) = (1/τ) (−T(t) + u(t))   [5.8]

Finally, the longitudinal dynamics of the vehicle is expressed by the following relation [PHA 97]:

[V̇(t); Ṫ(t)] = [−(c_a V(t))/M̄, 1/(r M̄); 0, −1/τ] [V(t); T(t)] + [0; 1/τ] u(t) + [−f_r(t)/(r M̄); 0]   [5.9]
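To make this model chain concrete, the sketch below couples the reference speed law [5.2] with an Euler integration of the lumped dynamics [5.7] and the actuator equation [5.8]. All numerical values (equivalent mass, wheel radius, time constant, aerodynamic coefficient, proportional gain) are illustrative assumptions, not the parameters identified for the experimental vehicle.

```python
# Illustrative parameters (assumed, not the chapter's experimental values)
M_bar = 1600.0    # equivalent mass M + J/r^2 [kg]
r     = 0.3       # wheel radius [m]
tau   = 0.2       # actuator time constant of equation [5.8] [s]
c_a   = 0.4       # aerodynamic coefficient
f_r   = 30.0      # lumped rolling-friction torque [N.m]
h, d0 = 1.5, 5.0  # inter-vehicle time [s] and standstill distance [m]

def v_ref(d_inter):
    """Reference speed deduced from the safe-distance relation [5.2]."""
    return max((d_inter - d0) / h, 0.0)

def step(V, T, u, dt=0.01):
    """One Euler step of the longitudinal dynamics [5.7]-[5.8]."""
    V_dot = (T - f_r) / (r * M_bar) - c_a * V**2 / M_bar  # eq. [5.7]
    T_dot = (-T + u) / tau                                # eq. [5.8]
    return V + dt * V_dot, T + dt * T_dot

# Simple proportional control toward the reference speed
V, T = 20.0, 0.0
for _ in range(1000):
    vr = v_ref(d_inter=40.0)     # e.g. 40 m of measured headway
    u = 2000.0 * (vr - V)        # assumed proportional gain
    V, T = step(V, T, u)
print(f"converged speed: {V:.2f} m/s, target: {v_ref(40.0):.2f} m/s")
```

With these assumed values, the controlled speed settles just below the reference, illustrating the steady-state offset of a purely proportional loop.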

5.4. Voting algorithms

As shown in Figure 5.3, a voting algorithm is able to choose a safe distance measure only by using sensor measurement signals. This distance is obtained by comparing the signals produced by the sensor/processing units, after which the voting logic chooses the most reliable signal. The voting techniques used for this task are: maximum likelihood, weighted averages and history-based weighted average.

5.4.1. Maximum likelihood voting (MLV)

The first work on MLV was published in [LEU 95]. The basic idea of this approach is to choose the input signal ensuring the highest probability in terms of reliability. Thus, a dynamic reliability f_i, with i = 1, 2, …, N, where N represents the number of sensors, is dedicated to each sensor, and the conditional probabilities Δ_i are calculated based on the equation below [KIM 96, RAI 16]:

Δ_i = f_i if |x_i − x_0| ≤ ε; Δ_i = 1 − f_i if not   [5.10]

In our case study, we use three sensors (intelligent camera, radar and lidar), so N = 3. ε is a fixed real number corresponding to the threshold (fixed at 10% of the reference range in order to apprehend the effect of noise on the voting algorithm), and x_0 is the reference input.


The probability of each input sensor is calculated by the following formula:

χ_i = Δ_i / Σ_{j=1..N} Δ_j   [5.11]

Figure 5.5. Maximum likelihood algorithm

The output signal is then chosen in such a way that it satisfies the maximum likelihood:

y = x_k, with k = arg min_i (1 − χ_i)   [5.12]
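A minimal sketch of the maximum likelihood voter, following equations [5.10]–[5.12] as reconstructed above; the reliability values, threshold and sensor readings are illustrative assumptions.

```python
def mlv(readings, x_ref, reliabilities, eps):
    """Maximum likelihood voter (equations [5.10]-[5.12]).

    readings: sensor measurements x_i (camera, radar, lidar)
    x_ref: reference input x_0
    reliabilities: dynamic reliabilities f_i of each sensor
    eps: agreement threshold (10% of the reference range in the chapter)
    """
    # Conditional probabilities Delta_i, equation [5.10]
    delta = [f if abs(x - x_ref) <= eps else 1.0 - f
             for x, f in zip(readings, reliabilities)]
    # Normalized likelihood chi_i of each input, equation [5.11]
    total = sum(delta)
    chi = [d / total for d in delta]
    # Output: the input with the highest likelihood, equation [5.12]
    k = max(range(len(readings)), key=lambda i: chi[i])
    return readings[k], chi

# Example: the last (lidar) reading is faulty and gets outvoted
distance, probs = mlv([25.1, 24.8, 3.0], x_ref=25.0,
                      reliabilities=[0.9, 0.85, 0.9], eps=2.5)
print(distance, probs)
```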

5.4.2. Weighted averages (WA)

The weighted averages method is based on weight coefficients attributed to each sensor: the output of the algorithm is a weighted average of the measured signals, and the output signal is thus determined as a continuous function of the redundant inputs. This technique aims to reduce transitory effects, in such a way that no switching occurs, and the faulty sensor is isolated by attributing a zero weight to it (or a minimal weight, in the practical case) [BRO 75]. Therefore, considering input signals x_i, with i = 1, 2, …, N, where N is the number of sensors, the output signal is determined by the following equation:

y = Σ_{i=1..N} w_i x_i   [5.13]

164

Automation Challenges of Socio-technical Systems

where w_i, with i = 1, 2, …, N, are the weight coefficients. Each coefficient corresponds to the sum of the inverse distances of one input signal to all the others, so that a signal close to the others receives a high weight.

Figure 5.6. Weighted averages algorithm

In addition, the weights are obtained as follows:

w_i = (Σ_{j≠i} s_ij) / (Σ_{k=1..N} Σ_{j≠k} s_kj), i = 1, …, N   [5.14]

where s_ij are the inverse of the distances between the input signals, which are represented by:

s_ij = 10⁶ if x_i − x_j = 0; s_ij = 1/(x_i − x_j)² if not   [5.15]
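A matching sketch of the weighted averages voter of equations [5.13]–[5.15]; the large constant standing in for coincident inputs, like the readings used, is an assumption for illustration.

```python
def weighted_average(readings):
    """Weighted averages voter (equations [5.13]-[5.15]).

    Each input is weighted by its inverse squared distances to the other
    inputs, so mutually close signals dominate and an outlier (faulty
    sensor) receives a near-zero weight, without any discrete switching.
    """
    big = 1e6  # stands in for the large constant used when inputs coincide
    n = len(readings)
    s = [[big if readings[i] == readings[j]
          else 1.0 / (readings[i] - readings[j]) ** 2
          for j in range(n) if j != i] for i in range(n)]
    raw = [sum(row) for row in s]                       # sums of inverse distances
    w = [v / sum(raw) for v in raw]                     # normalization, eq. [5.14]
    return sum(wi * xi for wi, xi in zip(w, readings))  # eq. [5.13]

print(weighted_average([25.1, 24.8, 3.0]))  # close to 25, outlier suppressed
```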

5.4.3. History-based weighted average (HBWA)

The underlying idea of this technique is to use a history that informs about the state of the sensors. Indeed, over time, a reliability index is accumulated for each sensor, so that the sensor chosen at the output of the algorithm has the highest reliability. Latif-Shabgahi, Bass and Bennett [LAT 01] presented two philosophies for this technique. The first one, called agreement state indicator weighted average, employs a dynamic weighting function based on the indicator's status; the weights are proportionally related to the indicator history. The second method, called module elimination weighted average, calculates an average of the indicator's history, and then every input with a historic indicator below this average is considered unreliable; therefore, its weight is set to zero. The HBWA technique is based on the algorithm introduced below:


1) For input signals x_i and x_j, the distance coefficients are computed:

d_ij = |x_i − x_j|   [5.16]

where i, j = 1, 2, …, N and i ≠ j.

2) Using adjustable threshold parameters p and q, the agreement level a_ij for each input is given by:

a_ij = 1 if d_ij ≤ p; a_ij = 1 − (d_ij − p)/(q − p) if p < d_ij ≤ q; a_ij = 0 if not   [5.17]

3) The level of consensus of each input is then accumulated over the history of the votes, and the input with the highest historic reliability is selected as the output.

[…]

An equilibrium state is stable if, for every ε > 0, there exists δ > 0 so that any trajectory starting within δ of that state remains within ε of it, ∀t > 0.

8.2.3.1. Remarks

– An autonomous system is a system whose laws are invariant in time.

– A Lipschitz function is a function whose rate of evolution is smaller than a constant value, known as the Lipschitz constant [KHA 96].

– The value δ delineates the zone of attraction of the stable equilibrium point.

– If the zone of attraction of an equilibrium point is zero, then this equilibrium point is unstable.

– Exponential stability characterizes the speed of convergence of the system and we will not focus on it here.

8.2.3.2. Discussion

A system can only be stable if it has stable equilibrium points and if its rate of evolution is limited. In the absence of constraints (control or set point), a stable system manages to absorb the deviations from its stable equilibrium point and may even manage to correct them if stability is asymptotic. A control law only translates the stable equilibrium point of the (stable) system to a value desired by the operator controlling the system. Therefore, a stable system will be able to contain, or even correct, the deviations suffered during its operation and stay at the desired set value. This compensation capacity is directly related to the zone of attraction of the equilibrium point of the system and, consequently, to its dynamics. In order to be able to gauge the stability of a system, Lyapunov suggested an algebraic evaluation criterion [LYA 92]. It is introduced below for reference only.

8.2.4. Lyapunov's theorem

If there is a so-called Lyapunov function V(x): Rⁿ → R, such that:

– ∃ V1, V2: R+ → R+ non-decreasing, so that V1(‖x‖) < V(x) < V2(‖x‖);

– ∃ V3: R+ → R+ non-decreasing, with V3(s) > 0, ∀s > 0, so that V̇(x(t)) < −V3(‖x(t)‖);

then the system is asymptotically stable.

8.2.4.1. Discussion

The interest of this theorem is to be able to rule on the stability of a system as soon as its evolution is compatible with a specific "template". The second condition determines whether the studied system possesses a stabilizing dynamic, which is also invariant in time. Therefore, a stability criterion should be defined by a quantifiable and invariant variation compensation potential. Thus, a resilient system is naturally stable.

8.3. Stability in the human context

Here, we will offer a definition of stability within a purely human context.

8.3.1. Definition of human stability

Human stability is the ability of the human to maintain certain mental and physiological states in a situation without any order or task to be performed, as long as the external disturbances to which they are subjected do not exceed a given threshold. These states are called mental and physiological states of equilibrium, respectively. Each state is associated with a zone of attraction.

8.3.1.1. Hypothesis

It is generally assumed that a healthy human being has at least one state of equilibrium for each aspect: mental and physiological.


Compared to Lyapunov's definition of stability, here human stability concerns the capacity of the human to fulfill a certain assignment following their capacity to contain the effects of external disturbances. The assignment only translates the equilibrium point, without calling stability into question, which is an intrinsic notion.

8.3.1.2. Remarks

There are other definitions of this type of stability [RIC 10]. However, it is wise to put forward a definition that differentiates between the guideline or order given to the human and their capacity to accomplish it, and the human potential to resist or to compensate for an external disturbance element. The question now is how to determine the stability or instability of human equilibrium points. Moreover, it is known that the characteristics of humans are not invariant over time, particularly during periods of intense solicitation [CAB 92], whether throughout the duration of an assignment or in the medium term. It is therefore utopian to speak of an overall human stability. In fact, the only possibility of verifying that equilibrium points are stable would be to perform regular medical examinations and prior psychological examinations. The validity of these examinations could only be temporary, as the duration of validity depends heavily on the workload, as well as on experienced stressful situations. Sometimes, certain traumas will permanently alter the physiological and psychological equilibrium of a person. Therefore, it might be necessary to limit the time window considered for assessing the invariance of the characteristics of the human under scrutiny to the framework of a single assignment. Otherwise, a stable equilibrium state might turn into an unstable state, if the duration of the assignment exceeds a reasonable length of time [CAB 92]. Therefore, a more realistic approach to the problem would be to suggest a notion of momentary stability. Each person would then be associated with a potential of action and reaction (PAR) depending on their intrinsic qualities (strength, resistance, etc.), their training and mastery of the assignment performed and, finally, their momentary emotional state. The PAR can be interpreted as a quantification of resilience, but is not limited to adjustment defects as such. It tends to decrease depending on the workload carried out, the assignment's environment and its duration. It decreases very quickly during the management of extraordinary situations (crises, unforeseen events, increased responsibility, etc.).


8.3.2. Definition of the potential of action and reaction

Potential of action and reaction is a quantitative parameter that describes a person's ability to respond to anticipated or unexpected external demands. It is a contextualized parameter related to a specific assignment. The PAR has a direct influence on the attraction of a person's equilibrium points. The higher the PAR, the better the person will be able to handle the tasks to be performed, as well as any expected or unexpected situations, without the risk of making errors. On the contrary, if the PAR decreases to a certain minimum, the zone of attraction of the equilibrium points will be drastically reduced, which will de facto turn stable equilibrium points into unstable ones. In that case, the human might lose their capacity for resilience.

8.4. Stabilizability

Not all existing systems are inherently stable. Nevertheless, some systems are used with particular operating constraints: the use of a so-called stabilizing control. This stabilizing control law makes it possible to maintain the evolutionary trajectories of the system parameters around unstable equilibrium points, without an inherent attraction zone. This makes the new system, including the original system and the controller with the stabilizing control, factually stable. This stabilizing control can only exist in the case of a closed-loop system, within a regulation structure (Figure 8.1).


Figure 8.1. Regulation loop: closed-loop control system

Clearly, the controller must be sufficiently reliable so as to ensure that an unstable system does not diverge during the execution of the assignment. In addition, not all systems are necessarily stabilizable. In order to keep a system around its unstable equilibrium points, it is necessary for the system to be controllable, that is, it must be possible to reach any state of the system, from any other original state, using an appropriate control law, and within a specific time frame [BRO 83]. This controllability property is a structural property of the system that is defined depending on the physical limitations of such a system in relation to the controller.
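A minimal numerical illustration of the regulation loop of Figure 8.1: an assumed toy first-order system that is unstable in open loop is kept around a set point by a stabilizing feedback control. The system, gains and set point are invented for illustration and are not taken from this chapter.

```python
def simulate(k, ref=1.0, x0=5.0, dt=0.01, steps=2000):
    """Closed-loop regulation of the unstable toy system x' = x + u.

    In open loop (u = 0) the state diverges; the stabilizing control
    u = -k (x - ref) moves the equilibrium toward the reference and
    makes it attractive whenever k > 1.
    """
    x = x0
    for _ in range(steps):
        u = -k * (x - ref)      # controller acting on the comparator error
        x += dt * (x + u)       # system dynamics
    return x

print(simulate(k=0.5))   # gain too small: still unstable, |x| blows up
print(simulate(k=5.0))   # stabilized near the translated equilibrium
```

Note that even when stabilized, the state settles at the translated equilibrium k·ref/(k − 1) rather than exactly at the reference, echoing the earlier remark that a control law translates the equilibrium point of the system.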


In general, any system, be it technological or human, has limitations; hence local controllability is used, around the equilibrium points, adapted to the goals of an assignment. Going back to the notions of PAR and human stability, stabilizability is interpreted as the orders or information that the operator receives in view of increasing their PAR. This can be an alarm or the announcement of a state of emergency, which mobilizes the human operators and helps them to anticipate an extraordinary situation.

8.5. Stability within the context of HMS

The stability of HMS significantly differs from human stability in that it is mainly related to the execution of a predefined task within the context of the HMS, in the presence of a technological system with its own stability properties and in the presence of ancillary technological elements influencing the stability of the human operator. While the technological system is generally stable or stabilizable, unforeseen situations may occur for which the embedded controller is not enough. In this context, the HMS operator contributes to the stabilization of the technological system in unforeseen situations, whether dangerous or not, in addition to the inherent technological stabilizing control. Thus, the system's resilience is greatly improved [AMA 96].

Figure 8.2. Constituents and characteristic parameters of the HMS in the context of an assignment


The actors and factors influencing the HMS within the context of a specific assignment are shown in Figure 8.2. A first contribution to the stability of the system lies in the design of the technological system and of the communication interface between the operator and the machine, adapting its tools to human behavior [MIL 99]. The ergonomic design of a human–machine interface (HMI) can help to limit the PAR decrease during the assignment by reducing physical, sensory and mental fatigue. The "stabilization" of the human operator can be achieved by adding a supervision module, which acquires HMS information by using on-board sensors for data acquisition regarding the actions and status of the operator, the state and parameters of the technological system and the environmental context. If a dangerous situation is detected, the supervision module triggers an alert. A first discussion on this type of "supervisor" was engaged in [BER 11]. Another stabilizing factor of the HMS would be to include a hierarchical instance that would play the role of a regulator within the regulation loop (Figure 8.3). This body would be made up of one or more human operators who would supervise the HMS in case the need for such supervision emerges. The supervision module would emit an alert concerning the appearance of this situation.


Figure 8.3. Closed-loop human–machine system

8.6. Structure of the HMS in the railway context

8.6.1. General structure

The studied HMS includes the human operator, the HMI, the rail transport system, the supervisor and all sensors and auxiliary computers. The system has three "regulatory closed loops": a closed loop including the technological system and its regulator, a closed loop including the human operator as controller of the technological system and finally a closed loop including the HMS and the higher supervisory authority, such as the control center or the so-called PCC, "Poste de Commande Centralisé" in French. The fourth closed loop of the system is an HMS-internal supervisory loop including the human operator and the supervision module. The expected structure of the system is shown in Figure 8.4.

Figure 8.4. The HMS structure. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip


8.6.2. The supervision module


The supervision module is introduced in Figure 8.5. This module is an essential HMS stabilizing factor due to its role as alert provider. Its operating principle is to compare the sequences of measured data with the sequences obtained by simulation. The BCD block is built following the principle of the benefit–cost–deficit model [VAN 11]. It is used for estimating the amount of uncertainty in the system and for propagating this estimate to the different functional blocks. In order to simulate sequences for comparison, it is necessary to model the HMS.

Figure 8.5. Supervision module

8.6.3. The technological system model

The technological system is a rail transport system. It is modeled by a hybrid automaton, formed by a finite state machine (FSM) describing the transitions between the different modes of operation and their continuous representations. An operating regime of this system is modeled in continuous time by a state space dynamical model:

 x = f ( x, u ) + v   y = g ( x, u ) + w


where x ∈ Rⁿ represents the state of the system, u ∈ Rᵐ represents the control law, y ∈ Rˡ represents the measurable variables, and f and g are, respectively, the state and output functions:

f: Rⁿ × Rᵐ → Rⁿ, g: Rⁿ × Rᵐ → Rˡ

In the context of this study, the variables of interest are the movement speed of the train and the internal operating parameters of the vehicle, which make it possible to determine the corresponding operating speed. In this chapter, we will only consider speed, since the operating regime does not change in the scenario under study.
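As an illustration of this hybrid structure, the sketch below pairs a finite state machine over operating regimes with a continuous speed model, one dynamics per regime. The regimes, dynamics, gains and transition rules are invented for illustration and are not the model identified for the platform.

```python
# Minimal hybrid-automaton sketch: an FSM over assumed operating regimes,
# each with its own continuous dynamics for the train speed v.
REGIMES = {
    "traction": lambda v, u: 0.8 * u - 0.02 * v,   # powered acceleration
    "coasting": lambda v, u: -0.02 * v,            # free run, drag only
    "braking":  lambda v, u: -0.5 - 0.02 * v,      # constant brake effort
}

def step(regime, v, u, dt=0.1):
    """Continuous evolution in the current regime, then the discrete
    FSM transitions triggered by the control input u."""
    v = max(v + dt * REGIMES[regime](v, u), 0.0)
    if u > 0.0:
        regime = "traction"
    elif regime == "traction":                 # u <= 0: release traction
        regime = "coasting"
    elif regime == "coasting" and u < 0.0:
        regime = "braking"
    return regime, v

regime, v = "coasting", 0.0
for t in range(100):
    u = 1.0 if t < 60 else -1.0               # accelerate for 6 s, then brake
    regime, v = step(regime, v, u)
print(regime, round(v, 2))                    # ends in the braking regime
```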

8.6.4. The human operator model

The human operator model was introduced in [BER 11] and is shown in Figure 8.6.

Figure 8.6. Human operator model

It is made up of two non-deterministic finite-state automata (for instance, hidden Markov chains) [CAR 12]. The emotional model is a first FSM describing the considered set of emotional states, whereas the behavioral model is a second FSM describing the various possible actions of the operator. Figure 8.7 and Table 8.1 show a modeling example of this type.


Figure 8.7. Modeling example

Probability of transition to the new state, given as sleepy/balanced/nervous:

Current state – 1 Deceleration – 2 Speed/lane monitoring – 3 Other states
1 Deceleration – 0.9/0.25/0.4 – 0.05/0.7/0.3 – 0.05/0.05/0.3
2 Speed/lane monitoring – 0.05/0.7/0.4 – 0.9/0.25/0.3 – 0.05/0.05/0.3
3 Other states – 0.01/0.45/0.3 – 0.01/0.45/0.3 – 0.98/0.1/0.4

Table 8.1. Probability of transitions between states
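Table 8.1 defines, for each emotional state, a row-stochastic transition matrix over the driver's actions. The short sketch below samples trajectories of the behavioral FSM from these probabilities; the state names are shorthand for the table's rows and columns.

```python
import random

STATES = ["deceleration", "monitoring", "other"]

# Transition probabilities from Table 8.1, one matrix per emotional state
P = {
    "sleepy":   [[0.90, 0.05, 0.05], [0.05, 0.90, 0.05], [0.01, 0.01, 0.98]],
    "balanced": [[0.25, 0.70, 0.05], [0.70, 0.25, 0.05], [0.45, 0.45, 0.10]],
    "nervous":  [[0.40, 0.30, 0.30], [0.40, 0.30, 0.30], [0.30, 0.30, 0.40]],
}

def next_state(current, emotion, rng=random):
    """Draw the next behavioral state given the current one and the emotion."""
    row = P[emotion][STATES.index(current)]
    return rng.choices(STATES, weights=row)[0]

# A sleepy driver tends to stay locked in the same activity
s, trace = "monitoring", []
for _ in range(10):
    s = next_state(s, "sleepy")
    trace.append(s)
print(trace)
```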

8.7. Illustrative example

8.7.1. Experimental protocol

The experiment was carried out on the COR&GEST small-scale railway simulation platform (Figure 8.8(a)). The platform is made up of a reduced railway model, with a cabin for the rail driver and a supervisory position. The driving interface (HMI) is shown in Figure 8.8(b).


Figure 8.8. The COR&GEST platform: (a) small-scale rail model; (b) driving interface

For the purposes of the tests, the driving position was equipped with Tobii eye sensors and the FaceReader facial recognition system from Noldus. The eye-tracking system made it possible to follow the direction of the driver's gaze on the projection of the interface in real time (red dots in Figure 8.9(a)). The facial recognition system made it possible to estimate the similarity between the driver's facial expression and six classical facial expressions: the driver could be qualified as neutral, or as happy, angry, frightened, disgusted, sad or surprised (Figure 8.9(b)). Each hypothesis was associated with a degree of likelihood. The experiment was conducted on a group of non-expert subjects who drove the vehicle on the platform in a scenario with unforeseen faults: door lock and brake failure. The constraints of the scenario were to respect speed signaling (speed limitation by sector) and mandatory stops at stations. To avoid a decrease in the PAR, which might distort the results, and considering the degree of knowledge of the subjects concerning rail driving, the scenario lasted 15 minutes.

Figure 8.9. Example of sensor outputs: (a) eye-tracker output; (b) facial recognition output. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip


Figure 8.10. Evolution of the train speed compared to the instructions given. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip


Figure 8.11. Occurrence of faults

Figure 8.12. Driver’s estimated emotions


Figure 8.13(a). Horizontal movement of the gaze


Figure 8.13(b). Horizontal movement of the gaze

8.7.2. Experimental results

Figure 8.10 shows the evolution of train speed during the driving scenario for subject nos. 3 and 4 and the imposed speed limitations. Let us observe that speed limitations appear as signals (traffic signs) on the driving interface. Subject no. 3 did not experience any failures, while subject no. 4 was confronted with two door-related faults and a brief brake anomaly (Figure 8.11).


Subject no. 3 is presented here as a reference, while the study focuses on the follow-up of subject no. 4. Figure 8.12 shows the output of the facial recognition system; a 7th state is added to the six classical states, corresponding to the failure to recognize the facial expression. Figure 8.13 shows the horizontal evolution of the gaze of subject no. 4 over four short periods of time during the scenario: Figure 8.13 (i) corresponds to the beginning of the scenario, and Figure 8.13 (iv) corresponds to the end of the scenario. On the vertical axis, the direction of gaze is given by an X coordinate expressed in pixels on the screen. The null value corresponds to a measurement failure, caused by an obstruction, a sudden movement of the head or simply a very fast movement of the eyes.

8.7.3. Remarks and discussion

When comparing the driving performances between subjects 3 and 4 (Figure 8.10), their superficial knowledge of the railway driving system should be taken into account. Due to this relative and contextual cognitive deficiency, their PARs significantly dropped during the scenario. Subject no. 3 managed to follow the speed instruction fairly satisfactorily over most of the course. The only unsatisfactory phases were at times 15:34 and 15:36, at the beginning of the course. Their performances remained relatively constant throughout the scenario. Subject no. 4 showed a different behavior. The subject had a tendency to overreact during acceleration (17:04, 17:05 and 17:08) and braking (17:06 and 17:09). It is also clear that this trend was exacerbated halfway through the scenario. There was clearly a change in the behavior of subject no. 4 after experiencing failure situations. Drawing a parallel with the fault occurrences (Figure 8.11, at times 17:06, 17:09 and 17:11), it appears that the door defects and the associated alarm caused an amplification of the natural tendency of the subject to overreact. The most serious failure (brake failure) had a smaller effect on driving. Perhaps the cognitive deficiency of the subject did not allow them to understand the consequences of this fault. On the other hand, the long duration of the faults and the alarm disturbed them. At the first occurrence of failure, at 17:06, the subject was surprised (Figure 8.12). Results present a higher frequency of the subject expressing fear, disgust and anger, all three proof of a negative state of mind of the driver, who was assessing their own performance at that moment. On the other hand, it is clear that the measurement of the facial expression is not very reliable. States change too fast


to expect a real-time assessment of the driver's emotional state. A certain logic appears if the measurements are taken over a larger time window, which allows us, if necessary, to confirm the "negative" emotional state of the subject.

Eye-tracking measurements, on the other hand, offered better results (Figure 8.13). The subject started the scenario with a calm state of mind, their eye movements being quite contained and slow, as shown in Figure 8.13(a) and (b). Following the fault that caught their visual attention (Figure 8.13 (ii), 17:06), the gaze of the subject began to be periodically focused on the speed indicator, and sometimes on the alert indicator. Then, from 17:08, we can see a disruption, due to the overrunning and stopping of the engine, which lasted until the subject was able to calm down. Another disruption occurred when the second fault appeared, which attracted the driver's attention (Figure 8.13 (iii)) and altered their state of mind, since a difference in the speed of the evolutions between Figures 8.13 (iii) and (iv) can be identified.

The results of the experiments indicate that the quantitative measurements on the driver are more accurate than the qualitative measures. However, the analysis of the driver's behavior can help us detect the occurrence of an event that is not directly observable. From this perspective, the use of the model based on FSMs to detect certain types of behavior is justified, since it gives us the most reliable indication. In addition to the factors that intuitively influence the system's stability, such as the duration of the assignment or the occurrence of unforeseen events, less intuitive factors emerged, including the drivers' self-assessment concerning their own performance, which is a psychological criterion. The resulting sense of disappointment contributed to significantly reducing the driver's PAR. The comparison between the behaviors of subject nos. 3 and 4 shows that this factor is not negligible even during brief assignments.

8.8. Conclusion

In this chapter, definitions of stability and stabilizability were proposed, as well as suggestions for the design of stable and, therefore, resilient human–machine systems. This design is based on a closed-loop structure. A parameter for assessing such stability was presented: the driver’s PAR (Potential for Action and Reaction), which must be gauged during the assignment using a specific supervision module. Stability indicators were also studied; the eye sensor was found to be superior to the facial recognition module. Finally, psychological factors, such as negative self-assessment, may lead to the instability of the human–machine system.


In the short term, an experimental database of a group of professional drivers during 2-hour experiments will be studied. The results of this study will validate the concept of a supervision module, which, so far, has not been tested under realistic conditions.

8.9. References

[AMA 96] AMALBERTI R., La conduite des systèmes à risque, PUF, Paris, 1996.

[BER 11] BERDJAG D., CAULIER P., VANDERHAEGEN F., "New challenges for the multi-criteria and multi-objective diagnosis of human-machine systems", IFAC Workshop on Human-Machine Systems, Berlin, Germany, October 2011.

[BRO 83] BROCKETT R.W., "Asymptotic stability and feedback stabilization", in MILMANN R.S. (ed.), Differential Geometric Control Theory, pp. 181–191, Birkhäuser, Basel, 1983.

[CAB 92] CABON P., Maintien de la vigilance et gestion du sommeil dans les systèmes automatisés : recherche de laboratoire, applications aux transports ferroviaires et aériens, PhD thesis, University of Paris V, 1992.

[CHE 07] CHEN C.M., LIN C.W., CHEN Y.C., "Adaptive error-resilience transcoding using prioritized intra-refresh for video multicast over wireless networks", Signal Processing: Image and Communication, vol. 22, pp. 277–297, 2007.

[COT 07] COTHEN G.C., Role of human factors in rail accidents, Report, Federal Railroad Administration, available at: https://www.transportation.gov/content/role-human-factorsrail-accidents, March 2007.

[ENJ 17] ENJALBERT S., VANDERHAEGEN F., "A hybrid reinforced learning system to estimate resilience indicators", Engineering Applications of Artificial Intelligence, vol. 64, pp. 295–301, 2017.

[GOU 08] GOUSSÉ V., "Apport de la génétique dans les études sur la résilience : l'exemple de l'autisme", Annales Médico-psychologiques, revue psychiatrique, vol. 166, no. 7, pp. 523–527, 2008.

[HOL 06] HOLLNAGEL E., "Resilience – the challenge of the unstable", in WOODS D.D., HOLLNAGEL E. (eds), Resilience Engineering – Concepts and Precepts, CRC Press, Boca Raton, pp. 9–17, 2006.

[KHA 96] KHALIL H., Nonlinear Systems, Prentice Hall, Upper Saddle River, 1996.

[LYA 92] LYAPUNOV A.M., The general problem about the stability of motion, PhD thesis, University of Kharkov, Ukraine, 1892.

[MIL 99] MILLOT P., "Systèmes Homme-Machine et Automatique", Journées Doctorales de l'Automatique JDA'99, Nancy, France, 1999.

[NAK 07] NAKAYAMA H., ANSARI N., JAMALIPOUR A. et al., "Fault-resilient sensing in wireless sensor networks", Computer Communication, vol. 30, pp. 2375–2384, 2007.


[NUM 06] NUMANOGLU T., TAVLI B., HEINZELMAN W., "Energy efficiency and error resilience in coordinated and non-coordinated medium access control protocols", Computer Communications, vol. 29, pp. 3493–3506, 2006.

[RAC 12] RACHEDI N.D., BERDJAG D., VANDERHAEGEN F., "Détection de l'état d'un opérateur humain dans le contexte de la conduite ferroviaire", Lambda Mu 18, Tours, France, 2012.

[RIC 10] RICHARD P., BENARD V., VANDERHAEGEN F. et al., "Vers le concept de stabilité humaine pour l'amélioration de la sécurité des transports", 17e congrès de maîtrise des risques et de sûreté de fonctionnement, La Rochelle, France, 2010.

[VAN 11] VANDERHAEGEN F., ZIEBA S., ENJALBERT S. et al., "A benefit/cost/deficit (BCD) model for learning from human errors", Reliability Engineering and System Safety, vol. 96, no. 7, pp. 757–766, 2011.

[VAN 17] VANDERHAEGEN F., "Towards increased systems resilience: new challenges based on dissonance control for human reliability in Cyber-Physical & Human Systems", Annual Reviews in Control, vol. 44, pp. 316–322, 2017.

[ZIE 07] ZIEBA S., JOUGLET D., POLET P. et al., "Resilience and affordances: perspectives for human-robot cooperation?", 26th European Annual Conference on Human Decision Making and Manual Control, Copenhagen, Denmark, 21–22 June 2007.

[ZIE 10] ZIEBA S., POLET P., VANDERHAEGEN F. et al., "Principles of adjustable autonomy: a framework for resilient human machine cooperation", Cognition, Technology and Work, vol. 12, no. 3, pp. 193–203, 2010.

[ZIE 11] ZIEBA S., POLET P., VANDERHAEGEN F., "Using adjustable autonomy and human-machine cooperation for the resilience of a human-machine system: application to a ground robotic system", Information Sciences, vol. 181, pp. 379–397, 2011.

PART 5

Innovative Design


9 Development of an Intelligent Garment for Crisis Management: Fire Control Application

Chapter written by Guillaume TARTARE, Marie-Pierre PACAUX-LEMOINE, Ludovic KOEHL and Xianyi ZENG.

9.1. Introduction

Considered as a wearable system, an intelligent and connected garment represents an opportunity for meeting the specific needs of various populations. An intelligent garment is capable of carrying out online monitoring of the wearer's health and well-being thanks to the use of sensors close to his/her body and embedded in the textile. Based on measured data, it can provide intelligent services to the target population (the elderly, the disabled, soldiers, security agents, athletes, etc.) in order to manage and optimize their day-to-day activities, such as physical exercise control, geolocation, monitoring and forecasting of chronic diseases, as well as helping to cope with food and nutrition control, stress and depression, disease risk management, injuries and shortcomings [SUH 10]. In practice, an intelligent garment builds a second skin used as an interface between the wearer and his/her environment. The design phase makes it possible to integrate the following components: sensors for monitoring physiological signals (cutaneous temperatures, breathing, heart rate, amount of sweat, movements, etc.), as well as environmental signals (air temperature, humidity, air pollution, etc.); actuators for controlling garment movement and displaying visual effects; a microcontroller for collecting and processing measured data; a communication system with a cloud platform; a power source; and conductive threads connecting all electronic components [GAT 07]. In addition, the aesthetics and comfort of the


wearer should also be taken into account during the design of the intelligent garment. In contrast to existing connected objects (jewelry, watches, necklaces, etc.), which are often highly visible, the instrumented elements of an intelligent garment should be less visible, so as to ensure an acceptable appearance and comfort while discreetly embedding the required intelligence. Fabric- and garment-manufacturing processes should be carefully designed, taking into account signal reliability, human comfort, ease of maintenance and visual satisfaction [CHO 10]. An example of an intelligent garment is shown in Figure 9.1 [DAR 11].

Figure 9.1. An example of smart clothing

A powerful application of intelligent garments is related to risk management in hostile environments (fires, floods, toxic gas leak incidents, etc.). Let us consider the example of fires. An intelligent garment is very useful for helping the wearer, in this case a firefighter, master the situation in the best possible way, without disturbing them while they are carrying out their task, and monitoring their health condition in


a very sensitive context. In fact, during this intense effort, the firefighter loses awareness of the danger to their own health, due to the tunneling effect surrounding them. In addition, a serious stress peak or physical exhaustion could largely penalize their performance throughout their intervention [JOV 03]. At the same time, in such a situation, the firefighter hardly recognizes their target and trajectory, due to the low visibility related to smoke and noise. Thus, it is necessary to conduct online (in situ) monitoring of the firefighter's health and external environment in order to quickly report information about the actual situation regarding the fire evolution and its potential risks, both to the firefighter and to the command center. In extremely dangerous situations, it is desirable to carry out operations combining the cooperation of human firefighters and robots. In the latter case, intelligent clothing should be designed to optimally ensure communication and cooperation modes with such robots.

In this chapter, we will present an intelligent wearable system based on the design of an instrumented garment specially conceived for firefighters. This intelligent garment incorporates several sensors, such as accelerometers, in order to measure a set of characteristics related to the health of the wearer and directly communicate them to the outside. Based on measured data, this wearable system can predict high-level information (danger situation, state of stress and fatigue, etc.) in order to help firefighters and command centers make relevant decisions. In the design of the intelligent garment, a compromise between the quality of the measured signals and the comfort of the wearer should be taken into account. Concretely, the sensors were placed at key points on the garment to ensure a perfectly controlled pressure between the textile structure and the skin. A knitted textile structure was introduced in order to maintain comfort and a perfect fit of the material to the wearer's morphology, allowing us to minimize measurement noise and control the contact of the sensors on the skin. In addition, flexible conductive threads directly integrated into the textile structure were used to simultaneously connect the sensors, microcontroller and battery. By learning from the data measured by the garment's sensors, a local decision support system was created in order to relate fatigue and stress indicators to the measured physiological signals. In this way, high-level information locally extracted from the garment can be transmitted to the command center, which can provide more relevant orders so as to offer better protection for the firefighter, while disrupting their actions as little as possible.

In our study, the garment designed for the firefighter should be able to guarantee communication with the robots in order to plan the optimal path towards a predefined target (an injured person or a device making it possible to decrease fire intensity) in a smoky space, as well as to minimize the risk incurred by the firefighter. In fact, in the context of our research, a cooperative robot essentially


plays the role of guiding the firefighter towards specific areas in a situation where visibility is poor, in order to carry out complex tasks that a machine might not be able to fulfill. At the same time, the precise position of the firefighter at each moment can be determined by using the very same sensors mounted on the cooperative robot. In this way, the firefighter's power of perception of the environment can be strengthened thanks to the cooperation between the robot and the firefighter's increased sensitivity while they are wearing the intelligent garment.

This chapter is organized as follows. In section 9.2, we will outline the design process of the intelligent garment for firefighters, including the general architecture of the wearable system, the choice of sensors and microcontroller, the textile design and the integration of the electronic components. In section 9.3, we will discuss the signal processing methods used for extracting information about fatigue and stress conditions and provide an analysis of the results obtained. In section 9.4, we will propose a cooperation plan between a robot and a firefighter wearing the smart garment. Finally, a conclusion will be provided in section 9.5.

9.2. Design of an intelligent garment for firefighters

9.2.1. Wearable system architecture

In our study, the development of the intelligent wearable system includes the following steps:

1) selection of the electronic components (sensors, actuators, microcontroller and communication unit), according to the objective and the application range of the system;

2) textile/garment design, including the choice of materials and garment pattern creation, ensuring signal quality as well as the appearance and comfort of the wearer;

3) integration of the selected electronic components into the garment;

4) setting up communication with the cloud platform;

5) signal processing for removing noise, extracting relevant characteristics and developing the local decision support system characterizing the relationship between the extracted characteristics and the health or well-being state (danger, fatigue, stress, etc.).

The general diagram of the proposed wearable system architecture is shown in Figure 9.2.


Figure 9.2. Architecture of the proposed portable smart system


9.2.2. Choice of electronic components In our study, due to its small size, ease of implementation, reasonable power consumption and relative computing capacity, the Arduino Micro® board using the ATmeg32U4 microcontroller was chosen to carry out the garment’s data processing. For the radio frequency communication needs, the microcontroller board is equipped with terminal nodes and a star network topology with a network coordinator, in order to handle all data exchanges between the intelligent garment, the command center and the robot, in compliance with the ZigBee protocol. The use of the communication module enables a user to create his/her own configuration for transmitting wireless data packets from the garment to the external computing stations, including the steps of verification and error correction. The ZigBee protocol is an economic protocol in terms of energy use and an efficient tool within the scope of the envisaged scenario. In our study, depending on the nature of the data, we adjust the transmission configuration to one data packet per second. The communication mode makes data transmission more flexible and more reliable. As regards the selection of sensors to monitor the stress and fatigue of firefighters during their operations, we consider that the most important physiological characteristics are heart rate and respiratory cycles. Hence, we take into account several accelerometers in the measurements [SCH 15]. The choice to use accelerometers instead of an ECG is related to the fact that we are only interested in the main peaks of cardiac signals, instead of the different waves that ECGs may show. In addition, respiratory cycles can be accurately measured by accelerometers, while this is impossible to do so with ECGs. In our study, we chose a three-axis analog accelerometer that can perfectly detect the movements of the human body, breathing or heartbeats. With a widespread use, this type of sensor is very small and easy to be integrated into textiles, inexpensive, visually discreet and consumes very little energy. It can measure a gravity of approximately ±3 g. The sensor output is an analog voltage signal proportional to acceleration with a sensitivity of 360 m V/g as well as a field of 0.5–1,600 Hz for the X and Y axes and 0.5–550 Hz for the Z axis. 9.2.3. Textile design and sensor integration The garment design should make it possible to integrate the different sensors at the appropriate positions between two layers of fabric in order to maintain a good quality of measured signals without causing any uncomfortable sensation. However, the design of an intelligent garment is subject to several constraints. The most important one is to keep close and stable contacts between the integrated sensors and wearer’s skin at key positions, in order to ensure optimal signal measurement without disturbing the firefighter’s movements. This requires the choice of the most



9.2.3. Textile design and sensor integration

The garment design should make it possible to integrate the different sensors at the appropriate positions between two layers of fabric, in order to maintain a good quality of measured signals without causing any uncomfortable sensation. However, the design of an intelligent garment is subject to several constraints. The most important one is to keep close and stable contact between the integrated sensors and the wearer's skin at key positions, in order to ensure optimal signal measurement without disturbing the firefighter's movements. This requires the choice of the most relevant textile parameters: the best fabric texture (binding/yarn intertwining within the fabric), the best suited garment design, and the use of fibers and threads comfortable for the wearer.

As regards textile design, the fabric should be soft, elastic and resistant, so that the garment applies an appropriate pressure where it comes into contact with the wearer's skin. After a series of tests and comparisons between several materials, we identified the most relevant fabric for collecting the signals of interest: a knit fabric made from a mixture of polyamide (90%) and elastane (10%) fibers knitted into a jersey pattern. This material effectively guarantees softness, resistance to friction and elasticity. Complementary conductive threads, integrated into the structure for connecting the electronic components, minimize electrical resistance and reduce the risk of these threads ripping. In addition, we found that this fabric minimizes signal attenuation and transmits at least 80% of the original human movement data. Next, we applied a silicone coating treatment to the fabric's surface in order to protect the electronic components from oxidation due to sweat and to strengthen the adhesion of the integrated sensors during washing, while maintaining the quality of the measured physiological signals.

As regards the garment's design, the patterns should be generated using a three-dimensional measuring cabin, so that the finished garment adapts to the wearer's morphology with a minimum gap and a desirable pressure between the textile surface and the skin (small negative ease allowance values) (see Figure 9.3).

Figure 9.3. Design of a “second skin” intelligent garment

In order to integrate the previously selected sensors into the designed garment, we carefully identified their positions so that the corresponding signals are as reliable and authentic as possible. On the garment designed for firefighters, we embroidered the accelerometers at the identified positions of the chest and aortic areas in order to detect the heartbeat and the respiratory cycle. Following the previous stages, we obtained the final intelligent garment prototype (see Figure 9.4).



Figure 9.4. The intelligent garment prototype

9.3. Physiological signal processing

In the proposed intelligent garment, from the signals measured by the selected accelerometers, we can extract a number of important physiological characteristics, such as respiratory waveforms, heart rhythm and their variations. Nevertheless, signal processing is quite complex in this context, due to the large amount of noise related to the wearer's body movements during his/her activities, and to the attenuation of the signal caused by the inherently flexible textile support (movement, extension and distortion of the fabric).

9.3.1. Extraction of respiratory waveforms

In most conventional physiological processing methods, the waveform is detected through the use of a fixed band-pass filter, defined within the [0, 1] Hz interval, in order to reinforce the measured signal. In the case of deep breathing, we can observe that the signal is flooded with noise after the application of such band-pass filters. In our approach, we use an existing algorithm that automatically adapts the frequency filter so as to optimize the SNR (signal-to-noise ratio) [PHA 08]. This algorithm calculates the respiratory rate as follows:



– Step 1: dividing the original signal into 15-second segments with an overlap of three seconds. These parameters are chosen to rapidly adapt to the different heart rhythms of the wearer throughout different activities;
– Step 2: calculating the spectrum of the accelerometer signal and detecting the dominant frequency (f0) in the interval [0.1, 1] Hz. This selection is based on the fact that, for humans, the number of respiratory cycles per minute oscillates between 6 and 60. The dominant frequency f0 is identified as the maximum energy of the Fourier transform of the original signal;
– Step 3: applying a fourth-order Butterworth band-pass filter [f1, f2] to the accelerometer signal around f0. The bandwidth is defined around f0 and is conditioned by the following rules: f1 = max{0.1, f0 − 0.4} Hz; f2 = f0 + 0.4 Hz.
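A minimal sketch of this adaptive filtering, assuming NumPy/SciPy and a uniformly sampled signal; the segment bookkeeping is simplified with respect to [PHA 08] (overlapping samples are simply overwritten):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def respiratory_waveform(sig, fs, seg_len=15.0, overlap=3.0):
    """Adaptive extraction of the respiratory waveform: per segment,
    find the dominant frequency f0 in [0.1, 1] Hz and band-pass the
    segment around it with a fourth-order Butterworth filter."""
    n = int(seg_len * fs)
    step = int((seg_len - overlap) * fs)
    out = np.zeros(len(sig))
    rates = []                                   # breaths/min per segment
    for start in range(0, len(sig) - n + 1, step):
        seg = np.asarray(sig[start:start + n], dtype=float)
        # Step 2: dominant frequency in [0.1, 1] Hz of the segment spectrum
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        spectrum = np.abs(np.fft.rfft(seg - seg.mean()))
        band = (freqs >= 0.1) & (freqs <= 1.0)
        f0 = freqs[band][np.argmax(spectrum[band])]
        rates.append(f0 * 60.0)
        # Step 3: band-pass [f1, f2] around f0
        f1, f2 = max(0.1, f0 - 0.4), f0 + 0.4
        b, a = butter(4, [f1, f2], btype="band", fs=fs)
        out[start:start + n] = filtfilt(b, a, seg)
    return out, rates
```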

Figure 9.5. Original signal and its extracted characteristics (heartbeat in red and respiratory cycle in green). For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip

9.3.2. Automatic heart rate detection

An accelerometer has high accuracy and is able to acquire the vibrations of the heart muscle. However, the shape of these vibrations depends on the individual and varies with time. In order to obtain the heart rate, we developed an algorithm for detecting the maxima of the signal envelope. The first step of the algorithm is to apply a band-pass filter (usually a fourth-order Butterworth filter), so that the noise covered by the resulting frequency band is minimized, while the relevant information related to the heart rate remains well preserved. In our study, this frequency band ([20, 35] Hz) was chosen to eliminate the baseline (low frequencies corresponding to human movements) and the majority of high-frequency noise, while preserving the cardiac vibration. The second step detects the global envelope of the signal. For this, the Hilbert transform is used to identify all the local maxima. Then, we identified the peaks of maximum amplitude ("gravity peaks" in Figure 9.6) by calculating the dominant frequency in the spectrum of the accelerometer signal, within the interval [0.16, 3] Hz. This interval was determined by the fact that the number of heartbeats per minute oscillates between 10 and 180 for all individuals. An example of automatic peak detection is provided in Figure 9.6.
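A minimal sketch of this envelope-based detection, assuming SciPy and a sampling rate comfortably above 70 Hz; note that the final peak selection is simplified here to a minimum inter-beat distance (at most 180 beats per minute) instead of the spectral dominant-frequency criterion described above:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, find_peaks

def detect_heartbeats(sig, fs):
    """Band-pass the accelerometer signal in [20, 35] Hz, take the
    Hilbert envelope, then pick envelope peaks at least 1/3 s apart
    (i.e. at most 180 beats per minute). Returns peak sample indices."""
    b, a = butter(4, [20.0, 35.0], btype="band", fs=fs)
    filtered = filtfilt(b, a, np.asarray(sig, dtype=float))
    envelope = np.abs(hilbert(filtered))         # global signal envelope
    peaks, _ = find_peaks(envelope, distance=int(fs / 3.0))
    return peaks

def mean_heart_rate_bpm(peaks, fs):
    """Average heart rate from the detected peak indices."""
    rr = np.diff(peaks) / fs                     # inter-beat intervals (s)
    return 60.0 / rr.mean()
```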

Figure 9.6. An example of detection of gravity peaks for a period of 20 seconds

After the detection of peaks, we analyzed their variations in order to evaluate the wearer's health condition. In particular, heart rate variability (HRV) is often considered the most important indicator of the autonomic nervous system, and is related to cardiovascular mortality. A high HRV value makes it easier to transmit information to the heart, leading to a good adaptation to the external environment and to physical and psychic conditions. Thus, high HRV values correspond to a good vital condition. On the other hand, a heart that reacts badly to the internal and external conditions of the wearer represents a real mortal danger [LAH 08].



9.3.3. Heart rate variability

Heart rate variability refers to the perpetual fluctuations of the heart rate around its average frequency. The analysis of the HRV allows an indirect study of the tone of the autonomic nervous system (ANS) through its effects on the heart. HRV can be considered as a witness to the transfer of information between the ANS and the heart. Thus, a high HRV allows a better transmission of information to the heart, which endows it with a better adaptation to the external environment and to the psychic condition. Therefore, people with high HRV values are in good functional vitality because their ANS is well balanced. In contrast, a heart poorly responding to internal and external factors, and whose beats are perfectly regular, does not reflect essential information related to environmental changes and indicates mortal danger.

There are different methods for measuring HRV. The temporal and geometric methods are preferred in this case because they are recommended for long-term measurement (several hours). The HRV indices that we can record inform us, for instance, about the level of physical performance or fatigue which, when combined, make it possible to establish the operability of the agent wearing the garment.

9.3.4. Analysis of experimental results

In order to validate the relevance of our measurements, here we report the experimental results obtained with a group of seven people wearing the developed intelligent garment prototype. These people included three women and four men between 25 and 45 years of age, all in good health and with regular physical activity. For each person, the experiment was repeated 10 times and lasted between one and three hours. Figures 9.7, 9.8 and 9.9 illustrate the results of the HRV parameters throughout the performance assessment of the wearers for different durations. Figure 9.8 shows that when the wearer performs physical activity, the performance represented by SDANN first increases during the warm-up and then decreases for the remaining time, due to the fatigue which gradually sets in. Furthermore, a measurement during a 30-minute break (see Figure 9.9) was used for checking whether the HRV significantly fluctuated in relation to physical activities. This is our calibration basis for the detection of the individual operating threshold.
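The two time-domain indices reported below (Figures 9.7 to 9.9), RMSSD and SDANN, can be computed directly from the detected beat-to-beat (NN) intervals; a minimal sketch, assuming NN intervals expressed in milliseconds:

```python
import numpy as np

def rmssd(nn_ms):
    """RMSSD: quadratic mean of the successive differences between
    NN intervals (ms), used in Figure 9.7 as a fatigue indicator."""
    diffs = np.diff(np.asarray(nn_ms, dtype=float))
    return np.sqrt(np.mean(diffs ** 2))

def sdann(nn_ms, window_s=300.0):
    """SDANN: standard deviation of the mean NN interval computed over
    consecutive 5-minute windows (ms), used in Figures 9.8 and 9.9 as a
    physical performance indicator."""
    nn = np.asarray(nn_ms, dtype=float)
    t = np.cumsum(nn) / 1000.0                   # beat times in seconds
    means = []
    for w0 in np.arange(0.0, t[-1], window_s):
        sel = nn[(t >= w0) & (t < w0 + window_s)]
        if sel.size:                             # skip empty windows
            means.append(sel.mean())
    return np.std(means)
```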



Figure 9.7. RMSSD (root mean square of successive differences): the quadratic mean of the successive differences between heartbeat intervals, used as a fatigue indicator. In this case, it was measured during the wearer's physical tests on separate dates

Figure 9.8. SDANN (standard deviation of the averages of NN intervals over 5-minute segments): an indicator of physical performance during activity periods



Figure 9.9. SDANN: acquisition of performance data during physical rest

We made a comparison with a Polar H7® ECG chest belt over a period of one minute. ECGs are often used as a reference technology for evaluating heart rate activities. Experimental results showed an identical average frequency between the smart clothing incorporating acceleration measurements and the ECG, which means that these two technologies are fully compatible. Throughout the different experiments conducted using the smart garment we designed, we obtained similar results for cardiac activities, which enabled us to conclude that the suggested signal processing method is robust. However, its current limitation is still the lack of full medical interpretation of the measured data.

9.4. Firefighter–robot cooperation using intelligent clothing

Intelligent clothing is a vector that can be particularly interesting for facilitating cooperation between humans and aid systems such as robots. Here we focused on the context of crisis management and, more specifically, on the possibility of encouraging cooperation between firefighters deployed in a hazardous area and robots equipped with sensors featuring perceptive abilities complementary to those of human beings. The design of the robot capabilities and of the supervision/control interface introduced in this section is based on research related to human–machine cooperation, which has been conducted for more than 20 years at LAMIH, at the University of Valenciennes and Hainaut-Cambrésis. Human–machine cooperation is articulated today around several key definitions, which we will not introduce here, but which readers may explore in detail in [PAC 14, PAC 15a, PAC 16].

In our study, we are working on a cooperation scenario between firefighters and robots for managing a fire event. This scenario was built with the help of Commander Eric Mareschi and Commander Laurent Foucrier from the Northern Departmental Fire Department [PAC 15b]. Our study has focused on the cooperation between two levels of activity: the operational and tactical levels [PAC 17]. The tactical level is managed by officers in charge of supervising and controlling the "human and technical" means deployed at the operational level. The tactical level activity is mainly held at the command position vehicle (CPV), set up near the crisis area. In our scenario, we envisaged the function of a new officer who would be in charge of the supervision and control of a fleet of robots, where the robots might have different automation levels [HAB 17a]. The activity at the operational level is organized around human resources and techniques (specific vehicles and robots), that is, the application of the tactical level orders by the "rank" firefighters deployed in the field and responsible for controlling the incident and its consequences. The intelligent garment is devoted to this firefighter category. In our study, at the operational level, we focused on the cooperation between a firefighter and a fleet of robots capable of guiding them along a safe trajectory. Cooperation between the tactical and operational levels mainly concerns the management of the automation levels of the robots, based on their capabilities and on the difficulty of the environment (the presence of more or fewer obstacles), as well as the collection of information about the firefighter's health condition by means of the intelligent garment.

The scenario is as follows. In a space where human visibility is very low due to smoke, we want to use several robots to guide a firefighter to a target area, following an optimal path in terms of safety and length. In this target area, the firefighter will perform tasks that cannot be done by conventional robots (e.g. closing a valve). In our context, the robots are informed by the tactical level and by their own sensors about the evolution of the fire and the geographical positions and movements of the objects (humans, robots, dangerous places, etc.). The cooperation platform between robots and firefighters is made up as follows: a human supervisor simulating an officer in the command position vehicle, who is in permanent remote communication with the fire event through an interface (cooperation at the tactical level); a space simulating the management of the fire, where there is a firefighter wearing an intelligent garment; and four robots equipped with a graphic interface providing simple visual aids (color code, common symbols) to guide firefighters via secure paths (cooperation at the operational level).



The intelligent garment, the robots and the supervision interface are all interconnected through wireless communication using XBee IEEE 802.15.4 technology. This technology consumes little energy and is inexpensive, at the cost of a lower data transmission speed.

9.4.1. Robots

As part of the laboratory experiment we wish to conduct, each robot is a Lego Mindstorms NXT (see Figure 9.10). It has a limited computing capacity, with a 32-bit CPU (ARM, clocked at 48 MHz) aided by an ATmega coprocessor (8 MHz) and two memories. It uses four sensor ports (ultrasound, gyroscope, touch and XBee communication) and three motor ports in order to ensure its mobility and to control the position of an ultrasonic sensor. The robot is also equipped with a smartphone for transmitting its visual and sound environment to the remote human supervisor. The smartphone communicates with the robot via Bluetooth and with the supervision interface through a Wi-Fi connection. An optimization of the algorithms is necessary to compensate for the limited capacity of the robot and its low-performance sensors.

Figure 9.10. A Mindstorms NXT robot with a smartphone interface

The robot is controlled by a multitasking program written in C, within the RobotC environment. A predictive control model adapted to this robot [HAB 17a] was modified in order to achieve autonomous handling of unforeseen obstacles. In addition, another command level was incorporated in order to balance the workloads of the supervisor and the robots, using the KH model (which concerns the ability of an agent to perform an individual task) and the KHC model (which concerns the agent's ability to cooperate with other human or technical agents) [HAB 17b]. The smartphone interface is dedicated to the interaction between the robot, the firefighter and the remote supervisor. In our experiments, the background color and specific icons are displayed on the smartphone according to the robot's analysis of the environment, focusing on obstacle detection and the subsequent change in direction needed to avoid the obstacle.

9.4.2. Human supervisor interface

The human supervisor interface is provided at the tactical level during crisis management. On this interface, information stemming from the operational level and from the fire environment is displayed. In Figure 9.11, precise data regarding the robots and the firefighter/intelligent clothing, including their geographical positions and their movements, are displayed to the left and right of the interface, respectively.

Figure 9.11. Human supervisor interface: supervision and control of the robots

When a robot is chosen, the human supervisor can control its automation level and some of its smartphone functions (power, flash, front and rear cameras, audio feedback). When a firefighter wearing an intelligent garment is chosen, the HRV information will be displayed on the interface (right side under the garment wearer’s photo), so that the human supervisor can send him/her an alert through the LED lamp attached to the garment, in case of need.



Figure 9.12. Human supervisor interface: supervision of the hostile environment. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip

Experiments will soon be conducted in the premises of LAMIH, during which a crisis situation will be simulated. The experimental conditions will be defined by crossing the robots' automation levels with the ability of the intelligent garment to provide reliable information about the health condition of the wearer to the tactical supervisor and to the robots evolving nearby. The analysis of objective data, mainly based on the level of performance achieved throughout the mission (execution time, decision-making and actions, number of errors made in the choice of the trajectory, etc.), and of subjective data, based on the qualitative analysis of the activity resulting from the coding of the activity and the answers to questionnaires, will make it possible to highlight the most widespread and efficient types of cooperation between firefighters and robots.

9.5. Conclusion

In this chapter, we introduced some of the research results produced by the SUCRé project (human–robot cooperation in a hostile environment), funded by the ARCir program of the Hauts-de-France Region. An intelligent garment was designed for firefighters to ensure their protection and optimize their operations during fire management. Two particular functions were taken into account for the design of this garment: the detection and analysis of physiological signals in order to monitor online the firefighter's state of well-being; and the communication and cooperation with robots and the command position vehicle in order to identify the optimized trajectory during the fire operation.

Online monitoring of the state of well-being is ensured by integrating accelerometer-type sensors into the garment. The comfort of the wearer and the reliability of the measured signals were taken into account during the design process. An analysis of the measured physiological data is performed within the microcontroller of the garment in order to predict cardiac activities associated with stress and fatigue states.

The intelligent garment ensures the cooperation between firefighters and robots at the operational and tactical levels. At the tactical level, the human supervisor at the command position vehicle receives all the data from the robots and the firefighters' clothing in situ, in order to perceive the overall fire situation and the status of the deployed humans, and is then able to make relevant decisions. At the operational level, cooperation between a firefighter and a fleet of robots is achieved using the intelligent clothing, in order to make him/her follow a trajectory optimized for safety and efficiency.

9.6. References

[CHO 10] CHO G., Smart Clothing: Technology and Applications, CRC Press, Boca Raton, 2010.

[DAR 11] DARWISH A., HASSANIEN A.-E., "Wearable and implantable wireless sensor network solutions for healthcare monitoring", Sensors, vol. 11, no. 6, pp. 5561–5595, 2011.

[GAT 07] GATZOULIS L., IAKOVIDIS I., "Wearable and portable ehealth systems", IEEE Engineering in Medicine and Biology Magazine, vol. 26, no. 5, pp. 51–56, 2007.

[HAB 17a] HABIB L., PACAUX-LEMOINE M.P., MILLOT P., "Adaptation of the level of automation according to the type of cooperative partner", IEEE International Conference on Systems, Man, and Cybernetics, Banff, Canada, October 2017.

[HAB 17b] HABIB L., PACAUX-LEMOINE M.P., MILLOT P., "A method for designing levels of automation based on a human-machine cooperation model", IFAC World Congress, Toulouse, France, July 2017.

[JOV 03] JOVANOV E., LORDS A.O., RASKOVIC D. et al., "Stress monitoring using a distributed wireless intelligent sensor system", IEEE Engineering in Medicine and Biology Magazine, vol. 22, no. 3, pp. 49–55, May/June 2003.

[LAH 08] LAHIRI M.K., KANNANKERIL P.J., GOLDBERGER J.J., "Assessment of autonomic function in cardiovascular disease: physiological basis and prognostic implications", Journal of the American College of Cardiology, vol. 51, no. 18, pp. 1725–1733, May 2008.



[PAC 14] PACAUX-LEMOINE M.P., "Human-machine cooperation principles to support life critical systems management", in MILLOT P. (ed.), Risk Management in Life Critical Systems, ISTE, London and John Wiley, New York, pp. 253–277, 2014.

[PAC 15a] PACAUX-LEMOINE M.P., ITOH M., "Towards vertical and horizontal extension of shared control concept", IEEE International Conference on Systems, Man, and Cybernetics, Hong Kong, October 2015.

[PAC 15b] PACAUX-LEMOINE M.P., MARESCHI E., "Individual and collective adaptation to situation emergency and complexity: a human-machine cooperation approach", Summer School on Risk Management, a Human Centered Approach, UVHC, Valenciennes, France, July 2015.

[PAC 16] PACAUX-LEMOINE M.P., FLEMISCH F., "Layers of shared and cooperative control, assistance and automation", IFAC Analysis, Design and Evaluation of Human-Machine Systems, Kyoto, Japan, August 2016.

[PAC 17] PACAUX-LEMOINE M.P., TARTARE G., HABIB L. et al., "Human-robots cooperation through intelligent garment", IEEE International Symposium on Industrial Electronics, Edinburgh, June 2017.

[PHA 08] PHAN D.H., BONNET S., GUILLEMAUD R. et al., "Estimation of respiratory waveform and heart rate using an accelerometer", Proceedings of the 30th IEEE Conference on Engineering in Medicine and Biology Society, Vancouver, pp. 4916–4919, 20–25 August 2008.

[SCH 15] SCHMITT L., REGNARD J., MILLET G.P., "Monitoring fatigue status with HRV measures in elite athletes: an avenue beyond RMSSD?", Frontiers in Physiology, vol. 6, November 2015.

[SUH 10] SUH M., CARROLL K., CASSILL N., "Critical review on smart clothing product development", Journal of Textile and Apparel Technology and Management, vol. 6, no. 4, pp. 1–18, 2010.

10

Active Pedagogy for Innovation in Transport

Chapter written by Frédéric VANDERHAEGEN.

10.1. Introduction

Classical pedagogy places the teacher in front of students whose knowledge levels may be heterogeneous. In this configuration, some of them remain passive and follow the course without really interacting with the teacher. In order to overcome these setbacks, methods of active pedagogy have been created so as to engage learners in their learning process, at their own pace. These methods include, for example, reverse pedagogy [GUI 17] and problem-based learning [GOO 05]. Reverse pedagogy involves solving problems with the support of courses or information provided by the teacher on demand. Learning through problems or through projects enables students to process and solve fictional problems or scenarios, thus putting their knowledge into practice and progressively completing the learning process. These techniques can be carried out remotely or face-to-face with the teacher, who can be solicited by the students so as to facilitate or to guide their work. These techniques encourage students to become actors of their learning process [LEB 07, WAL 17].

In this pedagogical framework, metaphors are often employed to make understanding easier or to illustrate the use of a concept [LYN 17, THI 17, DRO 18]. Sometimes, visual aids make it possible to establish a direct association between an icon or a drawing and the problem at hand. In augmented virtual reality, the metaphor for designing decision support systems based on an enhanced dynamic view helps the user to naturally understand the purpose or interest of such assistance [GEO 11, PHA 16]. Two metaphors are used in the final examples of active pedagogy presented in this chapter: the mirror effect, for illustrating the implementation of the principle of human-assisted automation as an optimal support for eco-driving; and the flying carpet, as a means for designing innovative transport.
This chapter introduces the results of active pedagogy examples related to learning through problems or through projects, dedicated to the innovation of transport systems. The first two examples concern risk and accident analysis through the use of support systems for improving design tools destined for the prevention, recovery or restriction of adverse events. These active pedagogy modules were implemented in two master's degree classes at the University of Valenciennes between 2016 and 2018. Teaching was based on the distance learning modules developed for the "Railway Engineering and Guided Systems" specialization in Valenciennes, within the framework of the UTOP project (Université de technologie ouverte pluripartenaire) belonging to the IDEFI program (Initiatives d'excellences en formations innovantes). The third and fourth examples incorporate elements from the first two studies and also implement digital tools for simulating scenarios, within the framework of a collaborative project involving five students who followed the first year of the master's degree in 2017. The last example proposes a global approach for the innovative design of transport systems.

10.2. Analysis of a railway accident and system design

The problem was based on an accident report from the Accident Investigation Bureau [BEA 05]. The facts were presented to the student groups as follows:

"After leaving the warehouse where he had spent the night and loaded his trailer, a truck driver began his 200-km non-stop journey. Unfortunately, the truck containing gas cylinders broke down and remained stuck on a railway track, since the braking system was faulty. This road used to be frequented by heavy trucks. The driver of the blocked truck asked for help from another driver, who suggested making a phone call. A few seconds before the closing of the level crossing (LC) announcing the arrival of a train, he tried to communicate with the control center (CC) by using the dedicated lane phone. During the communication, which was very short (six seconds), interference made it impossible for the CC staff to understand the situation, and consequently the CC staff did not sound the alarm. A TERYY train coming out of a curved track section was approaching the LC along a straight track at 50 km/h. As he perceived the lorry stuck on the lane, the driver of the TERYY immediately triggered the emergency braking and transmitted the warning light (WL) and the radio warning (RW), inviting all other train drivers in the area to instantly stop due to force majeure. He managed to stop his train before the LC. There, he found that the blocked truck did not hinder the passage of his TERYY, but that of the TERXX, which was journeying at 100 km/h on a parallel track. When he discovered the TERXX collision with the truck, he warned the CC by radio. The driver of the TERXX also saw the truck stuck on his path at the LC. His train was rolling at 100 km/h when he crossed the TERYY. He then activated the emergency stop button and ordered the lowering of the pantographs. He did not issue the RW because it had just been triggered by the TERYY he had overtaken, which had stopped before the LC. Just before the impact with the truck, the driver left his cabin and entered the passenger car. He warned the travelers about the imminent shock with the truck and asked them to get back to their seats, to remain seated, to hold on strongly to the seats and to wait for instructions. The shock took place at 40 km/h and the train pushed the immobilized truck a few meters before stopping."

This scenario was analyzed by groups of three to five people. The exercise aimed at identifying the human, technical and organizational factors involved throughout the course of the accident, setting up appropriate fault trees in order to provide a generic representation of the accident and the post-accident (a minimal sketch of such a fault-tree evaluation is given after the list of proposals below), and suggesting technical recommendations. The search for information or documentation regarding the different elements of this problem could be carried out in a library, on the Internet or through interaction with the teacher. Articles on the evaluation of human errors, risk analysis and the modeling of human behavior were made available for consultation in case of need [VAN 99a, VAN 03, QIU 17, RAN 17, VAN 17a]. At the end of the module, the groups presented their results and the professor offered a synthesis highlighting the theoretical aspects that should be retained.

The removal of the LC through a tunnel or a bridge, the earlier lowering of the barriers (i.e. the lengthening of the release time) or the evacuation of passengers via the train's rear were examples of organizational recommendations. Regarding technical recommendations, proposals for innovations on how to deal with such an accident were numerous. For example, students focused on how to avoid the truck, divert the train or reduce the risk of explosion and potential fires following the collision. The following proposals were made:
– proposal of a system for emptying or evacuating the truck's tank. The aim was to control one of the causes of a potential post-accident, that is, the fire;
– proposal of a fire insulating system on the train or on the truck. Fire confinement could rely on specific materials or foams, for example;
– proposal of a vehicle immobilization detector at the railway crossing. This system could be based on truck or car weighing tools (e.g. weighbridges or vehicle scales);
– proposal of a motion detector using cameras. Progress in the field of image analysis has made it possible to specify an immobilized vehicle detection system at an LC thanks to the use of cameras;
– proposal of a track occupancy detection system using drones. This system is based on the previous one, but instead of implementing fixed cameras on the railway site, mobile cameras are embedded on drones. The supervision of the traffic flow could thus be organized with the support of these devices in order to detect incidents on the LCs or along the tracks;
– proposal of a treadmill for evacuating immobilized vehicles. This proposal studied several evacuation configurations based on the feasibility of integrating a treadmill across railways;
– proposal of an evacuation winch for immobilized vehicles. This system was inspired by container handling machines on the docks of harbors;
– proposal of LC upstream and downstream clearance tracks in case of collision. Depending on train speed and time constraints, the drivers could manipulate switches from their cab to deflect their train when approaching the LC.
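As announced above, a minimal sketch of how such a fault tree can be evaluated, assuming independent basic events; the gate structure and the probabilities are purely illustrative and are not taken from the students' analyses or from the accident report:

```python
# Illustrative fault-tree evaluation for the level-crossing (LC) collision:
# top event = AND(truck immobilized on the LC, alert chain fails, train
# cannot stop in time). All probabilities are made up for the example.

def p_and(*ps):
    """Probability of an AND gate with independent inputs."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(*ps):
    """Probability of an OR gate with independent inputs."""
    out = 1.0
    for p in ps:
        out *= 1.0 - p
    return 1.0 - out

p_truck_stuck = 1e-4                  # truck immobilized on the crossing
p_alert_fails = p_or(5e-1,            # phone call to the CC is garbled
                     1e-2)            # CC staff does not sound the alarm
p_no_stop = 8e-1                      # approaching train cannot brake in time
p_collision = p_and(p_truck_stuck, p_alert_fails, p_no_stop)
print(f"P(collision) ~ {p_collision:.2e}")   # ~4.04e-05
```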

At the end of the module, a questionnaire was offered to the students in order to assess the interest of this work (Table 10.1). In the table, the "Yes" and "No" counts are each broken down by the respondents' level of certainty (High/Medium/Low).

| Has the study of this problem enabled you to: | Yes | Certainty (High/Medium/Low) | No | Certainty (High/Medium/Low) | No opinion |
|---|---|---|---|---|---|
| – be autonomous? | 27 | 20/7/0 | 4 | 2/2/0 | 9 |
| – be free? | 24 | 21/3/0 | 9 | 4/4/1 | 7 |
| – learn easily? | 40 | 39/1/0 | 0 | 0/0/0 | 0 |
| – share knowledge? | 31 | 20/11/0 | 6 | 4/2/0 | 3 |
| – understand more easily? | 34 | 25/8/1 | 3 | 1/2/0 | 3 |
| Did you enjoy the progression of this course? | 34 | 28/6/0 | 1 | 1/0/0 | 5 |
| Would you rather have the course material before studying the problem? | 19 | 17/2/0 | 14 | 11/3/0 | 7 |
| Did you enjoy working in a group? | 37 | 29/7/1 | 0 | 0/0/0 | 3 |
| Did you find this course useless? | 4 | 4/0/0 | 33 | 30/3/0 | 3 |

Table 10.1. Results of the individual evaluation of the module

A total of 40 answers were collected. For a large number of students, the way in which this module was managed facilitated individual learning and favored a better understanding of the problem and the sharing of knowledge. This learning mode was considered useful. Its unfolding and the teamwork were also appreciated. For a smaller majority of students, the study of the problem enabled them to be autonomous and free when choosing which approach to follow. As regards the availability of course materials before the start of the module, the answers were divided, revealing a disparity between the choices and preferences of each individual.

10.3. Analysis of the use of a cruise control system

This module was divided into three sections: a collective reflection in small groups of three to five people; an individual section based on the same work support; and another group activity working on the specification of automatic detection and simulation tools for identifying intention conflicts (or dissonances) from basic rules. The first section was meant to prompt reflection concerning the rules of use and functioning of the cruise control (CC) system, the rules of manual control of the vehicle's speed, the rules of manual control of aquaplaning, and the rules of manual control for optimizing the fuel consumption of vehicles with a combustion engine. Table 10.2 is an example illustrating the production of these rules. Groups could get documentation about the CC functioning modes regarding speed control in relation to a given setpoint managed by the driver. The first section introduced the principles of group knowledge representation. The second section relied on basic common rules (Table 10.2).



BR1: use of the cruise control system (CC) by the driver
R1: activate the CC → (press, activated "ON" button, driver)
R2: disable the CC → (brake, pressed brake pedal, driver)
R3: disable the CC → (press, activated "OFF" button, driver)
R4: disable the CC → (disengage, pressed clutch pedal, driver)
R5: increase the speed setpoint of the CC, active and operating → (press, pressed "+" button, driver)
R6: reduce the speed setpoint of the CC, active and operating → (press, pressed "-" button, driver)

BR2: active CC functioning
R7: actual speed > setpoint speed → (brake, reduced engine speed, CC)
R8: actual speed < setpoint speed → (accelerate, increased engine speed, CC)

BR3: aquaplaning control by a driver
R9: controlling aquaplaning → (do not brake, inactive brake pedal, driver)
R10: controlling aquaplaning → (do not accelerate, inactive accelerator pedal, driver)

BR4: manual speed control
R11: increasing speed → (press, pressed accelerator pedal, driver)
R12: reducing speed → (release, released accelerator pedal, driver)

BR5: control of vehicle consumption by a driver
R13: use the force of inertia downhill → (do not brake, inactive brake pedal, driver)
R14: use the force of inertia downhill → (do not accelerate, released accelerator pedal, driver)
R15: use the kinetic force uphill → (do not brake, inactive brake pedal, driver)

Table 10.2. Example of basic rules

These rules were deliberately simple and implemented behavioral models associated with intentions, integrating a predicate and a conclusion made of three parameters: the action to be performed, the object associated with the action and the actor. Two actors were identified: the CC and the driver. The concept of dissonance introduced in [VAN 14a] was outlined, and students were encouraged to identify dissonances among the suggested rules. In order to help students during the exercise, a list of dissonances (i.e. A1, A2, C1, I1, I2, I3 and I4), based on those identified in [VAN 17a, VAN 17b], was suggested for continuing the module (Table 10.3).


In Table 10.3, the first five columns give the answers to "Do you agree with the proposal?" and the last four give the answers to "What is your level of certainty for your answer?".

| Proposal | Totally agree | Mostly yes | Yes and no | Mostly no | Do not agree | High lv. | Medium lv. | Low lv. | No opinion |
|---|---|---|---|---|---|---|---|---|---|
| BR1 | 7 | 21 | 2 | 1 | 0 | 8 | 22 | 1 | 2 |
| BR2 | 8 | 16 | 5 | 0 | 0 | 12 | 16 | 1 | 4 |
| BR3 | 10 | 19 | 2 | 2 | 0 | 11 | 21 | 1 | 0 |
| BR4 | 14 | 15 | 4 | 1 | 0 | 20 | 12 | 0 | 0 |
| BR5 | 12 | 18 | 2 | 0 | 0 | 18 | 14 | 0 | 1 |
| Global | 8 | 21 | 2 | 2 | 0 | 9 | 24 | 0 | 0 |
| A1 | 14 | 13 | 4 | 1 | 1 | 18 | 13 | 2 | 0 |
| A2 | 15 | 12 | 4 | 0 | 2 | 21 | 11 | 1 | 0 |
| C1 | 16 | 11 | 4 | 2 | 0 | 19 | 12 | 2 | 0 |
| I1 | 15 | 10 | 4 | 1 | 1 | 14 | 16 | 1 | 2 |
| I2 | 11 | 12 | 1 | 4 | 4 | 16 | 14 | 2 | 1 |
| I3 | 16 | 12 | 2 | 1 | 1 | 15 | 15 | 0 | 2 |
| I4 | 13 | 10 | 3 | 1 | 4 | 16 | 14 | 1 | 2 |

Table 10.3. Validation example of rules and dissonances using CC

Two affordances (A1 and A2), a contradiction involving the same actor and four interferences involving two different actors were suggested for evaluation. The affordances A1 and A2 referred to the diverted use of the CC "+" and "-" buttons: no longer to increase or decrease the speed setpoint, but to increase or decrease the current speed. Thus, the CC can be used as an acceleration and deceleration system. Contradiction C1 was a conflict between rules R2 and R9, for which the driver could choose between two opposing actions: to brake or not to brake, under the particular circumstances of an aquaplaning occurrence combined with the intention to disable the CC, for example. Interferences involved opposite actions between the CC and the driver. Thus, I1 corresponded to a potential conflict between rules R8 and R10. The effect of aquaplaning could distort the speed measurement of the CC system, which might then decide to accelerate if the current speed was considered to be lower than the setpoint speed. However, rule R10 specified not accelerating as a means of controlling aquaplaning. Interferences I2, I3 and I4 were related to behavior associated with uphill or downhill routes. Interferences appeared between the CC's actions and the driver's intentions between rules R7 and R13 (i.e. I2), R8 and R14 (i.e. I3), and R7 and R15 (i.e. I4).
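To make these patterns concrete, here is a minimal sketch of an automatic detector, assuming a simplified encoding of some Table 10.2 rules as (rule, actor, action) triples; the string-based negation test and the omission of the action objects are simplifications:

```python
# Simplified encoding of some Table 10.2 rules; objects are omitted for
# brevity and the rule subset is illustrative.
RULES = [
    ("R2",  "driver", "brake"),
    ("R7",  "CC",     "brake"),
    ("R8",  "CC",     "accelerate"),
    ("R9",  "driver", "do not brake"),
    ("R10", "driver", "do not accelerate"),
    ("R13", "driver", "do not brake"),
    ("R14", "driver", "do not accelerate"),
    ("R15", "driver", "do not brake"),
]

def opposed(a1, a2):
    """Two actions are opposed when one is the negation of the other."""
    return a1 == "do not " + a2 or a2 == "do not " + a1

def dissonances(rules):
    """Detect interferences (opposed actions held by different actors)
    and contradictions (opposed actions held by the same actor)."""
    inter, contra = [], []
    for i, (r1, actor1, a1) in enumerate(rules):
        for r2, actor2, a2 in rules[i + 1:]:
            if opposed(a1, a2):
                (inter if actor1 != actor2 else contra).append((r1, r2))
    return inter, contra

interferences, contradictions = dissonances(RULES)
print("Interferences:", interferences)   # includes (R8, R10), i.e. I1
print("Contradictions:", contradictions) # includes (R2, R9), i.e. C1
```

Run on this subset, the sketch recovers the rule pairs behind I1 to I4, as well as the additional R7/R9 conflict discussed below.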



Overall, the 33 surveyed master's degree students fully agreed or mostly agreed with all the suggested dissonances, showing high or average certainty levels in their answers.

The last section involved becoming acquainted with knowledge about deductive, inductive and abductive human reasoning, and adapting it to the implementation of automatic dissonance detection tools stemming from basic rules. Students could rely on the specifications given in [JOU 03, VAN 04, VAN 16a]. The implementation of these systems made it possible to automatically identify the contradictions or interferences in Table 10.3, or to determine new ones. For example, the same reasoning associated with interference I1 persisted when the current speed, initially equal to the setpoint speed, increased while aquaplaning. In that case, the CC system might decide to brake (reducing the engine speed) by applying rule R7, whereas rule R9 did not require braking. The simulation of dissonances deduced from dynamic vehicle and adhesion models, under aquaplaning and CC activation conditions for example, could not be performed due to shortage of time. Therefore, the module had to be adjusted in order to implement this section. However, the module made students aware of the uses of automated systems such as the CC, and of how to determine appropriate specifications for implementing new technical systems capable of acknowledging the risks of common dissonances. In addition, this module made it possible to apply practical knowledge regarding the development of dissonance diagnosis systems and their associated risks.

10.4. Simulation of a collision avoidance system use

In this exercise, the aim was to develop simulation media for detecting the risks associated with specific situations. A group of five master's degree students had been working on this project since 2017 [HAM 18]. It was inspired by the simulation modules implemented on the MissRail® platform (a multi-function, multi-user and multimodal platform designed for railway training and research) [VAN 14b] and on the COR&GEST cyber-physical medium (rail driving and railway supervision) [VAN 12], developed at the University of Valenciennes. Starting from the example in Figure 10.1, a risk analysis was to be carried out, following the basic traffic regulations. Field observation was easily achievable since this junction is located near the University of Valenciennes. Based on the previous problem about the dissonance detection of operational rules and system use, the analysis involved identifying the potential risks associated with predefined scenarios. Once these risks were identified, they had to be analyzed following the parameters of the MissRail® platform of the University of Valenciennes. For example, Figure 10.2 illustrates a simulation example obtained thanks to this platform. It shows the risk of collision between a vehicle and a tram after having respected the green light.

Figure 10.1. Real scenario case study

Figure 10.2. Simulation case study using MissRail®. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip



Traffic density forced the driver to stop in order to avoid a frontal collision with another vehicle. The stop was therefore made on the tramway's tracks. Depending on the waiting time, the traffic light turns red, announcing the imminent arrival of a tram. A collision was then possible between the vehicle immobilized on the rails and the tramway. Another analysis was elicited regarding a vehicle equipped with an ACC-type collision avoidance system in the same circumstances. The simulation displayed the same potential collision scenario. Thus, this exercise made it possible to sensitize students to the design of media for anticipating potential risks in a normal situation, while complying with driving regulations (e.g. complying with the green light), and to the design of tools for helping drivers acknowledge such constraints.

10.5. Eco-driving assistance

The results of this project-based training module were detailed in [HOM 18]. This exercise was integrated into the project of the group of five students discussed in the previous section. It was inspired by the concept of mirror learning, developed in [VAN 16b] for specifying and testing a railway eco-driving system. Unlike computer-assisted human activities, this medium makes it possible to implement human-assisted automation, introduced in [VAN 17b], applying the mirror effect metaphor to the learning process. This represents a new principle of augmented automation, complementing technical support knowledge by taking real human activities into account. Learning algorithms were suggested to the students in order to establish a first state of the art and to determine the algorithms that should be retained or adapted [POL 12, ENJ 17]. A serious game-type simulator was developed and two mirror effects were suggested: the selective mirror, in order to identify what should be kept as knowledge for optimizing energy consumption in real time; and the deforming mirror, in order to deform the common knowledge concerning predefined thresholds after each simulation, so as to discover and retain potential new local optima (Figure 10.3). An interface (a keyboard, a control box, a pedal, etc.) helped the driver to position the manipulator in order to apply traction or braking to the train. The desired position of the manipulator was indicated as advice to follow, as well as the actual position of the manipulator, which was supposed to match the setpoint in order to ensure the optimization of power consumption.
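Since the chapter does not detail the learning algorithm, the following minimal sketch only suggests how the two mirrors could be implemented; all data structures and names (per-segment instructions, consumption records, safety flags) are hypothetical:

```python
# Hypothetical per-segment update for the "selective mirror": after each
# simulated run, the driver's manipulator position becomes the new
# instruction on every segment where it consumed less energy while
# respecting the safety constraints. All names are illustrative.

def selective_mirror(instruction, best_cons, run_setpoint, run_cons, run_safe):
    """Update instructions (lists indexed by track segment) from one run."""
    for seg in range(len(instruction)):
        if run_safe[seg] and run_cons[seg] < best_cons[seg]:
            instruction[seg] = run_setpoint[seg]   # retain the local optimum
            best_cons[seg] = run_cons[seg]
    return instruction, best_cons

def deforming_mirror(thresholds, factor=1.1):
    """Relax predefined thresholds after each simulation so that new
    local optima can be explored and retained."""
    return [t * factor for t in thresholds]

# Example: driver 2 improves on segments 0 and 2 of driver 1's instructions
instr, cons = selective_mirror(
    instruction=[0.4, 0.6, 0.8],          # manipulator positions per segment
    best_cons=[12.0, 9.0, 15.0],          # energy consumed per segment so far
    run_setpoint=[0.3, 0.7, 0.6],
    run_cons=[11.0, 10.0, 13.5],
    run_safe=[True, True, True],
)
print(instr, cons)   # [0.3, 0.6, 0.6] [11.0, 9.0, 13.5]
```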



Figure 10.3. Serious game for rail eco-driving tasks

Safety, regarding the respect of speed limits and acceleration and deceleration bounds, was considered a priority. Respect of the train timetable was integrated into the knowledge optimization process with regard to significant delays. Thus, during the first experiment by driver 1, no instructions were available, and future instructions were progressively calculated and optimized following the subsequent experiments done by the other drivers (Figure 10.4). Six subjects tested the platform. The first one had no instructions, but his behavior made it possible to initialize the instructions for the following experiment. The overall consumption of the second to the sixth subject could thus be improved by taking into account some of the optimal behaviors of each of them. Therefore, for these last five subjects, the middle histogram shows actual consumption. The consumption associated with the advice to be followed during the experiment is shown in the histogram on the left. The histogram on the right corresponds to the new instructions identified by the mirror system; the right histogram of subject N corresponds to the left histogram of subject N + 1. For subject No. 6, actual consumption was high because the subject rarely respected the suggested advice, which hindered the optimization of the initial instructions that would significantly improve consumption for upcoming uses of the eco-driving system.



Figure 10.4. Consumption results of the eco-driving system based on mirror learning. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip

The perspectives of this work aimed to integrate the mirror effect learning module within a complete network involving several trains. The goal would then be to optimize overall power consumption by modulating speed instructions in real time and ensuring that schedules and passenger comfort are respected. The MissRail® platform [VAN 14b] allows for such a configuration of several trains on the same network, generating driving tasks in automated mode or in manual mode.



10.6. Towards support for the innovative design of transport systems

New design ideas for transport systems, such as flying cars [DUC 18, FON 18], supersonic trains [ELH 17, RAY 18] or futuristic bicycles [HER 16, CAC 18], have demonstrated recent technological breakthroughs in the world of transport. In return, autonomous car accidents have reminded us of the important role of human factors in transport safety [GAV 17, LEC 18, ROZ 18]. Finally, in the face of the numerous controversies regarding the study of human factors [VAN 18a], this module proposed a new global approach for designing an innovative transport system, free from technical constraints. This aim is currently being pursued and intends to raise the awareness of learners in search of innovation in transport, with the possibility of conceiving the unthinkable or the impossible. It appeals to the imagination and creativity of students and currently offers two levels of study. The first one concerns the proposal of automated systems for assessing the cognitive state of a human operator. The second one refers to the proposal of an assisted levitation system.

Figure 10.5. Example of parameters for innovative design

Figure 10.5 shows examples of parameters that can be taken into account for the design of transport systems:
– the materials used for the transport system;
– the laws of physics relating to different classical variables such as temperature or pressure;
– smart technologies;
– propulsion methods;



– the behaviors of the gases used or produced;
– human characteristics.

In the context of the introduction of autonomous vehicles, it is often desirable for the driver to remain vigilant and attentive, in order to be able to identify all the possible drifts of automated systems. Different media can then be developed by activating support systems based on a calculated or subjective estimate of the human workload, attention or vigilance, for example. Based on bibliographic references made available to the students, a feasibility study is required for specifying such automated evaluation systems. For example, the idea was to develop a load estimator based on the functional or temporal requirements of the tasks to be carried out [VAN 94, VAN 99b], an attention estimator based on the synchronization of events with the heart rate [SAL 16, VAN 19], or an intention estimator based on water crystals placed near human operators [RAD 06]. The algorithms for the real-time estimation of task requirements are offered to the students. Students could implement or adapt them, and could validate them starting from an experimental protocol involving several subjects in a vehicle driving situation. As regards the attention estimator, students have to conceive a detection system for the synchronization of the occurrence of dynamic events with the heart rate, and then have to study the impact of such synchronization on the risk of making perceptual errors. Finally, the last example, involving the estimation of intention, was based on an original and surprising hypothesis studied in [RAD 06]: it might be possible to determine the positive or negative intentions of a human being from the analysis of distilled water crystals placed nearby. For positive intentions associated with emotions like joy or good mood, the frozen and electronically enlarged water drops show structured geometric shapes, whereas in the presence of negative intentions associated with emotions such as anger or hate, the crystals are destructured. Like barometric statuettes that change color according to the weather, this exercise aims to establish the functional specifications of a real-time system capable of determining the positive or negative state of surrounding intentions thanks to the mechanism of water crystal analysis presented in [RAD 06].

The second level of study intends to apply the metaphor of the flying carpet for assisted levitation. Figure 10.6 gives an example of a possible configuration. The material used may be derived from recent research, such as graphene [TUR 17, BLA 18], and the technology may be assisted by a gas that is lighter than air, such as helium. Drones aid the system's vertical movement or ascent. Depending on the overall weight of the whole set, levitation can be facilitated by managing the gas placed under the cockpit, handled by a human operator from a control panel. The exercise therefore consists of carrying out the technical feasibility study of such a system.
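Returning to the attention estimator, the following is a minimal sketch of how the synchronization of events with the heart rate could be quantified; the 0.3 s post-R-peak window is an illustrative assumption and the sketch does not reproduce the protocols of [SAL 16] or [VAN 19]:

```python
import numpy as np

def synchrony_ratio(r_peaks_s, stimuli_s, window_s=0.3):
    """Proportion of stimulus onsets falling within window_s seconds
    after the preceding R-peak. A ratio close to 1 indicates events
    synchronized with the heartbeat, the situation suspected of
    increasing the risk of perceptual errors."""
    r = np.asarray(r_peaks_s, dtype=float)
    hits = 0
    for t in stimuli_s:
        prev = r[r <= t]
        if prev.size and t - prev[-1] <= window_s:
            hits += 1
    return hits / len(stimuli_s)

# Example: stimuli locked 0.1 s after each beat are fully synchronous
beats = np.arange(0.0, 10.0, 0.8)             # ~75 beats per minute
print(synchrony_ratio(beats, beats + 0.1))    # 1.0
```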


Figure 10.6. Example of the implementation of assisted levitation. Figure labels: platform for 1 to N cooperative drones; control panel for the levitation; storage area for gas lighter than air

10.7. Conclusion

This chapter has offered examples of active pedagogy in which students were actively involved in the training modules. These were oriented towards the design of innovative transport systems. The first two examples referred to two awareness modules regarding the risks of the operation or use of systems, and took place during two master's degree courses at the University of Valenciennes. The first one concerned the analysis of an accident and contributed to the design of collision avoidance systems. The second one was related to the analysis of the use and operation of a support system, by identifying intention conflicts known as dissonances. Dissonance analysis was intended as a contribution to the design of specific knowledge that could be implemented in future driver assistance tools, for example. The third and fourth examples applied the principles of project-based training and were the result of work done by a group of five first-year master's degree students. They were related to the simulation of an accident based on the concept of dissonance, and to the design of support tools based on the principle of human-assisted or human-augmented automation. The last example offered a new approach for the innovative design of a driving support system based on the assessment of the driver's cognitive state, and of transportation systems without imposing any technical constraints.

The first four examples gave satisfactory results and revealed the students' interest in active pedagogy. They enhanced the learners' initiative while favoring knowledge enrichment and collaborative work. These results are important input points for future research on the development of pedagogical skills to be implemented in support systems. They will be exploited within the framework of the CONPETISES regional project (in French, "Contrôle pédagogique de tâches de conduite par systèmes automatisés", or "Pedagogical Control of Driving Tasks by Automated Systems"), financed by the Hauts-de-France region and supported by GIS GRAISyHM (Group of Scientific Interest in Integrated Automation and Human–Machine Systems).

The modules proposed in this chapter not only helped learners become aware of design-related research issues in automated transport systems, but also helped them to identify and assess the cognitive biases associated with their use. For example, they justified the interest of studying important challenges such as the placebo or nocebo effects of automation, as well as the potential opportunities and threats of automated tool use [VAN 18b]. Moreover, compared to the results obtained in [VAN 19], they made it possible to debate a new principle to be applied during the design, analysis or assessment of socio-technical systems: what the user is looking at does not necessarily match what they have in mind. Finally, they highlighted other concepts considering the analysis or evaluation of human activities as a dynamic support for automated systems. Thus, automation can be assisted in real time by human decisions and, conversely, these decisions can be optimized using technical systems [VAN 17b]. This requires in-depth studies on the symbiosis between humans and machines in socio-technical systems, based on technical, human, organizational, environmental, social, legal or ethical criteria.

10.8. References

[BEA 05] BUREAU D'ENQUETES SUR LES ACCIDENTS DE TRANSPORTS TERRESTRES, Rapport d'enquête technique sur la collision survenue le 9 juin 2005 au passage à niveau 83 à Saint Laurent Blangy (62), Report no. BEATT-2005-007, 2005.

[BLA 18] BLANCHOT V., "Le MIT a développé une technique pour produire du graphène à grande échelle", Siècle Digital, 24 April 2018, available at: https://siecledigital.fr/2018/04/24/le-mit-a-developpe-une-technique-pour-produire-du-graphene-a-grande-echelle/.

[CAC 18] CACHON D., Pédalez dans le turfu, Redbull.com #VTT, 1 March 2018, available at: https://www.redbull.com/fr-fr/velos-futur-cyclotron-audi-ebike-yikebike.

[DRO 18] DROUILLET L., STEFANIAK N., DECLERCQ C. et al., "Role of implicit learning abilities in metaphor understanding", Consciousness and Cognition, vol. 61, pp. 13–23, 2018.

[DUC 18] DUCAMP P., "Une voiture volante présentée au salon automobile de Genève", BFMTV, 8 March 2018, available at: https://auto.bfmtv.com/actualite/une-voiture-volante-presentee-au-salon-automobile-de-geneve-1390572.html.


[ELH 17] EL HASSANI J., "Notre Hyperloop parcourra son premier kilomètre à Toulouse en septembre 2018", Journal du Net, 5 December 2017, available at: https://www.journaldunet.com/economie/transport/1205744-hyperloop-transportation-technologies-toulouse-2018/.

[ENJ 17] ENJALBERT S., VANDERHAEGEN F., "A hybrid reinforced learning system to estimate resilience indicators", Engineering Applications of Artificial Intelligence, vol. 64, pp. 295–301, 2017.

[FON 18] FONTAINE E., "Les voitures volantes : où en est-on ?", Les Numériques, 22 March 2018, available at: https://www.lesnumeriques.com/voiture/voitures-volantes-en-est-on-a3631.html.

[GAV 17] GAVOIS S., "Accident mortel : le NTSB pointe du doigt le pilote automatique de Tesla", Next INpact, 18 September 2017, available at: https://www.nextinpact.com/news/105160-accident-mortel-ntsb-pointe-doigt-pilote-automatique-tesla.htm.

[GEO 11] GEORGE P., THOUVENIN I., FREMONT V. et al., "Réalité augmentée pour l'aide à la conduite intégrant l'observation du conducteur", 6e Journées de l'AFRV, Biarritz, France, October 2011.

[GOO 05] GOODNOUGH K., "Issues in modified problem-based learning: a self-study in pre-service science-teacher education", Canadian Journal of Science, Mathematics and Technology Education, vol. 5, no. 3, pp. 289–306, 2005.

[GUI 17] GUILBAULT M., VIAU-GUAY A., "La classe inversée comme approche pédagogique en enseignement supérieur : état des connaissances scientifiques et recommandations", Revue internationale de pédagogie de l'enseignement supérieur, vol. 33, no. 1, 2017, available at: http://ripes.revues.org/1193.

[HAM 18] HAMANI L., WOJAK P., DPSENCE D. et al., "Outils numériques pour la pédagogie innovante dans les transports", 21e congrès de maîtrise des risques et de sûreté de fonctionnement (Lambda Mu 21), Reims, France, 16–18 October 2018.

[HER 16] HERTEL O., "Cyclotron bike : le vélo sans rayons et avec boîte auto", Sciences et Avenir, 17 July 2016, available at: https://www.sciencesetavenir.fr/high-tech/transports/cyclotron-bike-le-velo-sans-rayons-et-avec-boite-auto_103834.

[HOM 18] HOMBERT L., SION S., LA DELFA S. et al., "Contrôle mutuel pour l'aide à l'écoconduite sûre et ponctuelle en simulation ferroviaire", 21e congrès de maîtrise des risques et de sûreté de fonctionnement (Lambda Mu 21), Reims, France, 16–18 October 2018.

[JOU 03] JOUGLET D., PIECHOWIAK S., VANDERHAEGEN F., "A shared workspace to support man-machine reasoning: application to cooperative distant diagnosis", Cognition, Technology and Work, vol. 5, pp. 127–139, 2003.

[LEB 07] LEBRUN M., "Quelques méthodes pédagogiques actives", in LEBRUN M. (ed.), Théories et méthodes pédagogiques pour enseigner et apprendre. Quelle place pour les TIC dans l'éducation ?, pp. 123–168, De Boeck, Louvain-la-Neuve, 2007.


[LEC 18] LECOMTE E., "Accident mortel de Uber : faut-il avoir peur de la voiture autonome ?", Sciences et Avenir, 22 March 2018, available at: https://www.sciencesetavenir.fr/high-tech/transports/accident-mortel-de-uber-faut-il-avoir-peur-de-la-voiture-autonome_122265.

[LYN 17] LYNCH H.J., FISHER-ARI R.T., "Metaphor as pedagogy in teacher education", Teaching and Teacher Education, vol. 66, pp. 195–203, 2017.

[PHA 16] PHAN M.T., Estimation of driver awareness of pedestrian for an augmented reality advanced driving assistance system, PhD thesis, University of Technology of Compiègne, 2016.

[POL 12] POLET P., VANDERHAEGEN F., ZIEBA S., "Iterative learning control based tools to learn from human error", Engineering Applications of Artificial Intelligence, vol. 25, no. 7, pp. 1515–1522, 2012.

[QIU 17] QIU S., RACHEDI N., SALLAK M. et al., "A quantitative model for the risk evaluation of driver-ADAS systems under uncertainty", Reliability Engineering & System Safety, vol. 167, pp. 184–191, 2017.

[RAD 06] RADIN D., HAYSSEN G., EMOTO M. et al., "Double-blind test of the effects of distant intention on water crystal formation", EXPLORE: The Journal of Science and Healing, vol. 2, no. 5, pp. 408–411, 2006.

[RAN 17] RANGRA S., SALLAK M., SCHÖN W. et al., "A graphical model based on performance shaping factors for assessing human reliability", IEEE Transactions on Reliability, vol. 66, no. 4, pp. 1120–1143, 2017.

[RAY 18] RAYNAUD C., HAUSSY M., "Exclusif : les premiers tubes de la piste d'essais de l'Hyperloop sont arrivés à Toulouse", La Dépêche, 11 April 2018, available at: https://www.ladepeche.fr/article/2018/04/11/2778158-exclusif-premiers-tubes-piste-essais-hyperloop-sont-arrives-toulouse.html.

[ROZ 18] ROZIERES G., "Crash mortel d'une Tesla en AutoPilot : ce qu'il s'est probablement passé lors de l'accident", Le HuffPost, 3 April 2018, available at: https://www.huffingtonpost.fr/2018/04/03/crash-mortel-dune-tesla-en-autopilot-ce-quil-sest-probablement-passe-lors-de-laccident_a_23401452/.

[SAL 16] SALOMON R., RONCHI R., DÖNZ J. et al., "The insula mediates access to awareness of visual stimuli presented synchronously to the heartbeat", Journal of Neuroscience, vol. 36, no. 18, pp. 5115–5127, 2016.

[THI 17] THIBODEAU P.H., HENDRICKS R.K., BORODITSKY L., "How linguistic metaphor scaffolds reasoning", Trends in Cognitive Sciences, vol. 21, no. 11, pp. 852–863, 2017.

[TUR 17] TURPIN A., "Le graphène, ce matériau révolutionnaire qui pourrait nous fournir une énergie propre et infinie", Capital, 6 December 2017, available at: https://www.capital.fr/economie-politique/le-graphene-ce-materiau-revolutionnaire-qui-pourrait-nous-fournir-une-energie-propre-et-infinie-1259271.


[VAN 94] VANDERHAEGEN F., CRÉVITS I., DEBERNARD S. et al., "Human-machine cooperation: toward an activity regulation assistance for different air traffic control levels", International Journal of Human–Computer Interaction, vol. 6, no. 1, pp. 65–104, 1994.

[VAN 99a] VANDERHAEGEN F., "Toward a model of unreliability to study error prevention supports", Interacting with Computers, vol. 11, pp. 575–595, 1999.

[VAN 99b] VANDERHAEGEN F., "Multilevel allocation modes – allocator control policies to share tasks between human and computer", System Analysis Modelling Simulation, vol. 35, pp. 191–213, 1999.

[VAN 03] VANDERHAEGEN F., Analyse et contrôle de l'erreur humaine, Hermès-Lavoisier, Paris, 2003.

[VAN 04] VANDERHAEGEN F., JOUGLET D., PIECHOWIAK S., "Human-reliability analysis of cooperative redundancy to support diagnosis", IEEE Transactions on Reliability, vol. 53, pp. 458–464, 2004.

[VAN 12] VANDERHAEGEN F., "Rail simulations to study human reliability", in WILSON J.R., MILLS A., CLARKE T. et al. (eds), Rail Human Factors Around the World – Impacts on and of People for Successful Rail Operations, pp. 126–131, Taylor & Francis, London, 2012.

[VAN 14a] VANDERHAEGEN F., "Dissonance engineering: a new challenge to analyse risky knowledge when using a system", International Journal of Computers Communications & Control, vol. 9, no. 6, pp. 750–759, 2014.

[VAN 14b] VANDERHAEGEN F., RICHARD P., "MissRail: a platform dedicated to training and research in railway systems", Proceedings of the International Conference HCII, pp. 544–549, Heraklion, Greece, 22–27 June 2014.

[VAN 16a] VANDERHAEGEN F., "A rule-based support system for dissonance discovery and control applied to car driving", Expert Systems with Applications, vol. 65, pp. 361–371, 2016.

[VAN 16b] VANDERHAEGEN F., "Mirror effect based learning systems to predict human errors – application to the Air Traffic Control", IFAC-PapersOnLine, vol. 49, no. 19, pp. 295–300, 2016.

[VAN 17a] VANDERHAEGEN F., CARSTEN O., "Can dissonance engineering improve risk analysis of human-machine systems?", Cognition, Technology & Work, vol. 19, no. 1, pp. 1–12, 2017.

[VAN 17b] VANDERHAEGEN F., "Toward increased systems resilience: new challenges based on dissonance control for human reliability in Cyber-Physical & Human Systems", Annual Reviews in Control, vol. 44, pp. 316–322, 2017.

[VAN 18a] VANDERHAEGEN F., JIMENEZ V., "The amazing human factors and their dissonances for autonomous Cyber-Physical & Human Systems", 1st IEEE Conference on Industrial Cyber-Physical Systems, Saint Petersburg, Russia, 14–18 May 2018.


[VAN 18b] VANDERHAEGEN F., "Dissonances d'usages, opportunités et menaces : vers une démarche d'ingénierie cognitive de leur analyse", Conférence ERGO-IA, Biarritz, France, 3–5 October 2018.

[VAN 19] VANDERHAEGEN F., WOLFF M., MOLLARD R., "Synchronization of stimuli with heart rate: a new challenge to control attentional dissonances", in VANDERHAEGEN F., MAAOUI C., SALLAK M. et al. (eds), Automation Challenges of Socio-technical Systems, ISTE Ltd, London and John Wiley & Sons, New York, 2019.

[WAL 17] WALTERS B., POTETZ J., HEATHER N., "Simulations in the classroom: an innovative active learning experience", Clinical Simulation in Nursing, vol. 13, no. 12, pp. 609–615, 2017.

Conclusion

Conclusion written by Frédéric VANDERHAEGEN, Choubeila MAAOUI, Mohamed SALLAK and Denis BERDJAG.

For or against the automation of socio-technical systems? The debate is not settled, because it depends heavily on the assumptions that each side advocates. This book has nevertheless offered a relevant inventory of the different points of view regarding automation.

First of all, as human capabilities are limited, the study of the perceptive senses is essential for identifying the optimal or failure-prone contexts of interaction between humans and machines. Shared control, or authority sharing, makes it possible to combine human and technical skills in order to optimize the performance of a system and to help every decision-maker understand the current situation and how it is being handled. Technical interaction supports must therefore be defined to guarantee this facilitation regardless of the operational context. To achieve such a goal, behavioral or decisional models are needed, for instance to identify and deal with technical failures or human errors, or to optimize decision support. This book has offered a few of them. Finally, the last chapters of this book offered different examples of innovation, in cooperative crisis management configurations or during the collective performance of pedagogical exercises.

Will the machine replace humans and impose decisions on them without their being able to take part, or with the possibility of interacting but only under certain conditions? Regardless of the field of application, the degrees of automation to be implemented recommend human actions in the case of failure of the technical system. This assumes that human actors remain vigilant and attentive throughout the duration of an operation, or that their attention can be solicited by the activation of adequate alarms. Moreover, even though system automation can be justified by the number of accidents caused by human errors, no statistics exist on the rate of accidents or disasters avoided thanks to human intervention!
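The vigilance assumption in the previous paragraph can be sketched as a simple escalation policy: if the operator does not respond to a discreet cue within some time budget, the solicitation escalates towards a fall-back maneuver. The listing below is an illustrative sketch only; the stages, alarm channels, time budgets and safe-stop behavior are assumptions, not recommendations taken from the book.

# Illustrative sketch (not taken from the book): escalating attention
# solicitation when the automation detects a failure and needs the human
# operator to take over. Stages, channels and time budgets are assumed.

from enum import Enum


class Stage(Enum):
    VISUAL = 1     # discreet visual cue on the interface
    AUDITORY = 2   # audible alarm
    HAPTIC = 3     # seat or wheel vibration
    SAFE_STOP = 4  # automation falls back to a minimal-risk manoeuvre


# Seconds to wait at each stage before escalating (assumed values).
ESCALATION_AFTER_S = {Stage.VISUAL: 4.0, Stage.AUDITORY: 3.0, Stage.HAPTIC: 3.0}


def next_stage(stage: Stage, seconds_without_response: float) -> Stage:
    """Escalate while the operator has not responded within the stage budget."""
    budget = ESCALATION_AFTER_S.get(stage)
    if budget is not None and seconds_without_response >= budget:
        return Stage(stage.value + 1)
    return stage


# Example: 12 s without any operator response walks through every channel.
stage, waited = Stage.VISUAL, 0.0
for _ in range(12):  # one tick per second
    waited += 1.0
    escalated = next_stage(stage, waited)
    if escalated is not stage:
        stage, waited = escalated, 0.0  # reset the clock at each new stage
    if stage is Stage.SAFE_STOP:
        break
print(stage)  # -> Stage.SAFE_STOP after 4 + 3 + 3 s without response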


Therefore, it is the position and role of humans in socio-technical systems that remains one of the central points of discussion and provokes divergences: their capacity to innovate and create new systems, to cooperate with them, to use them, and to modify or adapt them. While these capacities are variable and can be temporarily or permanently limited, automation makes it possible to augment them. The concept of the augmented human remains a challenge for accommodating such variability. Nevertheless, it would be a mistake to design technical systems without taking advantage of human intelligence. Thus, the concept of augmented or assisted automation could be developed in order to perfect the symbiosis between humans and machines, without implying alienation or servitude.

The imagination and creativity of human operators sometimes enable them to define new, unforeseen uses or to divert technical systems from their initial functions. It would be interesting to think about the characteristics and implementation of this innovation capability in machines, and to wonder whether they could, by themselves, invent new uses, create new modes of interaction, or even make scientific discoveries!

This book has shown how difficult a symbiosis between humans and machines is to achieve and to sustain in the short, medium or long term. It has opened interesting perspectives for achieving such a fusion, which requires skills related to the cognitive, engineering and social sciences.

List of Authors

Saïd Moh AHMAED
IFSTTAR
Villeneuve d’Ascq
France

Cédric BACH
Human Design Group
Toulouse
France

Denis BERDJAG
Université Polytechnique Hauts-de-France
Valenciennes
France

Sonja BIEDE
Airbus Operations
Toulouse
France

Fabien BOUFFARON
Airbus Defence & Space
Toulouse
France

Mohamed Riad BOUKHARI
Institut VEDECOM
Versailles
France

Moussa BOUKHNIFER
ESTACA
Saint-Quentin-en-Yvelines
France

Ahmed CHAIBET
ESTACA
Saint-Quentin-en-Yvelines
France

Christine CHAUVIN
Université Bretagne Sud
Lorient
France

Serge DEBERNARD
Université Polytechnique Hauts-de-France
Valenciennes
France

Jean-Marc DUPONT
Université de Lorraine
Vandœuvre-lès-Nancy
France

Sébastien GLASER
Queensland University of Technology
Brisbane
Australia

Saïd HAYAT
IFSTTAR
Villeneuve d’Ascq
France

Ludovic KOEHL
ENSAIT
Roubaix
France

Sabine LANGLOIS
Technocentre Renault
Guyancourt
France

Romain LIEBER
Airbus
Toulouse
France

Choubeila MAAOUI
Université de Lorraine
Metz
France

Frédérique MAYER
Université de Lorraine
Vandœuvre-lès-Nancy
France

Régis MOLLARD
Paris Descartes University
France

Gérard MOREL
Université de Lorraine
Vandœuvre-lès-Nancy
France

Marie-Pierre PACAUX-LEMOINE
Université Polytechnique Hauts-de-France
Valenciennes
France

Raïssa POKAM MEGUIA
Université Polytechnique Hauts-de-France
Valenciennes
France

Subeer RANGRA
Université de technologie de Compiègne
France

Mohamed SALLAK
Université de technologie de Compiègne
France

Walter SCHÖN
Université de technologie de Compiègne
France

Guillaume TARTARE
ENSAIT
Roubaix
France

Frédéric VANDERHAEGEN
Université Polytechnique Hauts-de-France
Valenciennes
France

Marion WOLFF
Paris Descartes University
France

Xianyi ZENG
ENSAIT
Roubaix
France

Index

A, C, D

accident, 3–5, 92, 94, 112, 124, 153–155, 180–183, 186, 195, 196, 205, 206, 209, 211–213, 220, 224, 261, 263, 308, 309, 321, 327
accountability, 85, 86, 88–92, 95–97, 100, 103, 104, 107, 108
active pedagogy, 307, 308, 321
aeronautics, see also pilot, air traffic controller, 11, 12, 85, 87, 90, 92, 100, 158
air traffic
  controller, 87, 90, 94, 103, 105
  management (ATM), 83–91, 108
analysis of physiological data, 304
attention, 4–11, 91, 118, 120, 127, 182, 183, 213, 220, 281, 320, 327
authority, 83, 84–108, 327
automation
  assisted, 308, 316, 321
  augmented, 316, 328
autonomous vehicle, 115, 118, 120–146, 156–158, 167, 169, 173, 176
autonomy, 6, 85, 91, 117, 121
cognition, 45, 199
cognitive conflict, 4, 7, 8
collaborative work, see also cooperation, 32, 41, 68, 73, 321
contradiction, 145, 313
control, 37, 63, 94, 156–159, 167, 233, 263, 264, 267, 268, 292, 301, 308, 316, 320
  fault-tolerant, 157–159, 167
cooperation, see also collaborative work, 6, 84, 89, 92, 98, 104, 117, 118, 119, 124, 132, 179, 289, 290, 299–301, 303, 304
  human–robot, 303
co-simulation, 44, 69, 70–72
decision support, 233, 234, 238, 239, 241, 243, 244, 250, 258, 259, 290
design
  innovative, 308, 319, 321
  of support systems, 309
  of systems, 4, 23, 281, 319, 321
dissonance, 3–9, 11, 312, 314, 321
  attentional, see also attention, vigilance, 4, 7–9, 11
driver assistance, 155, 321


E, H, I

effect
  mirror, 308, 316–318
  tunnel, see also inattentional blindness, 8–11
ergonomy, 52, 104
expert, 182, 184–186, 190, 198–201, 214, 215, 222, 240
eye tracker, 274, 275
heart rate, 10, 11, 12, 15, 23, 287, 292–295, 297, 299, 320
  variability, 296
human
  error, see also human reliability, 4, 5, 9, 179, 193, 195, 209, 220, 222, 224
  factor(s), 175, 222, 261
  –machine interface, 269
  reliability, 4, 179, 182, 185–187, 195, 198, 201, 206, 222, 224
  stability, 261, 262, 265, 266, 268
inattentional blindness, 8–11
industrialization, 68
Industry 4.0, 32, 73
innovation, 307, 308, 319, 327, 328
integrative physiology, 30, 36, 52
intention, 6, 31, 34, 45, 61, 116, 117, 144, 311, 313, 320, 321
interference, 314

K, L, M

knowledge, 29, 30, 32–34, 38–44, 46–52, 55–57, 59, 61, 68, 69, 73, 116, 130, 179, 189, 190, 234, 235, 240–242, 259, 275, 280, 318, 321
learning, 6, 11, 14, 73, 137, 289, 307, 308, 311, 316
model
  behavioral, 284
  graphic, 179, 207

O, P, R

opportunity, see also threat, 32, 62, 287
perception, 4, 6, 9, 22, 24, 29–31, 35–38, 46, 48, 50–52, 54, 55, 57–59, 61, 65, 69, 72, 112, 113, 124, 130, 134, 135, 140, 156, 194, 290, 320
  /action, 30, 52, 54
performance, 4, 9, 14, 15, 17, 18, 23, 83, 84, 86, 89, 91, 101, 102, 104, 106, 108, 119, 126, 156, 179, 180–182, 186, 193, 195, 199, 243, 289, 297–299, 301, 303
performance shaping factor (PSF), 4, 11, 179, 193, 194, 180–183, 186, 187, 192–200, 202–209, 211–214, 217, 220–222, 224
physico-physiological interaction, 30, 31, 50, 52, 54, 61, 69
pilot, 91, 94, 101, 104–106, 107
PRELUDE, 186, 187, 193, 197, 201, 205, 209, 211, 220–224
railway sector, 182, 183, 193, 214, 215
regulation
  of human activity, 262
  of inter-vehicle distance, 157, 169
  of transport systems, 233
  strategy, 246
reliability, 4, 103, 157, 158, 162, 164, 167, 170, 173, 179–181, 185–188, 195, 198, 201, 206, 214, 222, 224, 235, 241, 244, 259, 288, 304
resilience, 29, 180, 261–263, 266–268
responsibility, 83–92, 94–96, 98, 100, 101, 103–108, 127, 266

S, T, V

Self-Assessment Manikin (SAM), 9, 12, 14, 15, 22, 23
SESAR program, 84, 86, 88, 90
simulation, 30, 38, 48, 68–70, 72, 73, 137, 157, 167, 234, 241, 258, 271, 273, 308, 311, 314, 315, 316, 321
situational awareness, 112–114, 127, 134, 138–141, 145
stabilizability, 262, 267, 268, 281
supervision, 90, 103, 107, 112, 118, 125, 153, 269, 271, 273, 281, 282, 299–303, 310, 314
symbiosis, 73, 322, 328
system(s)
  engineering, 30, 33, 38, 61
  human–machine, 261
  of systems, 87
  situation system, 29–32, 37–40, 42, 44, 48–51, 54, 57, 58, 61, 68, 69, 72
  socio-technical, 29, 83, 86, 88, 99
  transport, 233, 234, 261, 319, 321
systemic, 83, 86, 88, 100, 104, 108
Task Load Index (TLX), see also task load, 9, 12, 14, 15, 17, 18, 23
task/work load, time load, stress, 4–7, 9, 10, 22, 125, 194, 197, 207, 208, 266, 320
theory
  belief function, 181, 187, 188, 190, 191, 213, 219, 222
  fuzzy subsets, 235, 239, 244, 259
threat, 117, 322
transparency, 111–119, 125, 132–134, 141, 144–146, 181
transport, 53, 83, 111, 126, 175, 181, 193, 197, 200, 233–241, 245–247, 249, 250, 258, 261, 262, 269, 271, 308, 319, 321
validation, 30, 42, 44, 49, 61, 65, 67–70, 72, 73, 118, 138, 183–186, 224, 258, 313
verification, 12, 42, 47, 66, 69, 220, 292
vigilance, 5, 7, 9–11, 320
visual analog scale (VAS), 9, 12, 14, 15, 18, 20, 23
voting algorithms, 157–159, 162, 170, 173, 175
